NetSuite’s N/llm module is a native SuiteScript 2.1 module that lets you call generative AI models directly from within your NetSuite scripts: no external APIs, no middleware, no extra authentication. Whether you want to summarize records, classify data, generate text, or build embeddings for semantic search, N/llm gives you a first-class path to do it inside the NetSuite platform.
This guide covers everything a NetSuite developer needs to know: how the module works, how to prompt AI models, how to handle responses, manage token usage, generate embeddings, and build real-world AI-powered automations in SuiteScript 2.1.
What Is the N/llm Module?
The N/llm module (Large Language Model module) is part of NetSuite’s SuiteScript 2.1 API. It provides programmatic access to generative AI capabilities from within server-side scripts such as Scheduled Scripts, Map/Reduce scripts, RESTlets, Suitelets, and User Event scripts.
The module abstracts the underlying AI model provider so you can focus on writing prompts and processing responses rather than managing authentication or HTTP calls. NetSuite handles the integration with the AI back end on your behalf.
Prerequisites
- NetSuite account with SuiteScript 2.1 enabled
- N/llm feature enabled for your account (check with your NetSuite administrator)
- Appropriate script permissions and script deployment
- Understanding of SuiteScript 2.1 module syntax (define/require)
Loading the N/llm Module
Like any SuiteScript module, you load N/llm using the standard define syntax at the top of your script file:
/**
* @NApiVersion 2.1
* @NScriptType ScheduledScript
*/
define(['N/llm'], (llm) => {
    const execute = (context) => {
        // Your AI logic here
    };
    return { execute };
});
Making Your First AI Call with llm.generateText()
The primary function in the N/llm module is llm.generateText(). It sends a prompt to the configured AI model and returns a generated text response.
Basic Syntax
const response = llm.generateText({
    prompt: 'Summarize the following customer complaint in one sentence: ' + complaintText
});
log.debug('AI Response', response.text);
Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| prompt | String | Yes | The text prompt to send to the model |
| model | String / llm.Model | No | The AI model to use (defaults to the account’s configured model) |
| systemPrompt | String | No | System-level instructions that shape model behavior |
| maxTokens | Number | No | Maximum number of tokens in the response |
| temperature | Number | No | Controls randomness (0 = deterministic, 1 = more creative) |
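As a convenience, the options object can be assembled by a small helper before the call. This is a plain-JavaScript sketch, not part of N/llm; the option names simply mirror the table above, so confirm them against the N/llm documentation for your NetSuite release.

```javascript
// Sketch: assemble a generateText() options object using the parameters
// from the table above, with sensible defaults for bulk work. Option
// names follow this article's examples; verify them for your release.
const buildTextOptions = (prompt, overrides) => {
    const defaults = {
        maxTokens: 256,    // cap response length
        temperature: 0.2   // mostly deterministic output
    };
    return Object.assign({ prompt }, defaults, overrides || {});
};

// Example: a classification call that forces fully deterministic output
const options = buildTextOptions('Classify this ticket: printer broken', {
    systemPrompt: 'Reply with a single category name.',
    temperature: 0
});
// options can now be passed to llm.generateText(options)
```

Centralizing defaults this way keeps token caps and temperature consistent across every call site in a large script.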
Response Object
The generateText() function returns a response object with these key properties:
- text – The generated text from the model
- usage.inputTokens – Number of tokens in your prompt
- usage.outputTokens – Number of tokens in the response
- usage.totalTokens – Combined token count
- finishReason – Why the model stopped generating (e.g., stop, max_tokens)
Using System Prompts
A system prompt sets the behavior or persona of the AI model before it processes your user prompt. This is useful for ensuring consistent output format, tone, or focus area.
const response = llm.generateText({
    systemPrompt: 'You are a NetSuite ERP assistant. Respond only with structured JSON. Do not include explanations.',
    prompt: 'Classify the following support ticket into one of these categories: Billing, Technical, Training, Other.\nTicket: ' + ticketBody
});
const classification = JSON.parse(response.text);
log.debug('Category', classification.category);
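Even with a strict system prompt, JSON.parse can throw if the model wraps its answer in Markdown fences or adds stray words. A defensive parsing helper is cheap insurance; this is a plain-JavaScript sketch, not an N/llm API.

```javascript
// Sketch: defensively extract JSON from a model response. Models sometimes
// add text around the JSON despite instructions, so parse the first {...}
// span rather than the raw response, and return null on failure.
const parseModelJson = (text) => {
    const start = text.indexOf('{');
    const end = text.lastIndexOf('}');
    if (start === -1 || end === -1 || end <= start) {
        return null; // no JSON object found
    }
    try {
        return JSON.parse(text.slice(start, end + 1));
    } catch (e) {
        return null; // malformed JSON
    }
};

// Example: tolerate a chatty response around the JSON payload
const parsed = parseModelJson('Sure! {"category": "Billing"}');
```

A null return lets the calling script fall back to a default category or route the record for manual review instead of throwing.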
Selecting an AI Model
The N/llm module exposes a Model enum that lets you select a specific model. You can also use the default configured model for your account by omitting the model parameter.
// Use a specific model from the enum
const response = llm.generateText({
    model: llm.Model.COHERE_COMMAND_R,
    prompt: 'What are the top 3 benefits of NetSuite SuiteAnalytics?'
});
log.debug('Response', response.text);
Check the official NetSuite documentation for the currently available enum values and exact option names. NetSuite’s generative AI is delivered through Oracle Cloud Infrastructure (OCI) Generative AI, so the available models are Cohere and Meta Llama families, and the supported list may change with new NetSuite releases.
Managing Tokens and Costs
Token usage is important to monitor, both for cost control and for staying within model context limits. Every call to generateText() returns usage data you can log or store.
const response = llm.generateText({
    prompt: 'Write a brief product description for: ' + productName,
    maxTokens: 200
});
log.debug('Token Usage', JSON.stringify({
    input: response.usage.inputTokens,
    output: response.usage.outputTokens,
    total: response.usage.totalTokens
}));
log.debug('Finish Reason', response.finishReason);
Setting maxTokens is a good practice to cap response length and avoid runaway token consumption, especially in loops or bulk processing scenarios.
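In bulk scenarios it also helps to total usage across many calls. The accumulator below is a sketch that assumes the usage shape described above (inputTokens, outputTokens, totalTokens); the stubbed objects stand in for real response.usage values.

```javascript
// Sketch: accumulate token usage across multiple generateText() calls so a
// bulk script can log one total at the end instead of one line per call.
const createUsageTracker = () => {
    const totals = { input: 0, output: 0, total: 0 };
    return {
        record: (usage) => {
            totals.input += usage.inputTokens;
            totals.output += usage.outputTokens;
            totals.total += usage.totalTokens;
        },
        totals: () => ({ ...totals }) // return a copy, not the live object
    };
};

// Example with stubbed usage data (a real script would pass response.usage)
const tracker = createUsageTracker();
tracker.record({ inputTokens: 50, outputTokens: 120, totalTokens: 170 });
tracker.record({ inputTokens: 40, outputTokens: 80, totalTokens: 120 });
```

Logging the totals once per script run keeps the execution log readable and gives you a per-run cost figure to trend over time.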
Generating Embeddings with llm.generateEmbedding()
In addition to text generation, the N/llm module supports embeddings: numeric vector representations of text that can be used for semantic similarity, search, classification, and clustering.
Basic Embedding Call
const embeddingResult = llm.generateEmbedding({
    text: 'Customer complained about delayed shipment and incorrect invoice amount.'
});
const vector = embeddingResult.embedding;
log.debug('Embedding dimensions', vector.length);
The returned embedding is a JavaScript array of floating-point numbers. You can store this vector in a custom record field or use it to compute cosine similarity between records. As with text generation, verify the exact method and option names against the N/llm documentation for your NetSuite release.
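N/llm does not ship a similarity helper, so you compute cosine similarity yourself. A minimal plain-JavaScript sketch:

```javascript
// Sketch: cosine similarity between two embedding vectors of equal length.
// A score near 1 means the texts are semantically similar; near 0, unrelated.
const cosineSimilarity = (a, b) => {
    let dot = 0;
    let normA = 0;
    let normB = 0;
    for (let i = 0; i < a.length; i++) {
        dot += a[i] * b[i];
        normA += a[i] * a[i];
        normB += b[i] * b[i];
    }
    return dot / (Math.sqrt(normA) * Math.sqrt(normB));
};

// Example: identical vectors score ~1, orthogonal vectors score 0
const same = cosineSimilarity([1, 2, 3], [1, 2, 3]);   // ≈ 1
const orthogonal = cosineSimilarity([1, 0], [0, 1]);   // 0
```

Because cosine similarity ignores vector magnitude, it works well for comparing embeddings of texts of different lengths.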
Use Case: Semantic Search
A common use case for embeddings in NetSuite is building a semantic search engine over support cases, knowledge base articles, or item descriptions. You pre-compute embeddings for each record and store them, then at query time generate an embedding for the search query and find the closest matches by cosine similarity.
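The query-time step can be sketched as follows. The record list and its vectors are illustrative stand-ins; in practice you would load stored embeddings from custom record fields. The cosine helper is repeated so the snippet stands alone.

```javascript
// Cosine similarity helper (same formula as in the embeddings section)
const cosine = (a, b) => {
    let dot = 0, na = 0, nb = 0;
    for (let i = 0; i < a.length; i++) {
        dot += a[i] * b[i];
        na += a[i] * a[i];
        nb += b[i] * b[i];
    }
    return dot / (Math.sqrt(na) * Math.sqrt(nb));
};

// Sketch: rank stored records by similarity to the query embedding and
// keep the k best matches.
const topMatches = (queryVector, records, k) => {
    return records
        .map((r) => ({ id: r.id, score: cosine(queryVector, r.vector) }))
        .sort((a, b) => b.score - a.score)
        .slice(0, k);
};

// Example with toy 2-dimensional "embeddings"
const results = topMatches([1, 0], [
    { id: 'case_1', vector: [0.9, 0.1] },
    { id: 'case_2', vector: [0, 1] },
    { id: 'case_3', vector: [0.5, 0.5] }
], 2);
```

For a few thousand records a linear scan like this is fine inside a Suitelet; beyond that, pre-filtering candidates with a saved search keeps response times reasonable.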
Real-World Use Cases
1. Automatic Case Summarization
Summarize long support case descriptions into a single sentence when a case is saved:
/**
* @NApiVersion 2.1
* @NScriptType UserEventScript
*/
define(['N/llm', 'N/record'], (llm, record) => {
    const afterSubmit = (context) => {
        if (context.type !== context.UserEventType.CREATE) return;

        const caseRecord = context.newRecord;
        const description = caseRecord.getValue({ fieldId: 'custevent_case_description' });
        if (!description) return;

        const response = llm.generateText({
            systemPrompt: 'You summarize support cases in exactly one sentence.',
            prompt: description,
            maxTokens: 80
        });

        record.submitFields({
            type: record.Type.SUPPORT_CASE,
            id: caseRecord.id,
            values: { custevent_ai_summary: response.text }
        });
    };
    return { afterSubmit };
});
2. Invoice Line Classification
Classify transaction line items into GL categories using AI to reduce manual coding errors:
const classifyLineItem = (itemDescription) => {
    const response = llm.generateText({
        systemPrompt: 'You classify expense line items into one of these GL categories: Travel, Software, Office Supplies, Marketing, Professional Services. Reply with only the category name.',
        prompt: itemDescription,
        maxTokens: 20,
        temperature: 0
    });
    return response.text.trim();
};
log.debug('GL Category', classifyLineItem('Adobe Creative Cloud annual subscription'));
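Even at temperature 0, a model can occasionally return something outside the allowed list, so it is worth validating before writing to a transaction. A plain-JavaScript sketch (normalizeCategory is a hypothetical helper, not an N/llm API):

```javascript
// Sketch: validate a model's classification against the allowed GL
// categories, case-insensitively, and fall back to 'Other' rather than
// writing unexpected text to the GL.
const GL_CATEGORIES = ['Travel', 'Software', 'Office Supplies', 'Marketing', 'Professional Services'];

const normalizeCategory = (raw, allowed) => {
    const cleaned = raw.trim().toLowerCase();
    const match = allowed.find((c) => c.toLowerCase() === cleaned);
    return match || 'Other';
};

// Examples: case-insensitive matches pass; chatty answers fall back
const a = normalizeCategory('software', GL_CATEGORIES);              // 'Software'
const b = normalizeCategory('I think it is Travel.', GL_CATEGORIES); // 'Other'
```

Falling back to a catch-all category keeps the automation safe: a human reviews the 'Other' bucket instead of bad data landing silently in the ledger.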
3. Bulk Record Enrichment (Map/Reduce)
Use a Map/Reduce script to enrich thousands of item records with AI-generated descriptions:
/**
* @NApiVersion 2.1
* @NScriptType MapReduceScript
*/
define(['N/llm', 'N/record', 'N/search'], (llm, record, search) => {
    const getInputData = () => {
        return search.create({
            type: 'inventoryitem',
            filters: [['custitem_ai_description', 'isempty', '']],
            columns: ['internalid', 'itemid', 'displayname']
        });
    };

    const map = (context) => {
        const item = JSON.parse(context.value);
        context.write({ key: item.id, value: item.values.displayname });
    };

    const reduce = (context) => {
        const itemName = context.values[0];
        const response = llm.generateText({
            prompt: 'Write a 2-sentence e-commerce product description for: ' + itemName,
            maxTokens: 120
        });
        record.submitFields({
            type: 'inventoryitem',
            id: context.key,
            values: { custitem_ai_description: response.text }
        });
    };

    return { getInputData, map, reduce };
});
Error Handling Best Practices
Like any call that depends on a remote model, N/llm calls can fail due to model unavailability, token limits, or configuration issues. Always wrap your calls in try/catch blocks:
try {
    const response = llm.generateText({
        prompt: userPrompt,
        maxTokens: 300
    });
    if (response.finishReason === 'max_tokens') {
        log.audit('Warning', 'Response was truncated due to token limit');
    }
    return response.text;
} catch (e) {
    log.error('LLM Error', e.message);
    return null;
}
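For transient failures, a small retry wrapper goes a step further. This is a sketch: the LLM call is injected as a function so the pattern works with any N/llm invocation, and the stub below stands in for llm.generateText().

```javascript
// Sketch: retry a flaky call a fixed number of times before rethrowing
// the last error. Server-side SuiteScript has no setTimeout, so retries
// here are immediate; log each failure if you need an audit trail.
const callWithRetry = (fn, attempts) => {
    let lastError;
    for (let i = 0; i < attempts; i++) {
        try {
            return fn();
        } catch (e) {
            lastError = e;
        }
    }
    throw lastError;
};

// Example with a stub that fails twice, then succeeds on the third try
let calls = 0;
const result = callWithRetry(() => {
    calls++;
    if (calls < 3) throw new Error('model unavailable');
    return 'ok';
}, 5);
```

Keep the attempt count low in synchronous contexts such as Suitelets, since each retry adds latency the user sees directly.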
Governance and Usage Limits
N/llm calls consume SuiteScript governance units in addition to any token-based AI usage tracked by NetSuite. Keep these considerations in mind:
- Each generateText() call has a governance cost; check the current SuiteScript governance table in Oracle’s documentation
- Map/Reduce scripts are well-suited for bulk AI operations because each map and reduce invocation receives its own governance allotment
- Use maxTokens to cap response size and avoid overshooting limits in loops
- Avoid calling N/llm inside beforeLoad User Event scripts, as this will slow down every page load
- Use asynchronous processing patterns (Scheduled Scripts, Map/Reduce) for non-blocking AI enrichment
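Those governance points can be combined into a guard that stops processing before units run out. In a real script the remaining-usage check would come from runtime.getCurrentScript().getRemainingUsage(); here it is injected as a function so the sketch is runnable on its own.

```javascript
// Sketch: process items until remaining governance drops below a threshold,
// collecting the rest so the script can reschedule itself for another run.
const processUntilLowGovernance = (items, processFn, getRemainingUsage, threshold) => {
    const unprocessed = [];
    for (const item of items) {
        if (getRemainingUsage() < threshold) {
            unprocessed.push(item); // hand these to a new script execution
        } else {
            processFn(item);
        }
    }
    return unprocessed;
};

// Example with a fake governance counter that drains 100 units per item
let units = 350;
const done = [];
const leftover = processUntilLowGovernance(
    ['a', 'b', 'c', 'd', 'e'],
    (item) => { done.push(item); units -= 100; },
    () => units,
    150
);
```

In a Scheduled Script, a non-empty leftover list is the cue to call task.create() and reschedule; in Map/Reduce this pattern is rarely needed because each invocation gets fresh governance.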
N/llm Module Quick Reference
| Method | Description |
|---|---|
| llm.generateText(options) | Generate text from a prompt using the configured LLM |
| llm.generateEmbedding(options) | Generate a vector embedding for a given text string |
| llm.Model | Enum of available AI models |
Summary
The N/llm module is one of the most powerful additions to the SuiteScript 2.1 API. It gives NetSuite developers native access to generative AI without leaving the platform: no webhooks, no OAuth flows, no external services to maintain. With generateText() and generateEmbedding(), you can automate record enrichment, classification, summarization, semantic search, and much more directly inside your existing SuiteScript workflows.
Start with a simple Scheduled Script test, then expand into production use cases like case summarization, item description generation, or intelligent GL coding. The governance model is predictable, the API is clean, and the possibilities for AI-driven automation inside NetSuite are significant.