🚀 NetSuite Script Performance Tips & Best Practices
🧠 Why Performance Optimization Matters
NetSuite scripts are the backbone of automation, but even a small inefficiency can cause governance timeouts, slow UI response, or record locks.
A well-optimized script can process thousands of records efficiently, while a poorly written one can fail after just a few.
This guide will help you write cleaner, faster, and more reliable SuiteScripts using real-world techniques proven across implementations.
⚙️ 1. Choose the Right Script Type
One of the biggest mistakes developers make is using the wrong script type for bulk operations.
| Task Type | Recommended Script | Why |
|---|---|---|
| Processing 1000+ records | Map/Reduce Script | Handles large datasets with parallel execution and automatic rescheduling |
| Periodic updates or cleanup | Scheduled Script | Simple and predictable for batch operations |
| Record-level validation | User Event Script | Best for beforeSubmit or afterSubmit validations |
| UI interaction | Client Script / Suitelet | Runs in browser context for user-driven actions |
Tip: Don't use User Event scripts for mass record updates; use a Scheduled or Map/Reduce script instead.
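For reference, here is a minimal Map/Reduce skeleton showing the entry points NetSuite calls for you. This is a sketch only: the saved search ID and the memo value are placeholders.
/**
 * @NApiVersion 2.1
 * @NScriptType MapReduceScript
 */
define(['N/search', 'N/record'], (search, record) => {

    // getInputData: return a search; NetSuite feeds each result to map() in parallel
    const getInputData = () => search.load({ id: 'customsearch_orders_to_update' });

    // map: runs once per search result, each invocation with its own governance allowance
    const map = (context) => {
        const result = JSON.parse(context.value);
        record.submitFields({
            type: record.Type.SALES_ORDER,
            id: result.id,
            values: { memo: 'Processed by Map/Reduce' }
        });
    };

    // summarize: report usage and surface any errors collected during the map stage
    const summarize = (summary) => {
        log.audit('Usage consumed', summary.usage);
        summary.mapSummary.errors.iterator().each((key, error) => {
            log.error(`Map error for key ${key}`, error);
            return true;
        });
    };

    return { getInputData, map, summarize };
});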
⚙️ 2. Optimize Saved Search and Data Access
Common Pitfall: Running a saved search inside a loop.
Every execution counts against governance and adds latency.
✅ Best Practice:
- Run the search once.
- Store the results in an array.
- Use runPaged() only when you truly expect large datasets.
Example:
// Requires the N/search module (e.g. define(['N/search'], (search) => { ... }))
const mySearch = search.load({ id: 'customsearch_transaction_data' });
const pagedData = mySearch.runPaged({ pageSize: 1000 }); // 1000 is the maximum page size
pagedData.pageRanges.forEach(range => {
    const page = pagedData.fetch({ index: range.index });
    page.data.forEach(result => {
        // process result
    });
});
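To avoid searching inside a loop entirely, you can also run the search once and keep the results in a keyed object for later lookups. The sketch below assumes a customer balance lookup; customerId and the column names are placeholders.
// Run the search a single time and index the results by internal ID
const customerBalances = {};
search.create({
    type: search.Type.CUSTOMER,
    columns: ['internalid', 'balance']
}).run().each(result => {
    customerBalances[result.id] = result.getValue('balance');
    return true; // keep iterating (run().each stops after 4000 results)
});

// Later, inside your processing loop, read from memory instead of re-searching
const balance = customerBalances[customerId];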
⚙️ 3. Always Track Remaining Governance
Governance limits protect system resources.
If you don't monitor them, your script can stop abruptly.
Use:
// Requires the N/runtime module
const usage = runtime.getCurrentScript().getRemainingUsage();
log.audit('Remaining governance', usage);
💡 Pro Tip:
When remaining usage drops below ~200 units, reschedule or yield (in Map/Reduce) to continue processing safely.
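In a Scheduled Script, one common pattern is to submit a new task for the same script with N/task before running out of units. This is a sketch under the assumption that your progress checkpoint is stored elsewhere (for example in a script parameter or custom record); the 200-unit threshold is the rule of thumb from above.
// Requires the N/runtime and N/task modules
const script = runtime.getCurrentScript();
if (script.getRemainingUsage() < 200) {
    // Re-queue this same scheduled script deployment and exit cleanly
    const rescheduleTask = task.create({
        taskType: task.TaskType.SCHEDULED_SCRIPT,
        scriptId: script.id,
        deploymentId: script.deploymentId
    });
    const taskId = rescheduleTask.submit();
    log.audit('Rescheduled', `Task ID: ${taskId}`);
    return; // stop here; the queued task resumes from your saved checkpoint
}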
⚙️ 4. Use Caching for Repeated Lookups
If your script repeatedly looks up the same records (such as subsidiary, customer, or item data), use the N/cache module.
Example:
// Requires the N/cache and N/record modules (cacheModule = the N/cache module)
const cache = cacheModule.getCache({ name: 'customerDataCache' });
let customerName = cache.get({ key: customerId });
if (!customerName) {
    // Cache miss: load the record once, then store the value for later calls
    const recordObj = record.load({ type: 'customer', id: customerId });
    customerName = recordObj.getValue('companyname');
    cache.put({ key: customerId, value: customerName });
}
This avoids multiple API calls, saving governance.
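The N/cache module can also handle the load-on-miss step for you through the loader option on cache.get, which only runs when the key is not already cached. A sketch, assuming the cache name, scope, and TTL shown here; lookupFields is used instead of record.load because it is cheaper for a single field.
// Requires the N/cache and N/search modules
const customerCache = cacheModule.getCache({
    name: 'customerDataCache',
    scope: cacheModule.Scope.PROTECTED
});

const customerName = customerCache.get({
    key: String(customerId),
    loader: (context) => {
        // Only runs on a cache miss
        const fields = search.lookupFields({
            type: search.Type.CUSTOMER,
            id: context.key,
            columns: ['companyname']
        });
        return fields.companyname;
    },
    ttl: 3600 // seconds
});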
⚙️ 5. Log Smartly (Not Excessively)
Every log.debug() call consumes resources and counts toward log limits, so avoid logging every iteration inside a loop.
✅ Best Practice:
// Read the debug flag once from a script parameter, then gate all debug logging behind it
const debugMode = runtime.getCurrentScript().getParameter({ name: 'custscript_debug_mode' });
if (debugMode) {
    log.debug('Processing', `Batch: ${batchId}`);
}
🧩 Control it with a script parameter or runtime check (runtime.getCurrentScript().getParameter('custscript_debug_mode')).
⚙️ 6. Avoid Record Loads in Loops
Each record.load() call is expensive. Instead, use:
- search.lookupFields() for single-field retrieval.
- record.submitFields() for lightweight updates.
Example:
// Requires the N/record module
record.submitFields({
    type: record.Type.SALES_ORDER,
    id: soId,
    values: { memo: 'Updated via script' }
});
This avoids loading the full record and improves speed dramatically.
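Similarly, when you only need to read a field or two, search.lookupFields returns just those values without loading the record. A sketch; the column names here are examples.
// Requires the N/search module
const fields = search.lookupFields({
    type: search.Type.SALES_ORDER,
    id: soId,
    columns: ['entity', 'total', 'status']
});
// Select fields come back as [{ value, text }]; plain fields come back as strings
log.debug('Order summary', `Customer: ${fields.entity[0].text}, Total: ${fields.total}`);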
⚙️ 7. Batch Operations
When working with large datasets (1000+ records):
- Split your dataset into chunks.
- Process each chunk in a Map/Reduce or Scheduled script cycle.
- Store progress using a custom record or script parameter.
This ensures scalability without governance issues.
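A simple way to split the data before handing it to each cycle is a small chunking helper like the sketch below. The chunk size of 100 and the allRecordIds array are placeholders; pick a size based on how much work each record needs.
// Split an array of internal IDs into fixed-size chunks
function chunk(ids, size) {
    const chunks = [];
    for (let i = 0; i < ids.length; i += size) {
        chunks.push(ids.slice(i, i + size));
    }
    return chunks;
}

// e.g. process 100 records per cycle and record the index of the last completed chunk
const batches = chunk(allRecordIds, 100);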
⚙️ 8. Use Try/Catch and Error Logging
Always wrap major operations to handle partial failures gracefully.
Example:
try {
// perform operation
} catch (e) {
log.error('Error processing record', e.message);
}
You can also send custom notifications for repeated errors using email alerts or integration logs.
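One way to do that with the N/email module is to collect failures during the run and send a single summary at the end. This is a sketch: processRecord and recordIds stand in for your own logic, and the author ID and recipient address are placeholders you would replace.
// Requires the N/email module
const errors = [];

recordIds.forEach(id => {
    try {
        processRecord(id); // your per-record logic
    } catch (e) {
        errors.push(`Record ${id}: ${e.message}`);
        log.error('Error processing record', e.message);
    }
});

if (errors.length > 0) {
    email.send({
        author: -5,                     // placeholder employee internal ID
        recipients: 'admin@example.com', // placeholder recipient
        subject: `Script errors: ${errors.length} record(s) failed`,
        body: errors.join('\n')
    });
}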
⚙️ 9. Use Async Processing for Long Tasks
For heavy processes (PDF generation, API calls, etc.), use:
- Map/Reduce for async jobs (see the sketch after this list)
- Scheduled Script with Queued Status (NetSuite queues it automatically if another instance is running)
- Promise-like chaining using Promise.resolve() in SuiteScript 2.1 for better async control (client-side only)
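For example, a Suitelet or User Event script can hand heavy work off to a Map/Reduce deployment with N/task instead of doing it inline. A sketch; the script and deployment IDs are placeholders.
// Requires the N/task module
const mrTask = task.create({
    taskType: task.TaskType.MAP_REDUCE,
    scriptId: 'customscript_heavy_processing_mr',
    deploymentId: 'customdeploy_heavy_processing_mr'
});
const taskId = mrTask.submit();

// Optionally check on it later
const status = task.checkStatus({ taskId: taskId });
log.audit('Map/Reduce queued', `Task ${taskId} status: ${status.status}`);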
⚙️ 10. Test in Sandbox with Different Data Volumes
Test your script:
- With 10, 100, and 1000 records to see performance scaling.
- With multiple users running similar processes to detect concurrency issues.
- With debug logging turned off to simulate production speed.
💡 Summary: Quick Checklist
✅ Choose the correct script type
✅ Use search.runPaged() and batch processing
✅ Monitor remaining governance
✅ Use caching and lookupFields
✅ Reduce unnecessary logs
✅ Avoid record loads in loops
✅ Handle errors gracefully
✅ Test with realistic data