🔹 Introduction
Map/Reduce scripts are NetSuite’s scalability workhorse. Once you’ve mastered the basics, you can apply advanced techniques to:
- Process millions of records safely.
- Handle errors gracefully without breaking jobs.
- Optimize for governance and performance.
- Share data between stages.
- Aggregate and export results.
🔹 1. Chunking & Governance-Friendly Design
Instead of processing all results at once, design jobs to work in smaller batches.
// Example: fetch a slice of results instead of the full dataset
// (here, `search` is a search.Search object, e.g. from search.create or search.load)
const results = search.run().getRange({ start: 0, end: 100 }); // process 100 at a time
💡 Tip: Always fetch records in chunks, not the entire dataset, to avoid governance spikes.
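Below is a minimal sketch of chunked retrieval with search.runPaged() in getInputData; the saved search ID and page size are illustrative assumptions, not values from this article. Returning the search object itself also works, since the framework feeds results to map one at a time.
define(['N/search'], (search) => {
    const getInputData = () => {
        // Hypothetical saved search ID; replace with your own
        const pagedData = search.load({ id: 'customsearch_open_orders' })
            .runPaged({ pageSize: 100 }); // fetch 100 results per page

        const results = [];
        pagedData.pageRanges.forEach((pageRange) => {
            const page = pagedData.fetch({ index: pageRange.index });
            page.data.forEach((result) => results.push(result));
        });
        return results;
    };

    return { getInputData };
});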
🔹 2. Passing Data from Map → Reduce
You can group map output by key so the reduce stage processes related values together.
const map = (context) => {
    try {
        const result = JSON.parse(context.value);
        const customerId = result.values.entity.value;
        // Key = customerId → all orders for that customer will be grouped
        context.write({
            key: customerId,
            value: result.id // SO internal ID
        });
    } catch (e) {
        log.error('Map Error', e.message);
    }
};

const reduce = (context) => {
    try {
        // context.key = Customer ID
        // context.values = array of Sales Orders for this customer
        log.debug('Reduce Stage', `Customer ${context.key} has ${context.values.length} orders`);
    } catch (e) {
        log.error('Reduce Error', e.message);
    }
};
💡 Use case: Summarizing orders, aggregating transactions, or consolidating line data.
🔹 3. Error Handling Without Stopping the Job
Wrap logic in try–catch inside map/reduce so one bad record doesn’t crash the script.
const map = (context) => {
    try {
        const result = JSON.parse(context.value);
        if (!result.values.entity) throw new Error('Missing Customer');
        context.write({ key: result.values.entity.value, value: result.id });
    } catch (e) {
        log.error('Map Error', `Record ${context.key || 'unknown'} failed: ${e.message}`);
    }
};
💡 Best practice: Use summarize.mapSummary.errors to review failed keys after the job.
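If errors are allowed to propagate out of map (or are rethrown after logging), they surface in summary.mapSummary.errors. A minimal sketch of reviewing them in summarize:
const summarize = (summary) => {
    summary.mapSummary.errors.iterator().each((key, error) => {
        // error is the serialized error thrown in map for this key
        log.error('Map Error Review', `Key ${key} failed: ${JSON.parse(error).message}`);
        return true; // keep iterating
    });
};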
🔹 4. Sharing Data Between Stages
You can store counters or messages in the summarize stage for reporting.
const summarize = (summary) => {
    try {
        let totalProcessed = 0;
        summary.mapSummary.keys.iterator().each((key) => {
            totalProcessed++;
            return true;
        });
        log.debug('Job Completed', `Processed ${totalProcessed} customers`);
    } catch (e) {
        log.error('Summary Error', e.message);
    }
};
💡 Use case: Build reports, send notifications, or trigger next scripts.
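For example, a completion notification could be sent from summarize with N/email; the author ID below is a placeholder assumption:
define(['N/email', 'N/runtime'], (email, runtime) => {
    const summarize = (summary) => {
        email.send({
            author: 1234,                              // placeholder employee internal ID
            recipients: [runtime.getCurrentUser().id], // notify the executing user
            subject: 'Map/Reduce job completed',
            body: `Usage consumed: ${summary.usage}, concurrency: ${summary.concurrency}, yields: ${summary.yields}`
        });
    };

    return { summarize };
});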
🔹 5. Parallelization & Performance
NetSuite may run map tasks in parallel. Keep map logic idempotent (safe to retry).
⚠️ Don’t rely on sequential order in map → design logic so that if the same key runs twice, it won’t break data.
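A minimal sketch of retry-safe logic, using a hypothetical checkbox field (custbody_export_done) so that re-running the same key is a no-op rather than a double update:
define(['N/record', 'N/search'], (record, search) => {
    const reduce = (context) => {
        context.values.forEach((soId) => {
            // Look up the hypothetical flag field before touching the record
            const fields = search.lookupFields({
                type: search.Type.SALES_ORDER,
                id: soId,
                columns: ['custbody_export_done']
            });
            if (fields.custbody_export_done) return; // already handled on a previous attempt

            record.submitFields({
                type: record.Type.SALES_ORDER,
                id: soId,
                values: { custbody_export_done: true }
            });
        });
    };

    return { reduce };
});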
🔹 6. Writing Complex Objects to Reduce
You can pass JSON objects instead of strings.
context.write({
    key: customerId,
    value: JSON.stringify({
        soId: result.id,
        amount: result.values.total
    })
});
Then parse in reduce:
const reduce = (context) => {
    const orders = context.values.map(v => JSON.parse(v));
    const totalAmount = orders.reduce((sum, o) => sum + parseFloat(o.amount), 0);
    log.debug('Customer Totals', `Customer ${context.key} total amount = ${totalAmount}`);
};
💡 Use case: Financial aggregation (e.g., total SO per customer).
🔹 7. Exporting Data (Files/Emails)
Use N/file in the summarize stage to export results. The example below assumes the reduce stage wrote each customer's order count via context.write, so the pairs can be read from summary.output.
define(['N/file'], (file) => {
    const summarize = (summary) => {
        try {
            let csv = 'Customer,Total Orders\n';
            // summary.output holds the key/value pairs written in reduce,
            // e.g. context.write({ key: context.key, value: context.values.length })
            summary.output.iterator().each((custId, orderCount) => {
                csv += `${custId},${orderCount}\n`;
                return true;
            });
            const csvFile = file.create({
                name: 'customer_totals.csv',
                fileType: file.Type.CSV,
                contents: csv
            });
            csvFile.folder = 123; // Replace with folder ID
            const fileId = csvFile.save();
            log.debug('File Saved', `CSV File ID: ${fileId}`);
        } catch (e) {
            log.error('File Export Error', e.message);
        }
    };

    return { summarize };
});
💡 Use case: Reports, exports for integrations, or downstream processing.
🔹 8. Governance Best Practices
- Use record.submitFields() in reduce (faster than load + save); see the sketch after this list.
- Always check runtime.getCurrentScript().getRemainingUsage().
- Split input with Saved Search filters if jobs are too large.
- Build retry-friendly logic in map and reduce.
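A minimal sketch combining the points above; the memo value and the 200-unit threshold are illustrative assumptions:
define(['N/record', 'N/runtime'], (record, runtime) => {
    const reduce = (context) => {
        for (const soId of context.values) {
            // Stop early rather than hit a USAGE_LIMIT_EXCEEDED error
            if (runtime.getCurrentScript().getRemainingUsage() < 200) {
                log.audit('Low Governance', `Stopping before SO ${soId}`);
                break;
            }
            // submitFields updates fields inline, without a full load + save
            record.submitFields({
                type: record.Type.SALES_ORDER,
                id: soId,
                values: { memo: `Processed for customer ${context.key}` }
            });
        }
    };

    return { reduce };
});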
✅ Key Takeaway
Advanced Map/Reduce scripts let you:
- Handle large-scale data without governance issues.
- Group and aggregate records efficiently.
- Recover gracefully from errors.
- Export meaningful results.