A powerful and elegant benchmarking tool for JavaScript and TypeScript.
- Zero-config benchmarking - Just point it at your functions
- Statistical accuracy with multiple cycles and margin of error calculation
- TypeScript support - Transpiles `.ts` files on-the-fly
- Multiple export patterns - Detects default, named, and benchmark exports
- Memory management - Advanced garbage collection strategies
- Export results - JSON output for further analysis
- Library + CLI - Use programmatically or via the command line
```sh
npm install --global @gud/bench
# or, for local projects
npm install --save-dev @gud/bench
```
The easiest way to benchmark any JavaScript file is with the CLI:
```sh
# Benchmark a single file
bench --files my-functions.js

# Benchmark multiple files
bench --files file1.js file2.js file3.js

# Customize the benchmark parameters
bench --files my-functions.js --runs 50000 --cycles 3

# Export results to JSON
bench --files my-functions.js --export true
```
The CLI will automatically detect and benchmark:
- Default exports (`export default function`)
- Named function exports (`export function myFunction()`)
- Specific benchmark functions (`export { benchmark }` or `export { test }`) - sketched below
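For the default-export and reserved-name shapes, a function file might look like this (a minimal sketch; the file name and function bodies are hypothetical):

```js
// string-utils.js (hypothetical)

// Default export - detected and benchmarked automatically
export default function reverseString(input = 'hello world') {
  return [...input].reverse().join('');
}

// A named export called `benchmark` (or `test`) is also picked up
export function benchmark() {
  return reverseString('the quick brown fox');
}
```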
Example function file:
```js
// sort-algorithms.js
const data = Array.from({ length: 100 }, () => Math.floor(Math.random() * 1000));

export function quickSort(arr = data) {
  if (arr.length <= 1) return arr;
  const pivot = arr[arr.length - 1];
  const left = [], right = [];
  for (let i = 0; i < arr.length - 1; i++) {
    if (arr[i] < pivot) left.push(arr[i]);
    else right.push(arr[i]);
  }
  return [...quickSort(left), pivot, ...quickSort(right)];
}

export function bubbleSort(arr = data) {
  const result = [...arr];
  for (let i = 0; i < result.length; i++) {
    for (let j = 0; j < result.length - 1; j++) {
      if (result[j] > result[j + 1]) {
        [result[j], result[j + 1]] = [result[j + 1], result[j]];
      }
    }
  }
  return result;
}
```
Output:
```
Loading 1 files for benchmarking...
Benchmark: Preheating 2 tests 1000 times each...
Benchmark
Running 1 cycle of 2 tests 100000 times each...
┌────────────────────────────────┬───────────┬─────────────────┬───────────────┬───────────┬─────────┐
│ (index)                        │ Runs      │ Total Time (ms) │ AVG Time (ms) │ Ops/Sec   │ ± (%)   │
├────────────────────────────────┼───────────┼─────────────────┼───────────────┼───────────┼─────────┤
│ 1 - sort-algorithms#quickSort  │ '100,000' │ '441.9717'      │ '0.004420'    │ '226,259' │ '0.68%' │
│ 2 - sort-algorithms#bubbleSort │ '100,000' │ '1,126.0465'    │ '0.011260'    │ '88,806'  │ '0.06%' │
└────────────────────────────────┴───────────┴─────────────────┴───────────────┴───────────┴─────────┘
Total time: 1,568.018178 ms
```
Create sophisticated benchmarks with the programmatic API:
```js
import { benchmark } from '@gud/bench';

// Create a new benchmark suite
const bench = benchmark('String Concatenation Benchmark');

// Add test functions
bench
  .test('Template literals', () => {
    const name = 'World';
    return `Hello ${name}!`;
  })
  .test('String concatenation', () => {
    const name = 'World';
    return 'Hello ' + name + '!';
  })
  .test('Array join', () => {
    const name = 'World';
    return ['Hello ', name, '!'].join('');
  });

// Run the benchmark
await bench.run(100000, {
  cycles: 5,
  verbosity: 2,
  gcStrategy: 'per-cycle',
});

// Export results
bench.exportToJson('./results/string-concat.json');
```
Memory Management:
```js
// Fine-tune garbage collection for accurate results
await bench.run(100000, {
  gcStrategy: 'periodic', // Force GC periodically
  gcInterval: 1000,       // Every 1000 iterations
  cycles: 3,              // Multiple cycles for accuracy
});
```
Statistical Analysis:
```js
// Get detailed statistics
await bench.run(50000, {
  cycles: 10,   // More cycles = better accuracy
  verbosity: 2, // Show detailed timing info
});
```
TypeScript Support:
```sh
# Works seamlessly with TypeScript files
bench --files "src/**/*.ts" --runs 10000
```
Creates a new benchmark suite.

Parameters:
- `name` (optional) - Name for the benchmark suite

Returns: `Benchmark` instance
- `test(name: string, fn: Function)` - Add a test function
- `run(iterations: number, options?: RunOptions)` - Execute benchmark
- `preheat(iterations: number, options?)` - Warm up before benchmarking
- `exportToJson(filePath: string)` - Export results to JSON
- `printResults()` - Display formatted results table (all five appear in the sketch below)
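Putting these together, a run with an explicit warm-up might look like this (a minimal sketch; the test bodies and iteration counts are arbitrary, and it assumes `preheat` returns a promise like `run` does):

```js
import { benchmark } from '@gud/bench';

const suite = benchmark('Clone strategies');

suite
  .test('JSON round-trip', () => JSON.parse(JSON.stringify({ a: 1, b: [2, 3] })))
  .test('structuredClone', () => structuredClone({ a: 1, b: [2, 3] }));

await suite.preheat(1000);             // warm up the JIT before timing
await suite.run(50000, { cycles: 3 }); // measure
suite.printResults();                  // formatted results table
suite.exportToJson('./results/clone.json');
```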
```ts
interface RunOptions {
  cycles?: number;        // Test cycles (default: 1)
  coolDown?: number;      // MS between runs
  verbosity?: 0 | 1 | 2;  // Output level (default: 1)
  gcStrategy?: 'never' | 'per-cycle' | 'per-test' | 'periodic';
  gcInterval?: number;    // For periodic GC (default: 1000)
}
```
- `'never'` - No forced GC (fastest, but memory pressure may affect results)
- `'per-cycle'` - GC once per cycle (good balance)
- `'per-test'` - GC after each test completes all iterations (sketched below)
- `'periodic'` - GC every N iterations (default, configurable via `gcInterval`)
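For example, to keep collection pauses out of the timed iterations entirely (a sketch reusing the `bench` instance from the examples above):

```js
// 'per-test': collect only after a test has finished all of its
// iterations, so GC cost never lands inside another test's timings
await bench.run(100000, {
  cycles: 3,
  gcStrategy: 'per-test',
});
```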
Note: The CLI automatically restarts with the `--expose-gc` flag when needed for garbage collection. To disable this behavior, set the environment variable `BENCH_NO_EXPOSE_GC=true`.
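For example, to opt out for a single invocation (assuming a POSIX shell):

```sh
BENCH_NO_EXPOSE_GC=true bench --files my-functions.js
```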