
Benchmarks

Performance comparison between Rikta, NestJS, and Fastify.

🚀 Quick Start

npm install
npm run bench

📊 Available Benchmarks

Startup Time (npm run bench:startup)

Measures framework initialization time from module import to server ready.

npm run bench:startup

Request Overhead (npm run bench:requests)

Measures single-request latency with no concurrent load.

npm run bench:requests

Load Testing (npm run bench:autocannon)

High-concurrency throughput testing using Autocannon.

npm run bench:autocannon

🎯 Results Summary

| Metric     | Rikta vs NestJS   | Rikta vs Fastify    |
|------------|-------------------|---------------------|
| Startup    | 🟢 43% faster     | 🟢 13% faster       |
| Throughput | 🟢 9% higher      | 🟡 ~equivalent      |
| Latency    | 🟢 ~40% lower     | 🟡 ~2-5% overhead   |

Key Takeaway: Rikta's request latency is ~40% lower than NestJS's, while it adds only minimal overhead (~2-5%) over vanilla Fastify. That overhead is expected, since Rikta uses Fastify as its HTTP engine.

See RESULTS.md for detailed results.

🔧 Test Configuration

Rikta (Optimized)

const app = await Rikta.create({
  port: 3001,
  silent: true,   // No console output
  logger: false   // No Fastify logging
});

NestJS

const app = await NestFactory.create(
  AppModule,
  new FastifyAdapter({ logger: false })
);

Fastify (Baseline)

const app = Fastify({ logger: false });

📁 Structure

benchmarks/
├── fixtures/
│   ├── fastify-fixture.ts    # Pure Fastify server
│   ├── nestjs-fixture.ts     # NestJS server
│   └── rikta-fixture.ts      # Rikta server
├── startup.bench.ts          # Startup time benchmark
├── request-overhead.bench.ts # Request latency benchmark
├── autocannon.bench.ts       # Load testing
├── RESULTS.md                # Detailed results
└── QUICK-SUMMARY.md          # Summary table

🧪 Methodology

Startup Benchmark

  1. Fork child process per framework
  2. Measure time from process start to "ready" message
  3. Run 5 iterations, take median
  4. Fresh process each iteration
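The steps above can be sketched roughly as follows. This is a minimal illustration, not the benchmark suite's actual code: the fixture path and the exact "ready" message format are assumptions.

```typescript
import { fork } from "node:child_process";

// Median of a run of samples (upper median for even-length runs).
function median(xs: number[]): number {
  const sorted = [...xs].sort((a, b) => a - b);
  return sorted[Math.floor(sorted.length / 2)];
}

// Fork the fixture in a fresh process and time start -> "ready" message.
function timeStartup(fixturePath: string): Promise<number> {
  return new Promise((resolve, reject) => {
    const t0 = process.hrtime.bigint();
    const child = fork(fixturePath, [], { silent: true });
    child.on("message", (msg) => {
      if (msg === "ready") {
        child.kill();
        resolve(Number(process.hrtime.bigint() - t0) / 1e6); // ms
      }
    });
    child.on("error", reject);
  });
}

// Five iterations, one fresh process each; report the median.
async function benchStartup(fixturePath: string, iterations = 5): Promise<number> {
  const samples: number[] = [];
  for (let i = 0; i < iterations; i++) {
    samples.push(await timeStartup(fixturePath));
  }
  return median(samples);
}
```

Taking the median rather than the mean keeps one slow outlier iteration (e.g. a cold filesystem cache) from skewing the result.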

Request Overhead

  1. Start all frameworks (different ports)
  2. Warm up with 10 requests each
  3. Measure 100 sequential requests
  4. Calculate median latency
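A sketch of this loop, assuming each fixture exposes a fixed route (the `/health` path here is a placeholder, not necessarily what the suite uses):

```typescript
// Median of recorded latencies (upper median for even-length runs).
function medianLatency(samples: number[]): number {
  const sorted = [...samples].sort((a, b) => a - b);
  return sorted[Math.floor(sorted.length / 2)];
}

// Warm up, then time sequential requests one at a time (no concurrency),
// so the number isolates per-request framework overhead.
async function benchLatency(baseUrl: string, warmup = 10, runs = 100): Promise<number> {
  for (let i = 0; i < warmup; i++) {
    await fetch(`${baseUrl}/health`);
  }
  const samples: number[] = [];
  for (let i = 0; i < runs; i++) {
    const t0 = performance.now();
    await fetch(`${baseUrl}/health`);
    samples.push(performance.now() - t0);
  }
  return medianLatency(samples);
}
```

The warm-up requests matter: they let the JIT, route lookup caches, and keep-alive connections settle before measurement begins.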

Load Testing

  1. Concurrent connections: 10-100
  2. Duration: 10 seconds
  3. Measure requests/second and latency percentiles

💡 Tips

For Best Results

  • Run on Linux for consistent timing
  • Close other applications
  • Run multiple times, compare medians
  • Use silent: true and logger: false

Interpreting Results

  • Startup: Lower is better. Important for serverless/cold starts.
  • Request Latency: Lower is better. Measures framework overhead.
  • Throughput: Higher is better. Measures sustained load capacity.

📚 Related Documentation