Express FastAPI Performance Test

FastAPI vs Express: Performance Benchmark with Artillery

Artillery load tests across FastAPI (sync), FastAPI (async), and Express, with capped CPU/RAM

This repository compares three API implementations (FastAPI synchronous, FastAPI asynchronous, and Express) against a shared PostgreSQL schema. Docker Compose profiles run one backend at a time; each service is constrained (as documented) to half a CPU core and 512 MB RAM so runs are more comparable. Artillery YAML suites cover read-heavy, write-heavy, spike, stress, soak, and breakpoint read/write patterns, with scripts to produce JSON and HTML reports.
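The per-service caps can be expressed directly in Compose. A minimal sketch of one profile, assuming the limits described above; the service name, build path, and port are illustrative, not taken from the repository:

```yaml
services:
  fastapi-async:
    build: ./fastapi-async          # illustrative path
    profiles: ["fastapi-async"]     # started only via --profile fastapi-async
    ports:
      - "8000:8000"
    deploy:
      resources:
        limits:
          cpus: "0.5"               # half a CPU core, per the description above
          memory: 512M              # 512 MB RAM cap
```

With one such block per backend and a shared PostgreSQL service outside any profile, `docker compose --profile fastapi-async up` brings up exactly one backend at a time under the same resource ceiling.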

When it is useful

You want repeatable numbers for talks, internal RFCs, or classroom exercises, not a single “winner” claim for every workload. The README also links a Looker Studio dashboard and ships example Artillery outputs for reference.

What you can do

  • Start a profile (fastapi-sync, fastapi-async, or express) and run run_artillery_test.sh with a scenario name from the docs.
  • Sweep all profiles and scenarios with run_all_artillery_tests.sh when you need a full matrix.
  • Inspect generated reports locally or compare against the bundled documentation snapshots; the README also points to an example Looker Studio dashboard at https://lookerstudio.google.com/reporting/270e16b6-8831-41d5-aed0-4d2352b39218/page/pqwOF.
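For orientation, a read-heavy Artillery suite looks roughly like the sketch below. The target port, phase numbers, and endpoint paths are assumptions for illustration, not the repository's actual scenarios:

```yaml
config:
  target: "http://localhost:8000"   # assumed port of whichever backend profile is running
  phases:
    - duration: 60                  # seconds
      arrivalRate: 20               # new virtual users per second
scenarios:
  - name: read-heavy
    flow:
      - get:
          url: "/items"             # illustrative list endpoint
      - get:
          url: "/items/1"           # illustrative detail endpoint
```

The helper scripts presumably wrap `artillery run --output <report>.json <scenario>.yml` to capture the JSON output that the HTML reports and the dashboard are built from.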

Limits

  • Results bind tightly to hardware, Docker limits, DB tuning, and Artillery config; your production stack will differ.
  • Fairness still depends on equivalent app logic, connection pooling, and warm-up; interpret deltas carefully.
  • The project produces benchmarks, not a verdict; choosing a framework still depends on your own product constraints.
