# Benchmarks
Lighthouse scores, methodology, and how to reproduce them.
## Targets

Every release of astro-ignite is gated by Lighthouse CI on the scaffolded playground. We measure the home page, the blog index, a representative blog post, and a project case study, all under the mobile configuration.
| Category | Hard floor | Soft target |
|---|---|---|
| Performance | 95 | 100 |
| Accessibility | 95 | 100 |
| Best Practices | 95 | 100 |
| SEO | 95 | 100 |
A PR that drops any median below the hard floor of 95 fails CI. A PR that drops a median below the soft target of 100 prints a warning but does not block the merge.
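A gate like this maps directly onto Lighthouse CI's assertion config. The following is a minimal sketch, not the project's actual config: the file name `lighthouserc.json` and the exact shape are illustrative, though the keys are standard `@lhci/cli` options.

```json
{
  "ci": {
    "collect": {
      "numberOfRuns": 3
    },
    "assert": {
      "assertions": {
        "categories:performance": ["error", { "minScore": 0.95 }],
        "categories:accessibility": ["error", { "minScore": 0.95 }],
        "categories:best-practices": ["error", { "minScore": 0.95 }],
        "categories:seo": ["error", { "minScore": 0.95 }]
      }
    }
  }
}
```

Note that Lighthouse CI expresses scores on a 0–1 scale, so the hard floor of 95 becomes `minScore: 0.95`.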
## Methodology

- Build the production site with the standard `astro build`.
- Serve it statically via Lighthouse CI’s bundled server (no caching tricks beyond what a normal CDN would do).
- Run Lighthouse 3 times per URL and take the median to absorb single-run variance.
- Use the mobile config with simulated 4G throttling and 4× CPU slowdown (the Lighthouse defaults).
- Run in GitHub Actions on the `ubuntu-latest` image.
The full Lighthouse JSON for each run is uploaded as an artifact on every CI run, so anyone can audit the numbers.
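Because the raw reports are published, the median step is easy to audit yourself. A minimal sketch in Node, assuming you have the per-run Lighthouse JSON reports for one URL loaded as objects (this is an illustrative helper, not code from the repo; Lighthouse CI performs this aggregation itself when `numberOfRuns` is greater than 1):

```javascript
// Given several Lighthouse JSON reports for the same URL, return the
// median 0-100 score per category.
function medianScores(reports) {
  const median = (xs) => {
    const sorted = [...xs].sort((a, b) => a - b);
    return sorted[Math.floor(sorted.length / 2)]; // exact middle for an odd run count
  };
  const categories = Object.keys(reports[0].categories);
  const out = {};
  for (const cat of categories) {
    // Lighthouse stores category scores on a 0-1 scale; convert to 0-100.
    out[cat] = median(reports.map((r) => Math.round(r.categories[cat].score * 100)));
  }
  return out;
}

// Example with minimal stand-in reports (real reports carry far more data):
const runs = [0.97, 0.95, 1.0].map((score) => ({
  categories: { performance: { score } },
}));
console.log(medianScores(runs)); // { performance: 97 }
```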
## Reproducing locally

```sh
pnpm scaffold:test
```

This wipes `apps/playground/`, scaffolds it from the CLI with `--yes`, installs dependencies, runs `astro build`, and runs Lighthouse against the static output. You’ll see the same numbers CI sees.
## What’s tracked
- Lighthouse mobile scores (4 categories × 4 routes)
- Total bundle size (JS + CSS + fonts) for the homepage cold load
- Individual LCP, FCP, TBT, and CLS values
- Build time
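The homepage bundle-size figure can also be recovered from the same Lighthouse JSON artifact. A sketch, assuming the standard Lighthouse report shape (the `network-requests` audit and its `resourceType`/`transferSize` fields are part of the report format, but the repo may compute this number differently):

```javascript
// Sum JS + CSS + font transfer bytes for a page's cold load from a
// Lighthouse JSON report's network-requests audit.
function bundleBytes(report) {
  const items = report.audits["network-requests"].details.items;
  const tracked = new Set(["Script", "Stylesheet", "Font"]);
  return items
    .filter((item) => tracked.has(item.resourceType))
    .reduce((total, item) => total + (item.transferSize ?? 0), 0);
}

// Example with a minimal stand-in report:
const report = {
  audits: {
    "network-requests": {
      details: {
        items: [
          { resourceType: "Script", transferSize: 12000 },
          { resourceType: "Stylesheet", transferSize: 3000 },
          { resourceType: "Font", transferSize: 20000 },
          { resourceType: "Image", transferSize: 50000 }, // not counted
        ],
      },
    },
  },
};
console.log(bundleBytes(report)); // 35000
```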
Trends will be published on the GitHub Releases page with each release once we have a baseline.