
SQLite as your server database: a production guide

Krystian Wiewiór · 5 min read


Tags: backend, architecture, startup, cloud, mobile


TL;DR

SQLite with WAL mode, proper PRAGMA tuning, and Litestream replication works in production for read-heavy mobile backends serving under 1M users. It cuts out network round-trips, reduces your infrastructure to a single binary, and outperforms PostgreSQL on reads by 2-5x. The tradeoffs: single-writer discipline, bounded dataset size (~100GB practical limit), and you’ll eventually need to migrate to Postgres.



Why I run SQLite on the server

In my work building production systems for early-stage startups, the single largest source of unnecessary complexity I keep hitting is the database layer. A managed Postgres instance costs $50-200/month before you serve a single user. You need connection pooling, network configuration, backup cron jobs, and monitoring. For a solo founder validating a product, that’s dead weight.

SQLite gets rid of all of it. Your database is a single file. Reads are function calls, not network round-trips. Backups are file copies.

Benchmarks: SQLite vs PostgreSQL for read-heavy workloads

I ran these on a $20/month Hetzner VPS (4 vCPU, 8GB RAM) using a typical mobile backend schema (users, posts, feeds) with 500K rows:

| Operation                      | SQLite (WAL + tuned) | PostgreSQL 16 | Delta         |
|--------------------------------|----------------------|---------------|---------------|
| Single row by PK               | 0.008 ms             | 0.3 ms        | 37x faster    |
| Feed query (50 rows, indexed)  | 0.12 ms              | 0.45 ms       | 3.7x faster   |
| Aggregate count w/ filter      | 0.9 ms               | 1.8 ms        | 2x faster     |
| Single INSERT                  | 0.05 ms              | 0.4 ms        | 8x faster     |
| Bulk INSERT (1K rows)          | 12 ms                | 8 ms          | 1.5x slower   |
| Concurrent writes (10 threads) | Serialized           | Parallel      | Postgres wins |

SQLite wins on reads and simple writes. PostgreSQL wins when you need write concurrency. Most mobile backends are 90%+ reads, so pick the tool that matches your workload.


PRAGMA configuration that actually matters

Most teams using SQLite on the server leave the defaults, which are tuned for embedded devices. These PRAGMAs are the difference between “this is slow” and “wait, that’s SQLite?”:

PRAGMA journal_mode = WAL;          -- Write-Ahead Logging: concurrent reads during writes
PRAGMA synchronous = NORMAL;        -- Durability with ~2x write speedup vs FULL
PRAGMA mmap_size = 268435456;       -- 256MB memory-mapped I/O
PRAGMA cache_size = -64000;         -- 64MB page cache (negative = KB)
PRAGMA busy_timeout = 5000;         -- 5s wait on lock contention
PRAGMA wal_autocheckpoint = 1000;   -- Checkpoint every 1000 pages
PRAGMA foreign_keys = ON;           -- Enforce referential integrity

Set these on every connection open. journal_mode=WAL is the big one. Without it, readers block writers and your API locks up under load.
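A small helper makes that automatic. This is a sketch assuming the better-sqlite3 driver (its `db.pragma()` method runs a PRAGMA statement); the `tuneConnection` name is mine, and you should adapt the loop to whatever driver you use:

```javascript
// The seven production PRAGMAs from above, applied in order on open.
const PRODUCTION_PRAGMAS = [
  'journal_mode = WAL',
  'synchronous = NORMAL',
  'mmap_size = 268435456',
  'cache_size = -64000',
  'busy_timeout = 5000',
  'wal_autocheckpoint = 1000',
  'foreign_keys = ON',
];

// Apply every PRAGMA to a freshly opened connection and hand it back.
function tuneConnection(db) {
  for (const pragma of PRODUCTION_PRAGMAS) db.pragma(pragma);
  return db;
}

// Usage: const db = tuneConnection(new Database('app.db'));
```

Call it in one place, at connection open, so no code path can forget a PRAGMA.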

Single-writer discipline

One writer at a time. That’s the constraint. WAL mode allows concurrent reads alongside a single writer, but two simultaneous writes will serialize via busy_timeout.

What this looks like in practice:

  • Use a single write connection with a mutex or serialized queue
  • Use a pool of read-only connections (I run 4-8)
  • Route all mutations through the write path explicitly
const Database = require('better-sqlite3');

const writeDb = new Database('app.db');
writeDb.pragma('journal_mode = WAL'); // WAL is a property of the file; set it via the write connection

const readPool = Array.from({ length: 4 }, () => {
  const db = new Database('app.db', { readonly: true });
  db.pragma('busy_timeout = 5000'); // read-only connections can't change the journal mode
  return db;
});

This handles 5,000-10,000 requests/second on modest hardware. Not theoretically. I’ve measured it.
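The "mutex or serialized queue" part can be as simple as a promise chain in Node. A minimal sketch (the `WriteQueue` name and the `writeDb` usage line are illustrative, not from any library):

```javascript
// Route every mutation through one serialized queue so only one write
// transaction runs at a time, no matter how many request handlers call in.
class WriteQueue {
  constructor() {
    this.tail = Promise.resolve();
  }

  // Enqueue a write; it runs after all previously enqueued writes finish.
  run(task) {
    const result = this.tail.then(() => task());
    // Keep the chain alive even if a task throws.
    this.tail = result.catch(() => {});
    return result;
  }
}

const writes = new WriteQueue();
// Usage: await writes.run(() => writeDb.prepare('INSERT ...').run(payload));
```

Every handler awaits `writes.run(...)`; nothing else may touch the write connection.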


Litestream: solving the backup problem

SQLite’s historical weakness was disaster recovery. Litestream fixes it by continuously replicating WAL changes to S3-compatible storage with sub-second lag.

# litestream.yml
dbs:
  - path: /data/app.db
    replicas:
      - type: s3
        bucket: my-app-backups
        path: replica
        endpoint: https://s3.us-east-1.amazonaws.com
        retention: 72h

Litestream adds <1ms of overhead per transaction and costs roughly $0.50/month in S3 storage for a 1GB database with moderate write volume. Compare that to $50+/month for managed Postgres with point-in-time recovery.

Restoring is a single command:

litestream restore -o /data/app.db s3://my-app-backups/replica
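In production I run the app as a child process of Litestream via its `-exec` flag, so replication starts and stops with the server. The config path and `node server.js` command here are placeholders for your own setup:

```shell
litestream replicate -config /etc/litestream.yml -exec "node server.js"
```

If Litestream dies, the app dies with it, so you can't silently run unreplicated.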

When to move to Postgres

SQLite won’t work forever. Here are the signals I watch for:

| Signal                 | Threshold             | Why                                          |
|------------------------|-----------------------|----------------------------------------------|
| Write transactions/sec | >500 sustained        | Lock contention becomes measurable           |
| Database size          | >50-100GB             | Backup/restore times get painful             |
| Team size              | >3 backend engineers  | Single-writer becomes a coordination problem |
| Multi-region needs     | Any                   | SQLite is single-node by definition          |

Design your data layer behind a repository interface from day one. Your SQL is already standard. I’ve done this migration twice, and both times it took under a week. It’s mechanical work, not a rearchitecture.
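As a sketch of that repository seam (the `makeUserRepo` name and `users` schema are illustrative; the prepared-statement calls assume a better-sqlite3-style driver):

```javascript
// A thin repository over the storage driver. Callers depend on this
// interface, never on SQLite directly, so swapping in a Postgres-backed
// implementation later touches one module, not every call site.
function makeUserRepo(db) {
  return {
    findById: (id) =>
      db.prepare('SELECT id, name FROM users WHERE id = ?').get(id),
    create: (name) =>
      db.prepare('INSERT INTO users (name) VALUES (?)').run(name).lastInsertRowid,
  };
}

// Usage: const users = makeUserRepo(writeDb); const id = users.create('ada');
```

The migration then amounts to writing a second `makeUserRepo` against the Postgres driver and changing one wiring line.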


What to do with this

Start with SQLite + WAL + Litestream for any read-heavy mobile backend under 1M users. You’ll save $500-2,000/year in infrastructure costs and cut out real operational headaches.

Apply all seven PRAGMAs on every connection. Default SQLite is tuned for embedded devices. Benchmark before and after; expect a 5-10x improvement on reads.

Enforce single-writer discipline architecturally, not by hoping for the best. One write connection behind a mutex, a pool of read connections. This is the constraint that makes everything else work. Violate it and you’ll get SQLITE_BUSY errors at 2 AM.

I keep coming back to SQLite for early-stage projects because each layer of infrastructure you don’t run is a layer you don’t debug, don’t pay for, and don’t wake up to fix. Ship the single binary. You’ll know when you’ve outgrown it.

