
SQLite on the server: the single-node architecture handling 100K req/s

Krystian Wiewiór · 6 min read


Tags: backend, architecture, startup, saas, cloud


TL;DR

SQLite in WAL mode on a single Linux node handles the vast majority of indie SaaS workloads. 100K+ reads/second. Thousands of writes/second. Pair it with Litestream for continuous S3 replication, tune your connection handling in Ktor or Spring Boot to respect single-writer semantics, and you eliminate an entire class of operational complexity. This post covers when SQLite works, when it doesn’t, and how to build the escape hatch before you need it.


Why reconsider SQLite now

Every startup I advise starts with the same question: “Which database?” The default answer has been PostgreSQL for a decade. But most teams get this wrong. They optimize for scale they don’t have while paying the operational cost today.

A managed PostgreSQL instance runs $50-200/month minimum. It needs connection pooling, migration tooling, backup configuration, and monitoring. For a solo founder or small team shipping an MVP, that’s cognitive load that directly competes with building product. Good infrastructure engineers are hard to find right now, which makes this trade-off even sharper.

SQLite eliminates the network hop, the connection pool, and the separate backup system. Your database is a file. Here’s the architecture that makes this production-ready.

WAL mode tuning on Linux

SQLite’s Write-Ahead Logging mode is mandatory for server use. Without it, every write locks out all readers for its duration (and active readers block the writer). With it, readers proceed concurrently while a write is in progress.

Key PRAGMA settings for a server deployment on ext4 or btrfs:

PRAGMA journal_mode = WAL;
PRAGMA busy_timeout = 5000;
PRAGMA synchronous = NORMAL;
PRAGMA cache_size = -64000;  -- 64MB
PRAGMA foreign_keys = ON;
PRAGMA wal_autocheckpoint = 1000;
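In a JVM app, these settings have to be applied per connection — everything except journal_mode (which persists in the database file) resets whenever a new connection opens. A minimal sketch, assuming the xerial sqlite-jdbc driver; the function name and file path are illustrative:

```kotlin
import java.sql.Connection
import java.sql.DriverManager

// Per-connection PRAGMAs: journal_mode persists on disk, but it is cheap
// to reassert; the rest reset on every new connection.
val pragmas = listOf(
    "PRAGMA journal_mode = WAL",
    "PRAGMA busy_timeout = 5000",
    "PRAGMA synchronous = NORMAL",
    "PRAGMA cache_size = -64000",      // negative value = size in KiB, so 64 MB
    "PRAGMA foreign_keys = ON",
    "PRAGMA wal_autocheckpoint = 1000"
)

fun openTuned(path: String): Connection =
    DriverManager.getConnection("jdbc:sqlite:$path").also { conn ->
        conn.createStatement().use { st -> pragmas.forEach(st::execute) }
    }
```

If you hand connections out through a pool, run the same statements in the pool's connection-init hook instead.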

synchronous = NORMAL instead of FULL is the tuning decision that matters most. In WAL mode on ext4, NORMAL keeps the database consistent through application crashes; the residual risk in a power failure or OS-level crash is losing the most recent transactions, not corruption (in practice, I’ve never seen it bite anyone). On btrfs, copy-on-write semantics add another safety layer, but watch for write amplification. If write throughput matters, disable copy-on-write for the database directory (chattr +C on the directory while it’s still empty, or mount with nodatacow).

Litestream: continuous replication to S3

Litestream streams WAL changes to S3-compatible storage continuously, typically with sub-second replication lag. This gives you point-in-time recovery without cron-based sqlite3 .backup scripts.

A minimal Litestream config:

dbs:
  - path: /data/app.db
    replicas:
      - type: s3
        bucket: my-saas-backups
        path: replica
        retention: 72h

Recovery is a single command: litestream restore -o /data/app.db s3://my-saas-backups/replica. This restore-and-boot pattern means your entire disaster recovery plan fits in a Dockerfile. That’s not a simplification I’m making for the blog post. It literally fits in a Dockerfile.
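To make that concrete, here is a hedged sketch of the restore-and-boot pattern — image tags, the binary path inside the litestream image, and the app jar name are assumptions, not canonical:

```dockerfile
# Sketch: restore the DB from S3 if the volume is empty, then run the app
# under `litestream replicate -exec` so replication lives and dies with it.
FROM litestream/litestream:0.3.13 AS litestream

FROM eclipse-temurin:21-jre
COPY --from=litestream /usr/local/bin/litestream /usr/local/bin/litestream
COPY litestream.yml /etc/litestream.yml
COPY app.jar /app/app.jar
CMD litestream restore -if-db-not-exists -o /data/app.db s3://my-saas-backups/replica \
    && exec litestream replicate -exec "java -jar /app/app.jar"
```

The -if-db-not-exists flag makes the restore a no-op on ordinary restarts, so the same image handles first boot, redeploys, and disaster recovery.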

Single-writer concurrency: Ktor and Spring Boot

SQLite allows exactly one writer at a time. This is the constraint you design around.

| Framework | Strategy | Implementation |
|---|---|---|
| Ktor | Single Connection object with mutex | Use kotlinx.coroutines.sync.Mutex wrapping all write transactions |
| Spring Boot | HikariCP with maximumPoolSize=1 for writes | Separate read pool (size 4-8) + write pool (size 1) |
| Any framework | Serialize writes at application layer | Use a dedicated write dispatcher/queue |
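For the Spring Boot row, the two-pool setup might look like this — a sketch assuming HikariCP and sqlite-jdbc, with illustrative names and pool sizes; routing reads and writes to the right pool is up to your DAO layer:

```kotlin
import com.zaxxer.hikari.HikariConfig
import com.zaxxer.hikari.HikariDataSource

// Two pools against the same SQLite file: many readers, exactly one writer.
fun sqlitePool(poolSize: Int): HikariDataSource =
    HikariDataSource(HikariConfig().apply {
        jdbcUrl = "jdbc:sqlite:/data/app.db"
        maximumPoolSize = poolSize
        connectionInitSql = "PRAGMA busy_timeout = 5000"
    })

val readPool = sqlitePool(6)   // concurrent readers are fine in WAL mode
val writePool = sqlitePool(1)  // a pool of one enforces single-writer at the app layer
```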

In Ktor with a coroutine-based approach:

val writeMutex = Mutex()

suspend fun <T> executeWrite(block: (Connection) -> T): T =
    writeMutex.withLock {
        // JDBC is blocking: keep it off the request coroutines while holding the lock
        withContext(Dispatchers.IO) {
            dataSource.connection.use { block(it) }
        }
    }

The write mutex adds microseconds of overhead. Skipping the network round-trip to PostgreSQL saves milliseconds. That math doesn’t require a benchmark to settle.

Benchmarks: SQLite vs PostgreSQL for SaaS patterns

Typical SaaS workloads are 80-95% reads. Here’s where things stand on a standard 4-core VPS:

| Metric | SQLite (WAL) | PostgreSQL (local) | PostgreSQL (managed) |
|---|---|---|---|
| Simple reads/sec | ~100K+ | ~50K+ | ~20-40K (network bound) |
| Writes/sec (sequential) | ~5K-10K | ~10K-30K | ~5-15K |
| Concurrent writers | 1 | Hundreds | Hundreds |
| P99 read latency | <0.1ms | ~0.5ms | 1-5ms |
| Operational cost | $0 | $0 (self-managed) | $50-200/mo |
| Backup complexity | Litestream (configure once) | pg_dump / WAL-E | Provider-managed |

SQLite wins on reads because there’s no protocol overhead. It’s a function call into the same process. PostgreSQL wins on concurrent write throughput, which matters at scale but rarely at launch.

When SQLite stops working

Be honest about the boundaries.

Multiple application servers. SQLite lives on one disk. The moment you need horizontal scaling, you need a networked database. Full stop.

Write-heavy workloads exceeding ~5K writes/sec. The single-writer lock becomes your bottleneck, and no amount of application-level cleverness fixes that.

Complex reporting alongside OLTP. Long-running analytical reads keep WAL checkpoints from completing, so the WAL file grows and write latency degrades. If your product needs both, you’ll feel the pain quickly.

The migration escape hatch is straightforward: keep your SQL standard-compliant from day one. Avoid SQLite-specific functions. Use an ORM like Exposed (Kotlin) or JDBI that abstracts the connection layer. When the day comes, and for many indie SaaS products it never does, swapping to PostgreSQL is a connection string change plus a data migration script.
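With an ORM in place, the escape hatch really is one line. A hedged sketch using Exposed — the DATABASE_URL convention and file path here are illustrative choices, not anything Exposed prescribes:

```kotlin
import org.jetbrains.exposed.sql.Database

// The only database-specific line in the codebase. Set DATABASE_URL to a
// Postgres URL on migration day; everything written against Exposed is untouched.
fun jdbcUrl(env: Map<String, String> = System.getenv()): String =
    env["DATABASE_URL"] ?: "jdbc:sqlite:/data/app.db"

val db = Database.connect(
    url = jdbcUrl(),
    driver = if (jdbcUrl().startsWith("jdbc:sqlite")) "org.sqlite.JDBC"
             else "org.postgresql.Driver"
)
```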

As someone who spends long hours at a desk architecting these systems (HealthyDesk keeps me from fusing with my chair by reminding me to actually move between design sessions), I can tell you: the simplest architecture that meets your requirements is the right starting point. Not the most future-proof one. The simplest one.

What to actually do

Start with SQLite in WAL mode and Litestream for any new indie SaaS project. You eliminate operational overhead, reduce database costs to zero, and get sub-millisecond read latency out of the box. Build the PostgreSQL escape hatch by keeping your SQL standard-compliant, but don’t build PostgreSQL support you don’t need yet.

Serialize writes explicitly at the application layer. Whether you use a coroutine mutex in Ktor or a single-connection write pool in Spring Boot, make the single-writer constraint visible in your code. Don’t rely on SQLite’s busy timeout to silently handle contention for you. That’s a recipe for mysterious production latency spikes.
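An alternative to the mutex that makes the queue itself visible: a single-lane dispatcher. A sketch using kotlinx.coroutines (the limitedParallelism API is marked experimental as of coroutines 1.6+):

```kotlin
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.ExperimentalCoroutinesApi
import kotlinx.coroutines.withContext

// A dispatcher that admits one task at a time: every write funnels through it,
// so contention shows up as observable queueing rather than silent busy-waits.
@OptIn(ExperimentalCoroutinesApi::class)
val writeDispatcher = Dispatchers.IO.limitedParallelism(1)

suspend fun <T> write(block: () -> T): T = withContext(writeDispatcher) { block() }
```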

Set your migration trigger at concrete numbers, not hypothetical ones. Move to PostgreSQL when you need multiple application servers, when write throughput consistently exceeds 5K/sec, or when analytical queries start degrading your user-facing performance. Until you hit those thresholds, you’re just paying for complexity you don’t need.

