Connection pooling for serverless mobile backends

Krystian Wiewiór · 5 min read

Tags: backend, architecture, cloud, mobile, api

TL;DR

Serverless functions and PostgreSQL are a notoriously bad pairing without connection pooling. I benchmarked PgBouncer (transaction mode), Supabase’s built-in pooler (Supavisor), and Neon’s connection proxy under simulated mobile burst traffic: 500 concurrent cold-start connections hitting a single Postgres instance. PgBouncer in transaction mode still wins on raw p99 latency. Neon’s proxy handles cold starts most gracefully. Supabase lands in the middle on latency but is the lowest-effort option if you’re already on the platform. Your pick depends on what hurts most: latency, ops burden, or cold-start failures.

The problem: mobile traffic is bursty by nature

Most teams get serverless database connections wrong the same way: they benchmark steady-state throughput and call it a day. Mobile backends don’t operate in steady state.

Think about real burst patterns. A push notification goes out to 200K users — 15% open within 90 seconds. A health or productivity app like HealthyDesk sends break reminders to developers throughout the day, and those reminders cluster around common schedules, generating synchronized API hits. Morning commute spikes hammer your backend between 7:45 and 8:15 AM local time across each timezone.

Each serverless function invocation spins up, opens a connection to Postgres, runs a query, and tears down. Without pooling, a 500-user burst means 500 simultaneous pg_connect() calls against a database with a default max_connections of 100.

The result? FATAL: too many connections errors, cascading retries, and angry users.
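
For illustration, the anti-pattern looks something like this in TypeScript with node-postgres (the handler shape and query are stand-ins, not the actual benchmark code):

// Anti-pattern: open and tear down a Postgres connection on every invocation.
import { Client } from 'pg';

export default async function handler(req: Request): Promise<Response> {
  // A 500-user burst means 500 of these handshakes hitting Postgres at once
  const client = new Client({ connectionString: process.env.DATABASE_URL });
  await client.connect(); // fresh TCP + TLS + auth handshake, every time
  try {
    const { rows } = await client.query('SELECT 1');
    return Response.json(rows);
  } finally {
    await client.end(); // connection discarded, nothing reused
  }
}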

The contenders

Feature                    | PgBouncer (Transaction)         | Supabase Pooler (Supavisor) | Neon Proxy
Architecture               | Standalone proxy, self-hosted   | Managed Elixir-based pooler | Built into Neon’s serverless driver
Pooling modes              | Session, transaction, statement | Transaction (default)       | Transparent, connection-level
Max downstream connections | Configurable (typically 10K+)   | ~1,500 per project (paid)   | Effectively unlimited
Cold-start awareness       | None (you manage instances)     | Warm pooler, always-on      | Proxy stays warm; compute cold-starts separately
Prepared statements        | Not in transaction mode         | Supported via named pooler  | Supported natively
Operational burden         | High (you run it)               | Zero                        | Zero

Benchmark setup

I ran these tests on Cloud Run (0 min instances, forced cold starts) hitting a 4-vCPU Postgres 15 instance on GCP. The workload simulated a mobile notification burst: 500 concurrent functions, each executing a simple indexed SELECT, single row return.
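
The post doesn’t include the load-generation harness, so here is a minimal TypeScript sketch of the burst shape it describes; TARGET_URL and BURST_SIZE are assumptions, not the author’s tooling:

// burst-test.ts -- fire N requests at once, then report latency percentiles.
const BURST_SIZE = 500;
const TARGET = process.env.TARGET_URL ?? 'http://localhost:8080/query';

async function timedRequest(): Promise<number> {
  const start = performance.now();
  const res = await fetch(TARGET);
  await res.text(); // drain the body so timing covers the full response
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  return performance.now() - start;
}

async function main() {
  // Launch all requests simultaneously to mimic a notification burst
  const results = await Promise.allSettled(
    Array.from({ length: BURST_SIZE }, () => timedRequest()),
  );
  const ok = results
    .filter((r): r is PromiseFulfilledResult<number> => r.status === 'fulfilled')
    .map((r) => r.value)
    .sort((a, b) => a - b);
  const pct = (p: number) => ok[Math.floor((ok.length - 1) * p)].toFixed(0);
  console.log(`errors: ${results.length - ok.length}`);
  console.log(`p50: ${pct(0.5)} ms, p95: ${pct(0.95)} ms, p99: ${pct(0.99)} ms`);
}

main();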

p50 / p95 / p99 latency (ms) — 500 concurrent cold-start connections

Pooler          | p50 (ms) | p95 (ms) | p99 (ms) | Connection errors
No pooler       | 145      | 890      | timeout  | 38%
PgBouncer (txn) | 23       | 67       | 112      | 0%
Supabase Pooler | 31       | 89       | 158      | 0.4%
Neon Proxy      | 38       | 78       | 134      | 0%

No surprises at the top. PgBouncer dominates on raw latency because it’s a lightweight C process sitting millimeters from Postgres with zero abstraction overhead. The more interesting comparison is between Neon and Supabase: Neon’s proxy produced zero errors with a tighter p95-to-p99 gap, which suggests better queuing behavior under pressure. Supabase’s 0.4% error rate sounds small until you multiply it by a few thousand daily bursts.

Where each solution wins

PgBouncer: you get what you pay for (in ops time)

In my experience, PgBouncer in transaction mode is still the fastest option when you control your infrastructure. It adds roughly 1-2ms of overhead per query. But you’re running another stateful service that needs monitoring, failover, and configuration management. If you have dedicated platform engineers, fine. If you’re a three-person startup shipping a mobile app, this is a liability you don’t need.

Supabase pooler: swap your connection string and move on

Supabase’s Supavisor-based pooler requires zero configuration. You swap your connection string and it works. That 0.4% error rate under extreme burst load is worth knowing about, but under normal production traffic (sub-200 concurrent), I measured zero errors across 48-hour test runs. If you’re already on Supabase, just use it.
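
The swap itself is just the host and port in your connection string. A hypothetical before/after, with project ref, region, and password as placeholders:

# Direct connection -- each client occupies a real Postgres backend
DATABASE_URL=postgres://postgres:[password]@db.[project-ref].supabase.co:5432/postgres

# Through Supavisor in transaction mode (note port 6543)
DATABASE_URL=postgres://postgres.[project-ref]:[password]@aws-0-[region].pooler.supabase.com:6543/postgres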

Neon proxy: purpose-built for serverless cold starts

Neon’s architecture separates compute from storage, and their proxy was designed specifically for the serverless case. The proxy stays warm even when compute scales to zero. Your first query after a cold start pays a compute wake penalty (~300-500ms) but never a connection establishment penalty. For edge workers on Cloudflare or Vercel, where every function is effectively a cold start, this is a real advantage.

// Neon serverless driver -- no connection management needed
import { neon } from '@neondatabase/serverless';

const sql = neon(process.env.DATABASE_URL!);

export default async function handler(req: Request) {
  // Pull the user id from the query string, e.g. /user?id=123
  const userId = new URL(req.url).searchParams.get('id');
  // Each invocation gets a pooled connection transparently
  const rows = await sql`SELECT * FROM users WHERE id = ${userId}`;
  return Response.json(rows[0]);
}

So which one?

If you self-host and have platform engineering capacity, PgBouncer in transaction mode gives you the lowest latency and most control. Set default_pool_size to your Postgres max_connections / number_of_pools and monitor cl_waiting metrics.
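
A minimal pgbouncer.ini along those lines might look like the sketch below; the database name, host, and sizes are illustrative, not prescriptive:

[databases]
appdb = host=10.0.0.5 port=5432 dbname=appdb

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
pool_mode = transaction
; leave headroom below max_connections = 100 for admin sessions
default_pool_size = 90
max_client_conn = 10000
; run SHOW POOLS and watch cl_waiting to spot queuing under bursts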

If you’re on Supabase and want to ship, use the built-in pooler via port 6543 and stop thinking about it. It handles real-world mobile burst patterns without configuration.

If you’re deploying to edge functions or need aggressive scale-to-zero, Neon’s serverless driver eliminates connection pooling as a concern entirely. Accept the compute cold-start cost and optimize with their fetchConnectionCache option.
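
Enabling that is a one-liner; a sketch assuming the neonConfig export from @neondatabase/serverless (newer driver versions reportedly enable this cache by default):

import { neonConfig } from '@neondatabase/serverless';

// Reuse the driver's underlying connection across queries in the same isolate
neonConfig.fetchConnectionCache = true;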

The right pooling strategy is the one that matches your team’s operational capacity. And please — benchmark with your actual traffic patterns. Synthetic steady-state tests will lie to you.

