Container image caching in GitHub Actions: 12 min to 90 sec
Meta description: BuildKit cache mounts, registry-backed layers, and multi-stage builds cut our Docker build times from 12 minutes to 90 seconds in GitHub Actions.
Tags: devops, cicd, docker, kubernetes, cloud
TL;DR
Ephemeral CI runners destroy your Docker layer cache on every run, turning a 30-second local build into a 12-minute ordeal. I fixed this with three changes: BuildKit cache mounts for package manager directories, registry-backed layer caching with --cache-to/--cache-from, and multi-stage builds that separate dependency layers from source code. This brought our monorepo Docker builds from 12 minutes to 90 seconds without blowing past the 10GB cache eviction limit.
The problem: ephemeral runners kill your cache
Most teams optimize their Dockerfiles for local builds and assume CI will behave the same way. It won’t. GitHub Actions runners are ephemeral. Every job starts with a cold Docker daemon. No layers, no build cache, nothing. That RUN npm install layer you cached locally? Rebuilt from scratch. Every single time.
I’ve worked with JVM, Node, and Python stacks, and Docker builds are consistently the worst time sink in CI pipelines. Here’s what ours looked like before and after:
| Build phase | Without caching | With full strategy |
|---|---|---|
| Base image pull | ~45s | ~5s (cached) |
| Dependency install | ~6min | ~15s (cache mount hit) |
| Compilation/build | ~4min | ~50s (layer cache hit) |
| Final image assembly | ~1min | ~20s |
| Total | ~12min | ~90s |
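One prerequisite before any of the strategies below work: the default `docker` driver on a runner can't export cache to external backends, so the workflow needs a BuildKit builder first. In GitHub Actions that's a single step (the `@v3` pin reflects the current major release and may lag):

```yaml
# Create a BuildKit (docker-container) builder on the runner.
# docker/build-push-action picks it up automatically, enabling
# the cache-from/cache-to backends used throughout this post.
- uses: docker/setup-buildx-action@v3
```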
BuildKit cache mounts
BuildKit’s --mount=type=cache is wildly underused in CI. Unlike layer caching, cache mounts persist package manager directories across builds without baking them into image layers.
```dockerfile
# syntax=docker/dockerfile:1
FROM node:20-alpine AS builder
WORKDIR /app
COPY package.json package-lock.json ./
RUN --mount=type=cache,target=/root/.npm \
    npm ci --prefer-offline
COPY . .
RUN npm run build
```
For JVM projects, point the mount at /root/.gradle/caches or /root/.m2/repository instead. What makes this work: cache mounts survive layer invalidation. Even when package.json changes and the RUN layer rebuilds, the npm cache directory still has most of your packages warm, so the install pulls from disk instead of the network.
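The same pattern for a Gradle project looks like this. A minimal sketch, assuming a standard Gradle wrapper layout (the file names and base image are illustrative, not from our setup):

```dockerfile
# syntax=docker/dockerfile:1
FROM eclipse-temurin:21-jdk AS builder
WORKDIR /app
# Copy only the files Gradle needs to resolve dependencies,
# so this layer is invalidated only when the build config changes
COPY gradlew settings.gradle build.gradle ./
COPY gradle ./gradle
# The cache mount keeps downloaded artifacts warm across builds
# without baking them into the image layer
RUN --mount=type=cache,target=/root/.gradle/caches \
    ./gradlew --no-daemon dependencies
COPY src ./src
RUN --mount=type=cache,target=/root/.gradle/caches \
    ./gradlew --no-daemon build
```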
Registry-backed layer caching
The docker/build-push-action supports multiple cache backends. The registry backend stores cache layers alongside your images, so no local storage is needed.
```yaml
- uses: docker/build-push-action@v5
  with:
    context: .
    push: true
    tags: ghcr.io/org/app:latest
    cache-from: type=registry,ref=ghcr.io/org/app:buildcache
    cache-to: type=registry,ref=ghcr.io/org/app:buildcache,mode=max
```
mode=max matters. It exports cache for all stages, not just the final one. Without it, intermediate build stages lose their cache on the next run.
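If you want to reproduce this outside the action, the equivalent buildx invocation is below (same placeholder image refs as above):

```
docker buildx build \
  --push -t ghcr.io/org/app:latest \
  --cache-from type=registry,ref=ghcr.io/org/app:buildcache \
  --cache-to type=registry,ref=ghcr.io/org/app:buildcache,mode=max \
  .
```

Handy for debugging cache behavior locally before wiring it into CI.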
Why not the GitHub Actions cache backend?
GitHub’s built-in cache has a hard 10GB limit per repository. In a monorepo with multiple services, you’ll hit eviction within days. The registry backend doesn’t have that problem. Your container registry already handles large blobs.
| Cache backend | Size limit | Speed | Cross-branch | Monorepo-friendly |
|---|---|---|---|---|
| GitHub Actions cache | 10GB total | Fast (local) | Limited | No, shared eviction |
| Registry (type=registry) | Registry limit | Moderate (network) | Yes | Yes, per-image refs |
| Local (type=local) | Runner disk | Fastest | No | N/A, ephemeral |
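For completeness: if you're in a small single-service repo where 10GB is plenty, the built-in backend is just a different cache type on the same action. A sketch with the same placeholder image ref as above:

```yaml
- uses: docker/build-push-action@v5
  with:
    context: .
    push: true
    tags: ghcr.io/org/app:latest
    # type=gha stores cache via the GitHub Actions cache API;
    # subject to the 10GB-per-repo limit discussed above
    cache-from: type=gha
    cache-to: type=gha,mode=max
```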
Multi-stage dependency splitting
The goal is to structure your Dockerfile so dependency installation and source code compilation live in separate stages with distinct cache keys.
```dockerfile
# Stage 1: Dependencies (changes rarely)
FROM node:20-alpine AS deps
WORKDIR /app
COPY package.json package-lock.json ./
RUN --mount=type=cache,target=/root/.npm npm ci
# Stage 2: Build (changes on every commit)
FROM deps AS builder
COPY . .
RUN npm run build
# Stage 3: Runtime (minimal image)
FROM node:20-alpine AS runtime
WORKDIR /app
COPY --from=builder /app/dist ./dist
# Note: this copies devDependencies too; add a pruning stage
# with npm ci --omit=dev if runtime image size matters
COPY --from=deps /app/node_modules ./node_modules
CMD ["node", "dist/index.js"]
```
When only source code changes, Stage 1 is a full cache hit. The registry backend serves those cached layers in seconds instead of running a fresh npm ci.
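One caveat: `COPY . .` in the build stage invalidates on any file change, including ones that don't affect the build. A `.dockerignore` keeps the hit rate up (entries here are typical examples, not a complete list):

```
# .dockerignore
node_modules
.git
dist
*.md
```

Without this, editing a README is enough to bust the build stage cache.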
Monorepo pattern: shared base layers
For monorepos, extract shared dependencies into a common base image and reference it across services:
```yaml
# In your CI workflow
- name: Build base
  uses: docker/build-push-action@v5
  with:
    file: docker/base.Dockerfile
    push: true
    tags: ghcr.io/org/base:latest
    cache-from: type=registry,ref=ghcr.io/org/base:buildcache
    cache-to: type=registry,ref=ghcr.io/org/base:buildcache,mode=max
- name: Build service-a
  uses: docker/build-push-action@v5
  with:
    build-args: BASE_IMAGE=ghcr.io/org/base:latest
    cache-from: type=registry,ref=ghcr.io/org/service-a:buildcache
    cache-to: type=registry,ref=ghcr.io/org/service-a:buildcache,mode=max
```
One warm cache serves the entire monorepo. No duplicated dependency installation across services.
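On the service side, the shared base is wired in through a build arg. A minimal sketch, assuming the BASE_IMAGE arg and paths are named as in the workflow snippet (your layout will differ):

```dockerfile
# service-a/Dockerfile
# BASE_IMAGE is supplied by the workflow's build-args;
# the default lets local builds work without extra flags
ARG BASE_IMAGE=ghcr.io/org/base:latest
FROM ${BASE_IMAGE} AS builder
WORKDIR /app
COPY services/service-a .
RUN npm run build
```

Declaring the ARG before the first FROM is what makes the base image itself parameterizable.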
What to do first
Add BuildKit cache mounts. Seriously, do this today. Changing one RUN line to use --mount=type=cache for your package manager directory gives you immediate wins with no infrastructure changes.
After that, switch to registry-backed caching if you’re in a monorepo. The 10GB eviction limit on GitHub’s built-in cache will bite you eventually, and registry caching works across branches.
Finally, split your dependency and build stages as aggressively as you can. Dependency layers should only change when lockfiles change, not on every commit. The more granular your stages, the higher your cache hit rate.
No single one of these gets you from 12 minutes to 90 seconds. But stack all three and Docker stops being the thing you wait on.