MVP Factory

I Built an AI Content Pipeline That Publishes a Blog Post in 83 Seconds

Krystian Wiewiór · 5 min read

There’s a guy on LinkedIn selling a course about AI automation. Brain vault, n8n workflows, Claude Code templates. A free workshop, a 100-day AI plan, the whole package. He claims you can automate 90% of your tasks.

I don’t sell courses. I built the thing he’s teaching about. And it runs in production right now.

What it actually does

Every day, my system reads 84 RSS feeds. It picks the most relevant topic for my audience. Then three AI agents process it in sequence, each with a specific job.

The researcher reads the source material and pulls out key facts. That takes about 10 seconds. The blog writer takes those facts and writes a full article with structure, examples, and a clear point of view. That’s 43 seconds. Then the reviewer checks the draft for quality, accuracy, and tone. Another 30 seconds.

Total time from RSS feed to published blog post with quality review: 83 seconds.

These aren’t demo numbers. I pulled them from my production logs yesterday.

The stack

Here’s what’s NOT in my stack: n8n, Zapier, Make, LangChain, or any paid automation platform.

The entire system is Node.js. About 2000 lines of code across two services. The only external dependency that costs money is a Claude subscription, the same one I use for coding anyway.

The auto-publisher runs in a Docker container on Coolify. It handles scheduling, RSS parsing, the content pipeline, and publishing to multiple platforms: Dev.to, LinkedIn drafts, Twitter drafts, and my own blog via Git commits.

But here’s the part that took the most engineering: the auto-publisher can’t run Claude CLI directly. It’s in a container. Claude CLI needs to be installed on the host machine.

The proxy problem

This is where most people would reach for n8n or some workflow tool. “Just connect the API!” Sure, but I’m using Claude CLI with my subscription, not the API. The CLI is included in the subscription. The API costs per token.

So I built claude-proxy. It’s a tiny HTTP service that sits on the VPS, outside Docker, right next to the Claude CLI binary. Any container on the server can send it a prompt over HTTP and get text back.

The proxy handles authentication with bearer tokens, runs a sequential queue so Claude processes one request at a time, and tracks metrics for every call. It’s about 150 lines of code. One dependency: dotenv.

The architecture looks like this. The auto-publisher container sends an HTTP POST to the proxy. The proxy spawns Claude CLI, pipes the prompt through stdin, waits for the response, and sends it back as JSON. Simple.

Auto-publisher (Docker) --> HTTP --> claude-proxy (PM2) --> Claude CLI

15 agents, not just 3

The content pipeline uses 3 agents (researcher, writer, reviewer), but the full MVP Factory system behind my studio runs 15 specialized agents. Product owner, UX researcher, system architect, security auditor, code reviewer, DevOps engineer, and more. Each one has a defined role, specific instructions, and a scope of responsibility.

There are also 5 advisory personalities for brainstorming sessions. They argue with each other about product decisions before any code gets written.

None of this requires n8n. It’s just prompts, roles, and a clear pipeline order. The orchestration is a few JavaScript functions.
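Those "few JavaScript functions" can look roughly like this — the proxy URL, env names, and prompt wording are illustrative assumptions, not the production prompts:

```javascript
// Sketch of the orchestration: each agent is a role prompt sent to the
// proxy, and the pipeline is just sequential awaits. Requires Node 18+
// for the built-in fetch.
async function callProxy(prompt) {
  const res = await fetch(process.env.PROXY_URL || 'http://10.0.1.1:8787/prompt', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${process.env.PROXY_TOKEN}`,
    },
    body: JSON.stringify({ prompt }),
  });
  if (!res.ok) throw new Error(`proxy responded ${res.status}`);
  return (await res.json()).text;
}

// Researcher -> writer -> reviewer, each stage fed the previous output.
// `call` is injectable so the pipeline can be tested without a live proxy.
async function pipeline(sourceMaterial, call = callProxy) {
  const facts = await call(`You are a researcher. Extract the key facts.\n\n${sourceMaterial}`);
  const draft = await call(`You are a blog writer. Write a full article from these facts.\n\n${facts}`);
  const feedback = await call(`You are a reviewer. Check this draft for quality, accuracy, and tone.\n\n${draft}`);
  return { draft, feedback };
}
```

Adding an agent means adding one line with a new role prompt. That's the whole "orchestration framework".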

What the course sellers don’t show you

They show you a workflow diagram in n8n with 12 connected nodes. It looks impressive in a screenshot. But they don’t show you the Docker networking issues when your container can’t reach the host service. They don’t show you the iptables rules you need. They don’t show you what happens when Claude CLI refuses to run as root and you need to create a dedicated user.

I spent a full session debugging why my container couldn’t reach the proxy. The container was on a 10.0.1.x bridge network. The proxy was bound to a WireGuard VPN IP. Docker networks are isolated by design. You have to bind to 0.0.0.0, add firewall rules for internal subnets, and make sure the service is actually reachable from the container’s gateway IP.
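The fix, in rough shell terms — the subnet, gateway IP, and port here are examples; check your own `docker network inspect` output before copying anything:

```shell
# 1. Bind the proxy to all interfaces, not the VPN IP
#    (in the proxy: server.listen(8787, '0.0.0.0')).

# 2. Find the bridge subnet your container actually sits on.
docker network inspect bridge --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'

# 3. Allow that internal subnet through the host firewall
#    (iptables example; adjust subnet and port to what step 2 printed).
iptables -I INPUT -s 10.0.1.0/24 -p tcp --dport 8787 -j ACCEPT

# 4. Verify from inside the container: the host is reachable at the
#    bridge network's gateway IP.
curl -s http://10.0.1.1:8787/prompt
```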

That’s real engineering. Not a Notion template.

The numbers

Here are actual metrics from my production system.

Researcher agent: 9.8 seconds, 836 characters of structured research output.

Blog writer agent: 42.9 seconds, 6584 characters. That’s a full article with introduction, sections, examples, and conclusion.

Reviewer agent: 30.3 seconds, 2292 characters of feedback on structure, accuracy, and suggestions.

The proxy queue processes these sequentially. Peak memory usage stays flat because there’s only one Claude process at a time. The queue cap is 20 requests, which is way more than I need but prevents runaway scenarios.
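A sequential queue with a hard cap is only a handful of lines. This sketch keeps the cap of 20 from the article; everything else is illustrative:

```javascript
// One task at a time, with a hard cap so a stuck upstream can't queue
// requests forever. Tasks are async functions chained onto a promise.
class SequentialQueue {
  constructor(maxPending = 20) {
    this.maxPending = maxPending;
    this.pending = 0;
    this.tail = Promise.resolve(); // chain of in-flight work
  }

  // Enqueue a task; rejects immediately if the queue is full.
  push(task) {
    if (this.pending >= this.maxPending) {
      return Promise.reject(new Error('queue full'));
    }
    this.pending += 1;
    const run = this.tail.then(task).finally(() => { this.pending -= 1; });
    // Keep the chain alive even if a task fails.
    this.tail = run.catch(() => {});
    return run;
  }
}
```

Because only one task runs at a time, memory stays flat: there is never more than one Claude process alive.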

Why not just use the API?

Cost. The Claude API charges per token. My CLI subscription is a flat monthly fee. For the volume of content I generate, the CLI route is significantly cheaper. The proxy adds maybe 50ms of overhead per request. That’s nothing compared to the 10-40 seconds Claude takes to think.

The tradeoff is that the CLI is single-threaded on my server. One request at a time. But for a content pipeline that runs twice a day, that’s perfectly fine. I don’t need parallel processing. I need reliability and low cost.

What you actually need to build this

A VPS with Claude CLI installed. Node.js. About a weekend of coding if you know what you’re doing. No courses, no templates, no 100-day plans.

The hardest parts aren’t the AI. They’re the boring infrastructure things. Docker networking, process management with PM2, systemd socket configuration, SSH hardening, firewall rules. The AI part is actually the simplest piece: send prompt, get text, publish text.

If you’re a developer thinking about automating content, skip the courses. Read the Docker docs, understand your network topology, and write a few hundred lines of Node.js. That’s honestly all there is to it.

