
Building an AI Blog Writer: From Topic to Published Post with n8n, Claude, and GitHub

Ryan Dahlberg
February 9, 2026 · 15 min read

From Topic to Published Post in One Conversation

I paste a topic. An AI agent researches my 216 existing blog posts, finds related content, generates a complete draft with production-ready frontmatter, and commits it directly to GitHub on my approval. Cloudflare Pages auto-deploys. The entire post is live in minutes.

This is Component 1 of a five-part autonomous blog publishing system. When complete, I’ll paste a topic and get a fully-formed post with an animated SVG hero image, OG social card, optimized publish schedule, and deployment verification — all orchestrated by an n8n AI agent.

But first: the foundation. The system that turns ideas into structured, validated markdown committed to version control.


The Vision

I write a lot. 216+ blog posts covering everything from MCP servers to zero-trust security to infrastructure as fabric. Every post follows strict rules:

  • Zod-validated frontmatter with 8 category enums
  • 150-160 character descriptions (not 149, not 161)
  • Animated SVG hero images with specific paths
  • Author metadata, featured flags, tag arrays
  • Slug generation that matches filename patterns
  • First-person technical voice
  • No title repetition in the body
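
Those rules are enforced by the Astro content collection's Zod schema. A minimal sketch of what that looks like (field names are representative; the real config.ts has more):

// Representative sketch of the content collection schema, not the actual config.ts
import { defineCollection, z } from 'astro:content';

const posts = defineCollection({
  schema: z.object({
    title: z.string(),
    description: z.string().min(150).max(160),         // 150-160 characters, enforced
    category: z.enum(['Engineering' /* ...plus the other 7 category enums */]),
    heroImage: z.string().regex(/^\/images\/posts\/.+-hero\.svg$/),
    author: z.string(),
    avatar: z.string(),
    featured: z.boolean().default(false),
    tags: z.array(z.string()),
    date: z.date(),
  }),
});

export const collections = { posts };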

Writing is the creative part. Formatting, validating, and publishing? That’s automation territory.

The goal: A chat interface where I paste a topic → system researches existing posts → generates a complete draft → I review → system commits to GitHub → Cloudflare deploys automatically.

This post covers the draft generation and GitHub commit components. Future posts will add hero image generation, OG image creation, publish scheduling, and deployment verification.


The Existing Foundation

Before building this, I already had:

RAG Chatbot

  • 2,911 vectors in Qdrant Cloud from 216 blog posts
  • Voyage AI voyage-3 embeddings (1024 dimensions)
  • Claude 3.5 Haiku for conversational queries
  • Deployed at http://localhost:5678/webhook/blog-rag-chat/chat

Blog Infrastructure

  • Astro build with Zod-validated content collections
  • GitHub repo at ry-ops/blog
  • Cloudflare Pages auto-deployment on push to main
  • Docker Compose n8n-fabric stack (n8n + PostgreSQL + Redis + local Qdrant)

What was missing: A way to go from “write about X” to “production-ready post committed to GitHub.”


The Architecture

Eight nodes in a single n8n workflow:

graph TB
    ChatTrigger[1. Blog Writer Chat<br/>chatTrigger<br/>public, hosted]

    Agent[2. Blog Writer Agent<br/>agent<br/>research → generate → review → save]

    Haiku[3. Claude Haiku<br/>lmChatAnthropic<br/>Agent orchestration]

    Memory[4. Conversation Memory<br/>memoryBufferWindow<br/>20 messages]

    Search[5. search_blog_posts<br/>vectorStoreQdrant<br/>retrieve-as-tool]

    Embeddings[6. Voyage AI Embeddings<br/>embeddingsOpenAi<br/>voyage-3, 1024d]

    GenerateDraft[7. generate_draft<br/>toolCode<br/>Anthropic API via fetch]

    SaveGitHub[8. save_to_github<br/>toolCode<br/>GitHub Contents API]

    ChatTrigger -->|main| Agent
    Haiku -->|ai_languageModel| Agent
    Memory -->|ai_memory| Agent
    Search -->|ai_tool| Agent
    Embeddings -->|ai_embedding| Search
    GenerateDraft -->|ai_tool| Agent
    SaveGitHub -->|ai_tool| Agent

The flow:

  1. Chat Trigger receives the topic/URL/content
  2. Agent orchestrates the entire workflow via system prompt
  3. Claude Haiku powers the agent’s reasoning
  4. Conversation Memory maintains 20-message context
  5. search_blog_posts queries Qdrant to find related/duplicate posts
  6. Voyage AI Embeddings creates query vectors for search
  7. generate_draft calls Anthropic API to create the markdown
  8. save_to_github commits directly via GitHub Contents API

Why Not Just Use Claude?

You could paste a topic into Claude Desktop and ask it to write a blog post. But you’d get:

  • Generic markdown with no frontmatter validation
  • Made-up categories (Claude doesn’t know your schema)
  • Descriptions that are 120 or 200 characters (not the required 150-160)
  • Wrong hero image paths
  • No research into what you’ve already written
  • Manual copying to your blog repo
  • Manual commit + push
  • No memory of previous drafts

This system:

  • Researches first — queries your existing posts to avoid duplication and identify gaps
  • Validates everything — Zod schema, category enums, description length, slug format
  • Knows your voice — system prompt baked with writing style samples
  • Commits atomically — draft → review → GitHub via API → auto-deploy
  • Maintains context — 20-message memory for “make it more technical” or “add code examples”

The Key Insight: Two Claude Instances

Here’s what makes this work:

Claude instance #1 (the Agent) — Claude 3.5 Haiku orchestrating the workflow:

  • Decides when to search blog posts
  • Decides when to generate a draft
  • Decides when to save to GitHub
  • Handles conversational back-and-forth with me

Claude instance #2 (the Draft Generator) — Called via Anthropic API from the generate_draft Code Tool:

  • Receives topic + research summary + optional feedback
  • Gets a massive system prompt with my writing style, schema rules, examples
  • Generates production-ready markdown in one shot
  • No conversational overhead — single input → single output

Why separate them? The agent needs to be conversational and decisive. The draft generator needs to be deterministic and comprehensive. Mixing those concerns in one prompt creates flaky results.

The generate_draft tool acts as a bridge: the agent calls it, the tool hits the Anthropic API with a specialized prompt, and returns formatted markdown to the agent.


Node Deep Dive

Node 5: search_blog_posts (Qdrant Retrieve)

Same pattern as the working RAG chatbot:

{
  "type": "vectorStoreQdrant",
  "mode": "retrieve",
  "collection": "ryops_blog",
  "topK": 10,
  "toolName": "search_blog_posts",
  "description": "Search existing blog posts to find related content or check for duplicates. Use this BEFORE generating any draft."
}

The agent always searches first. This prevents:

  • Duplicate posts on the same topic
  • Missing valuable cross-references
  • Ignoring existing deep-dives that should be linked

Example:

Me: “Write about building MCP servers”
Agent: searches blog → finds 6 existing MCP server posts → generates draft with references to all 6

Node 7: generate_draft (Code Tool calling Anthropic API)

This is where the magic happens. When the agent calls generate_draft(topic, research_summary, feedback), the Code Tool:

  1. Builds the system prompt:

    • Ryan’s writing voice samples
    • Complete Zod schema from config.ts
    • All 8 category enums
    • Frontmatter template from POST_TEMPLATE.md
    • Hero path rules: /images/posts/{date}-{slug}-hero.svg
    • Slug generation rules: lowercase, hyphens, no special chars
    • Description length: 150-160 characters
    • Author: Ryan Dahlberg, /ryan-dahlberg-avatar.svg
    • Featured flag logic
    • No title repetition in body
  2. Calls Anthropic API via fetch():

const response = await fetch('https://api.anthropic.com/v1/messages', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'x-api-key': $env.ANTHROPIC_API_KEY,
    'anthropic-version': '2023-06-01'
  },
  body: JSON.stringify({
    model: 'claude-sonnet-4-5-20250929',
    max_tokens: 4096,
    system: systemPrompt,
    messages: [{
      role: 'user',
      content: userPrompt
    }]
  })
});
  3. Parses and validates the response:

    • Extracts frontmatter and body
    • Generates slug from title
    • Validates against Zod schema
    • Fixes common formatting issues
    • Ensures hero paths match generated slug
  4. Stores draft in workflow static data for the save tool

  5. Returns formatted preview to agent:

Draft generated successfully!

Title: Building MCP Servers with TypeScript
Category: Engineering
Description: How to build Model Context Protocol servers with TypeScript, from setup to deployment. Covers protocol implementation, tools, and integration patterns. (151 chars)
Slug: building-mcp-servers-with-typescript
Hero: /images/posts/2026-02-09-building-mcp-servers-with-typescript-hero.svg

Preview:
---
[frontmatter shown here]
---

[first 500 chars of body shown here]...

Reply "save" to commit to GitHub, or provide feedback for revisions.

Node 8: save_to_github (Code Tool calling GitHub API)

When I reply “save” or “looks good”, the agent calls save_to_github():

  1. Retrieves the draft from workflow static data
  2. Base64 encodes the markdown content
  3. PUTs to GitHub Contents API:
const filename = `${date}-${slug}.md`;
const path = `src/content/posts/${filename}`;

const response = await fetch(`https://api.github.com/repos/ry-ops/blog/contents/${path}`, {
  method: 'PUT',
  headers: {
    'Authorization': `Bearer ${$env.GITHUB_TOKEN}`,
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    message: `feat(blog): add post - ${title}\n\nGenerated via AI Blog Writer Agent\n\nCo-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>`,
    content: base64Content,
    branch: 'main'
  })
});
  4. Returns confirmation with links:
✓ Committed to GitHub!

Commit: https://github.com/ry-ops/blog/commit/abc123...
File: https://github.com/ry-ops/blog/blob/main/src/content/posts/2026-02-09-building-mcp-servers-with-typescript.md

Cloudflare Pages will auto-deploy shortly.
Expected live URL: https://ry-ops.dev/posts/2026-02-09-building-mcp-servers-with-typescript/
  5. Clears pending draft from workflow static data
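
One wrinkle worth noting: the Contents API treats this PUT as a file creation. If a post already exists at that path, GitHub rejects the request with a 422 unless the existing file's sha is included. A defensive sketch (the workflow may handle this differently; path and base64Content are the values from the snippet above, and commitMessage stands in for the commit message string):

// Check whether the file already exists; if it does, include its sha so the PUT becomes an update
const existing = await fetch(
  `https://api.github.com/repos/ry-ops/blog/contents/${path}`,
  { headers: { 'Authorization': `Bearer ${$env.GITHUB_TOKEN}` } }
);

const payload = {
  message: commitMessage,
  content: base64Content,
  branch: 'main'
};

if (existing.ok) {
  const file = await existing.json();
  payload.sha = file.sha;  // required by the Contents API when overwriting an existing file
}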

The System Prompt Strategy

The agent’s system prompt is deceptively simple:

You are Ryan's AI blog writing assistant. Your workflow:

1. When given a topic, ALWAYS use search_blog_posts first
2. Report findings: related posts, duplicates, gaps
3. Call generate_draft with topic + research summary
4. Show preview to Ryan
5. If Ryan approves, call save_to_github
6. If Ryan requests changes, call generate_draft again with feedback

Never generate content yourself. Always use the generate_draft tool.
Be conversational. Acknowledge research findings.

Why it works:

  • The agent doesn’t try to write — it orchestrates
  • All content generation happens in the specialized generate_draft tool
  • The agent handles conversational flow, tool calling, and decision-making
  • Clear workflow: search → generate → review → save

The generate_draft tool’s prompt is where the heavy lifting happens: 2,000+ tokens of schema rules, style samples, and validation logic.


Docker Compose Changes

Added two environment variables to n8n-fabric/docker-compose.yml:

services:
  n8n:
    environment:
      - NODE_FUNCTION_ALLOW_BUILTIN=fs,path
      - ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY}
      - GITHUB_TOKEN=${GITHUB_TOKEN}

ANTHROPIC_API_KEY — Same key as the existing Anthropic credential, needed for direct API calls in generate_draft

GITHUB_TOKEN — Personal Access Token with repo scope for ry-ops/blog, needed for save_to_github

These are loaded from .env file (not committed to git):

ANTHROPIC_API_KEY=sk-ant-...
GITHUB_TOKEN=ghp_...

One docker-compose restart n8n later, the workflow has API access.


Why GitHub Contents API Instead of Git Commands?

The blog posts live at /Users/ryandahlberg/Projects/blog/src/content/posts/ on my machine. The n8n container has them mounted read-only at /data/blog-posts/.

Option 1: Make the mount read-write, git commit inside container

  • Requires changing Docker mount from :ro to :rw
  • Git operations inside container (messy)
  • Potential file permission conflicts
  • Manual push needed

Option 2: GitHub Contents API

  • No mount changes needed
  • Atomic commit via API
  • Proper commit messages with co-authorship
  • Auto-triggers Cloudflare Pages
  • Works from anywhere (not just my machine)

The API approach is cleaner. The workflow creates commits directly on main, Cloudflare sees the push, and the site rebuilds automatically.


The Interaction Model

Session 1: New Post

Me: Write about building MCP servers with TypeScript

Agent: Let me search existing posts first...
[searches Qdrant]

I found 6 related posts:
- Building n8n-fabric MCP Server
- Checkmk MCP Server with Python
- MCP Protocol Overview
[...]

I'll generate a draft that references these and focuses on TypeScript-specific implementation patterns.
[calls generate_draft]

Draft generated! Here's the preview:
[shows frontmatter + excerpt]

Reply "save" to commit or provide feedback.

Me: save

Agent: [calls save_to_github]
✓ Committed to GitHub!
Cloudflare Pages will deploy shortly.

Session 2: Refinement

Me: The description is too technical, make it more accessible

Agent: [calls generate_draft with feedback: "less technical, more accessible description"]

Updated draft:
[shows new preview with friendlier description]

Me: perfect, save it

Agent: [saves to GitHub]
Done!

Session 3: Multi-turn Evolution

Me: Write about Kubernetes security

Agent: [searches, finds 3 related posts, generates draft]

Me: Add a section on network policies

Agent: [regenerates with that addition]

Me: Also mention OPA for policy enforcement

Agent: [regenerates again]

Me: Good, save it

Agent: [commits to GitHub]

The 20-message memory buffer makes this feel natural. I can iterate on drafts conversationally instead of re-specifying everything each time.
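
For reference, the memory node is just the stock window buffer with the window widened. Simplified, the config amounts to something like this (parameter name approximate):

{
  "type": "memoryBufferWindow",
  "contextWindowLength": 20
}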


Component 1 of 5

This is just the foundation. The complete system will have five components, each a separate Code Tool added to this workflow:

C1: Draft Generation & GitHub Commit ✓ (this post)

  • Research existing posts
  • Generate validated markdown
  • Commit to GitHub

C2: Animated SVG Hero Generation (next)

  • Analyze post content
  • Generate animated SVG per /Users/ryandahlberg/.claude/specs/svg-hero.md
  • Commit via GitHub API
  • Update post frontmatter with hero path

C3: OG Image Generation

  • SVG → JPG conversion at 1200x630
  • Sharp library (either in container or on host)
  • Upload to /public/images/posts/
  • Update post frontmatter with OG path

C4: Publish Schedule Optimization

  • Analyze historical post cadence
  • Suggest optimal publish date/time
  • Update frontmatter date field
  • Explain reasoning

C5: Deployment Verification

  • Poll Cloudflare Pages API for build status
  • Report build errors if any
  • Suggest fixes for common issues
  • Confirm live URL accessibility

Each component = one new Code Tool node + system prompt update telling the agent about the new capability.


Architecture Diagrams

Component Integration Flow

flowchart TD
    User([User: Paste Topic])

    User --> Chat[Chat Trigger]
    Chat --> Agent[Blog Writer Agent]

    Agent --> Search{search_blog_posts}
    Search --> Qdrant[(Qdrant Cloud<br/>2,911 vectors)]
    Search --> Research[Research Summary]

    Research --> Generate{generate_draft}
    Generate --> Anthropic[Anthropic API<br/>Claude Sonnet 4.5]
    Anthropic --> Draft[Validated Draft<br/>Zod schema checked]

    Draft --> Review{User Review}
    Review -->|approve| Save{save_to_github}
    Review -->|revise| Generate

    Save --> GitHub[GitHub Contents API<br/>Commit to main]
    GitHub --> Cloudflare[Cloudflare Pages<br/>Auto-deploy]
    Cloudflare --> Live([Live Post])

Draft Generation Pipeline

sequenceDiagram
    participant User
    participant Agent
    participant SearchTool as search_blog_posts
    participant Qdrant
    participant DraftTool as generate_draft
    participant Anthropic
    participant SaveTool as save_to_github
    participant GitHub

    User->>Agent: "Write about MCP servers"
    Agent->>SearchTool: search("MCP servers")
    SearchTool->>Qdrant: vector search
    Qdrant-->>SearchTool: 10 related posts
    SearchTool-->>Agent: research results

    Agent->>DraftTool: generate_draft(topic, research)
    DraftTool->>Anthropic: API call with system prompt
    Anthropic-->>DraftTool: markdown + frontmatter
    DraftTool->>DraftTool: validate Zod schema
    DraftTool->>DraftTool: generate slug
    DraftTool->>DraftTool: fix hero paths
    DraftTool-->>Agent: formatted preview

    Agent-->>User: show draft preview
    User->>Agent: "save"

    Agent->>SaveTool: save_to_github()
    SaveTool->>GitHub: PUT /repos/.../contents/...
    GitHub-->>SaveTool: commit SHA
    SaveTool-->>Agent: commit URL
    Agent-->>User: "✓ Committed! Deploying..."

System Prompt Strategy

graph TB
    subgraph "Agent System Prompt (Simple)"
        AgentRole[Role: Blog Writing Assistant]
        AgentFlow[Workflow: search → generate → review → save]
        AgentRules[Rules: Never generate content directly<br/>Always use tools<br/>Be conversational]
    end

    subgraph "generate_draft Tool Prompt (Complex)"
        Voice[Ryan's Writing Voice<br/>3 style samples]
        Schema[Complete Zod Schema<br/>8 category enums]
        Rules[Frontmatter Rules<br/>Description: 150-160 chars<br/>Hero paths: /images/posts/{date}-{slug}-hero.svg]
        Examples[Frontmatter Template<br/>from POST_TEMPLATE.md]
        Validation[Slug Generation<br/>Format Validation<br/>Error Correction]
    end

    AgentRole --> Tool[Agent calls generate_draft tool]
    AgentFlow --> Tool
    AgentRules --> Tool

    Tool --> Voice
    Tool --> Schema
    Tool --> Rules
    Tool --> Examples
    Tool --> Validation

    Validation --> Output[Production-Ready Markdown]

The Production Reality

What works beautifully:

  • Research before generating (no duplicate posts)
  • Schema validation (every draft passes Zod checks)
  • GitHub integration (atomic commits, auto-deploy)
  • Conversational refinement (20-message memory)

What required iteration:

  • Prompt engineering — Took ~8 iterations to get descriptions consistently 150-160 chars
  • Slug generation — Had to add explicit rules for handling special characters
  • Hero path consistency — Initially generated paths didn’t match the expected pattern
  • Category validation — Claude occasionally invents categories; explicit enum list fixed it
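
On top of the prompt fixes, the generate_draft tool double-checks the output before showing a preview. A sketch of the kind of validation involved (illustrative names, not the exact workflow code):

// Illustrative checks run after the Anthropic response is parsed
const issues = [];

if (frontmatter.description.length < 150 || frontmatter.description.length > 160) {
  issues.push(`Description is ${frontmatter.description.length} chars (must be 150-160)`);
}

if (!VALID_CATEGORIES.includes(frontmatter.category)) {
  issues.push(`Invalid category "${frontmatter.category}" (must be one of: ${VALID_CATEGORIES.join(', ')})`);
}

// Any issues get surfaced in the preview, or fed back in as revision feedback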

What surprised me:

  • The agent almost never needs to be told to search first — it just does it
  • Multi-turn refinements work better than trying to get it perfect on the first try
  • Separating the agent from the draft generator massively improved reliability
  • GitHub Contents API is simpler than I expected

Future Enhancements

Near-term (Components 2-5)

  • SVG hero generation — Text-free animated heroes following existing patterns
  • OG image creation — 1200x630 JPG social cards
  • Schedule optimization — Analyze post cadence, suggest publish times
  • Deployment verification — Poll Cloudflare, report build status

Medium-term

  • Multi-language support — Generate posts in multiple languages simultaneously
  • SEO optimization — Suggest internal links, optimize descriptions
  • Draft versioning — Store multiple drafts, allow comparison
  • Collaborative review — Share drafts via URL for external feedback

Long-term

  • Content planning — AI analyzes gaps in coverage, suggests topics
  • Series detection — Recognize multi-part series, maintain consistency
  • Performance analysis — Track which posts perform well, learn patterns
  • Auto-updating — Monitor industry news, suggest updates to existing posts

The Code

The complete workflow JSON is ~800 lines. Key excerpts:

generate_draft tool (simplified):

const systemPrompt = `You are generating blog posts for Ryan Dahlberg's technical blog.

CRITICAL RULES:
- Category MUST be one of: ${VALID_CATEGORIES.join(', ')}
- Description MUST be 150-160 characters (not 149, not 161)
- Hero path MUST be: /images/posts/{date}-{slug}-hero.svg
- Author name: Ryan Dahlberg
- Author avatar: /ryan-dahlberg-avatar.svg
- generated: true
- NO # title at start of body (title is in frontmatter)
- Write in first person, technical voice
- Reference research findings when relevant

ZOD SCHEMA:
${zodSchemaString}

RESEARCH FINDINGS:
${researchSummary}

Generate a complete blog post on: ${topic}
${feedback ? `\nREVISION FEEDBACK: ${feedback}` : ''}`;

const response = await fetch('https://api.anthropic.com/v1/messages', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'x-api-key': $env.ANTHROPIC_API_KEY,
    'anthropic-version': '2023-06-01'
  },
  body: JSON.stringify({
    model: 'claude-sonnet-4-5-20250929',
    max_tokens: 4096,
    system: systemPrompt,
    messages: [{ role: 'user', content: userPrompt }]
  })
});

const result = await response.json();
const markdown = result.content[0].text;

// Validate, generate slug, fix paths, store in workflow static data
// ...

return `Draft generated! [preview]`;

save_to_github tool (simplified):

const draft = $static.pendingDraft;
const base64Content = Buffer.from(draft.markdown).toString('base64');
const filename = `${draft.date}-${draft.slug}.md`;

const response = await fetch(
  `https://api.github.com/repos/ry-ops/blog/contents/src/content/posts/${filename}`,
  {
    method: 'PUT',
    headers: {
      'Authorization': `Bearer ${$env.GITHUB_TOKEN}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      message: `feat(blog): add post - ${draft.title}\n\nCo-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>`,
      content: base64Content,
      branch: 'main'
    })
  }
);

return `✓ Committed! [details]`;
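
Both snippets skip error handling for brevity. In practice it's worth surfacing API failures back to the agent so they show up in chat; a minimal pattern (a sketch, not the exact workflow code):

// Sketch: report GitHub API failures to the agent instead of returning a false success
if (!response.ok) {
  const error = await response.json().catch(() => ({}));
  return `✗ Commit failed (${response.status}): ${error.message || 'unknown error'}`;
}

const result = await response.json();
return `✓ Committed! ${result.commit?.html_url || ''}`;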

The Takeaway

This isn’t a prototype. It’s production infrastructure. Every post generated by this system:

  • Passes Zod validation
  • Has the correct frontmatter structure
  • Uses valid category enums
  • Has a 150-160 character description
  • Has proper hero image paths
  • Is committed atomically to GitHub
  • Auto-deploys to Cloudflare Pages

The magic isn’t in the complexity — it’s in the separation of concerns:

  • Agent orchestrates
  • Tools specialize
  • Each Code Tool does one thing well
  • GitHub API handles version control
  • Cloudflare handles deployment

I can now write a blog post by pasting a topic into a chat interface. The system researches, generates, validates, and publishes. I review and approve. That’s it.

And this is just Component 1. When all five components are live, I’ll paste a topic and get a complete post with an animated hero, social card, optimized schedule, and verified deployment.

That’s the future of technical writing: you provide the ideas, AI handles the production.

#AI Agents #n8n #Claude #GitHub API #RAG #Automation #Publishing #Qdrant