There’s a version of this story where I keep building MCP servers. Where git-steer grows into a 40-tool monolith, where I wire in more and more domains, where every new capability is another register() call in the same file. That version exists. I was on track for it.

I took a different turn.

Where this started

git-steer started as a lean GitHub orchestrator. Repos, branches, PRs, Actions — the control plane for my GitHub account via MCP. It worked well. So I added CVE scanning. Then enrichment. Then triage logic, queue management, batch operations. Before I'd noticed what was happening, the security domain alone was 720 lines spread across two files that had no business living in an MCP server.

I wrote about the extraction in Introducing git-fabric. The short version: I pulled the CVE logic out into its own app — security-fabric — and gave git-steer a gateway to call it. One domain, one binary, one clean interface. The gateway pattern worked.

Then I started thinking about what else could be its own app.

The MCP trap

Here’s the thing nobody tells you when you start building MCP servers: the protocol is excellent for what it does. Tool registration, schema validation, stdin/stdout transport, Claude integration — it’s a clean, well-designed spec. I’m not here to argue against MCP.

But the playbook that emerged around it — every new capability is a new MCP tool in an existing server — has a ceiling. You hit it when:

  • Your tool count creeps past 15-20 and discovery becomes noise
  • Domain logic starts bleeding between tools that shouldn’t know about each other
  • You need to run parts of the system independently but can’t because it’s all one process
  • Testing a single domain requires standing up the entire server

The MCP server becomes the monolith. You’ve just moved the monolith from your application layer to your AI tooling layer.

I’d built that monolith. I had to blow it up.

What fabric apps are instead

A fabric app isn’t an MCP server. It’s an opinionated TypeScript (or Python) service with:

  • Its own domain — one app, one problem
  • Its own state — Redis keys prefixed to the app, Qdrant collections owned by the app
  • Its own API — Fastify REST, consumed by the gateway and by UIs
  • A gateway registration — so Claude can still reach it via MCP when needed, but that’s not its primary interface
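
The "its own state" rule is easy to make mechanical. A minimal sketch of the idea, assuming a key-builder convention — the `social:` prefix and the specific key parts here are hypothetical, not the actual FABRIC/SOCIAL schema:

```typescript
// Namespaced Redis keys: every fabric app owns one slice of the
// keyspace, prefixed with its app name. Hypothetical schema.
const APP_PREFIX = "social";

function key(...parts: string[]): string {
  return [APP_PREFIX, ...parts].join(":");
}

// What the convention produces:
const queueKey = key("queue", "pending"); // "social:queue:pending"
const postKey = key("post", "abc123");    // "social:post:abc123"
```

One helper like this per app, and a `SCAN social:*` tells you exactly what the app owns — nothing bleeds between domains.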

The gateway (git-fabric) sits in front. It knows about all the fabric apps. Claude talks to the gateway. The gateway routes. Fabric apps don’t need to know about each other.
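
Routing at the gateway can stay boring: a static map from app name to base URL, nothing more. A sketch under assumptions — the `app.action` tool-name convention and the security port are invented, though :4200 for the social backend matches the real setup:

```typescript
// Hypothetical gateway routing table. Only the gateway knows this
// map; fabric apps never learn about each other.
const APPS: Record<string, string> = {
  security: "http://localhost:4100", // invented port
  social: "http://localhost:4200",
};

// Resolve a namespaced tool call like "social.schedule_post" to
// the owning app's base URL and a REST path.
function route(tool: string): { base: string; path: string } {
  const [app, action] = tool.split(".", 2);
  const base = APPS[app];
  if (!base || !action) throw new Error(`unknown fabric tool: ${tool}`);
  return { base, path: `/${action}` };
}
```

Adding a new fabric app is one line in the table; nothing else in the gateway changes.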

This is the part that felt obvious in retrospect but took me too long to see: MCP is a transport, not an architecture. Once you accept that, everything else follows. You build the real system — API, state, business logic — and then expose parts of it via MCP as a secondary concern.
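
"Transport, not architecture" cashes out as tool handlers that are thin shims over the app's own REST API. A sketch of that shape — the endpoint path and handler signature are assumptions, not git-fabric's actual code:

```typescript
// An MCP tool handler as a thin shim: build a plain HTTP request
// against the app's REST API and forward it. No domain logic here;
// the API is the single source of truth. Hypothetical shape.
interface ToolRequest {
  url: string;
  method: "POST";
  headers: Record<string, string>;
  body: string;
}

function buildToolRequest(
  tool: string,
  args: Record<string, unknown>,
  baseUrl: string,
): ToolRequest {
  return {
    url: `${baseUrl}/tools/${tool}`,
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify(args),
  };
}

// Forwarding is one fetch; delete the MCP surface tomorrow and the
// system still works, because the logic never lived here.
async function handleTool(tool: string, args: Record<string, unknown>, baseUrl: string) {
  const req = buildToolRequest(tool, args, baseUrl);
  const res = await fetch(req.url, req);
  return res.json();
}
```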

fabric-forge

So I created an org: fabric-forge on GitHub. Private repos for now, but the pattern is public.

Three repos live there already:

fabric-forge/blog — The ry-ops.dev blog, mirrored and cleaned. Spring-cleaned from 83 files of dead weight: two MCP servers that had accumulated inside the blog repo (yes, I built an MCP server into my Astro static site, don’t ask), a Buffer social publisher, Python migration scripts, stale markdown. What’s left is lean — Astro 5, Tailwind v4, the content, and the scripts that actually run.

fabric-forge/social — FABRIC/SOCIAL. A LinkedIn publishing automation system that took a different approach to the MCP problem entirely. No MCP server at the core. The core is: Redis sorted set as a schedule queue, Qdrant for narrative angle memory (so you don’t write the same angle twice for the same source), Claude for post generation and relevance scoring, Fastify for the API. GitHub Actions handles scheduling triggers. MCP is available as a secondary interface via stdio transport — 23 tools — but it’s not the backbone. The backbone is the API.
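
The sorted-set-as-queue scheme is worth spelling out: member is the post id, score is the publish time in epoch milliseconds, so "what's due" is a single range query. Here's a minimal sketch of the semantics as pure functions — the Redis calls themselves reduce to ZADD and ZRANGEBYSCORE, and the conflict policy shown is a hypothetical one, not FABRIC/SOCIAL's actual rule:

```typescript
// Schedule queue modeled after a Redis sorted set:
// member = post id, score = publishAt (epoch ms).
// due() is what ZRANGEBYSCORE <key> 0 <now> would return.
type Entry = { id: string; publishAt: number };

function due(queue: Entry[], now: number): string[] {
  return queue
    .filter((e) => e.publishAt <= now)
    .sort((a, b) => a.publishAt - b.publishAt)
    .map((e) => e.id);
}

// Conflict detection: reject a proposed slot closer than minGapMs
// to any existing entry. Hypothetical policy for illustration.
function conflicts(queue: Entry[], slot: number, minGapMs: number): boolean {
  return queue.some((e) => Math.abs(e.publishAt - slot) < minGapMs);
}
```

The nice property: the queue, the schedule, and the "is anything due" check are all one data structure, and scores sort for free.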

fabric-forge/social-ui — The React frontend for FABRIC/SOCIAL. Calendar view with drag-drop scheduling, variant studio (three LinkedIn post variants per source with different angles), archive browser for 233+ blog posts with recency gap scoring, autopilot dashboard. Talks to the backend at :4200.

What we built in one session

Here’s the part I’m still processing.

In roughly one extended session, with Claude Code doing most of the implementation:

  • Extracted the social publisher concept from the dead files in the blog repo
  • Designed the FABRIC/SOCIAL architecture from scratch
  • Built the full TypeScript backend: types, Redis client + key schema, Qdrant client with 1024-dim blog embedding compatibility, Claude AI layer with structured output, LinkedIn OAuth 2.0 flow, RSS feed manager with semantic dedup, scheduler with conflict detection, autopilot engine, Fastify API with 8 route groups, 23-tool MCP server
  • Built the full React frontend: 7 pages, Zustand store, SWR data fetching, FullCalendar drag-drop, the whole palette
  • Fixed a bug where the blog’s src/config/categories.ts got accidentally deleted during spring cleaning (it’s imported by 11 files, build fails hard)
  • Got end-to-end working: RSS feed → Claude relevance score (7.8/10) → 3 LinkedIn variants → OAuth flow → published a real post to LinkedIn

The published post was the story_hook variant. It opened with: “At some point during testing, Git-Steer opened a pull request against a repo I’ve contributed to exactly once, in 2023, that I had not thought about since. I did not tell it to do this.”

Claude put that one in the viral_candidate reach tier. We’ll see.

The playbook we replaced it with

The old playbook: build an MCP server, add tools, repeat.

The new playbook:

  1. Domain first — what problem does this solve, what state does it own
  2. API as the primary interface — REST/Fastify, testable, consumable by UIs and other services
  3. State in Redis + Qdrant — no filesystem state, no SQLite, no JSON files
  4. MCP as a secondary surface — register tools at the gateway, but don’t build around it
  5. GitHub Actions for scheduling — no n8n, no cron infrastructure, just .github/workflows/
  6. One UI per domain — React frontend that owns the UX, not a Claude conversation
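
Step 5 really is just a workflow file. A hypothetical sketch of the shape — the cron cadence, endpoint, and secret names are all invented:

```yaml
# .github/workflows/autopilot.yml — hypothetical scheduling trigger.
# GitHub Actions is the cron; the backend does the actual work.
name: autopilot-tick
on:
  schedule:
    - cron: "0 */4 * * *" # every four hours
  workflow_dispatch: {}   # manual runs for testing
jobs:
  tick:
    runs-on: ubuntu-latest
    steps:
      - name: Poke the autopilot engine
        run: |
          curl -sf -X POST "$BACKEND_URL/autopilot/tick" \
            -H "Authorization: Bearer $API_TOKEN"
        env:
          BACKEND_URL: ${{ secrets.BACKEND_URL }}
          API_TOKEN: ${{ secrets.API_TOKEN }}
```

No scheduler daemon to run, no n8n instance to keep alive — the trigger lives next to the code it triggers.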

This isn’t anti-MCP. Claude Code itself is how this all gets built. The MCP protocol is how Claude talks to the fabric apps. But the apps themselves are real software — they have APIs, they have state machines, they have UIs. They don’t exist to serve the AI. The AI is a collaborator in building them.

What’s next

FABRIC/SOCIAL needs LinkedIn OAuth for the Page channel, the archive sync against ryops_blog in Qdrant (2,964 chunks, all indexed, just needs a sync run), and a deployment decision. I’m looking at Cloudflare Pages for the frontend and either a Proxmox VM or a persistent process on the Mac mini for the backend.

git-fabric is getting a social-fabric spoke next — so you can trigger post generation from Claude by describing a topic and having the system pull relevant blog posts from the archive, score them, and generate variants without touching the UI.

And there’s something else forming — a metrics fabric app, an analytics layer that sits across all the fabric apps and surfaces patterns. But that one’s still early.

The thread here isn’t “I built some stuff.” The thread is: the pattern is working. Domain extraction, gateway routing, API-first, Claude as collaborator. Every fabric app that gets built makes the next one faster because the pattern is proven.

git-steer was the thing that proved the concept. fabric-forge is where it goes from here.