What Is fabric-forge?

fabric-forge is a personal engineering organization — a one-person operation running at the intersection of infrastructure, content, and automation. The repositories under its roof are not products in the traditional sense. They are systems. Systems for thinking out loud. Systems for staying sharp. Systems for turning years of homelab experience, platform engineering work, and technical writing into something that reaches people.

At the center of it is ry-ops.dev — a technical blog covering Kubernetes, GitOps, platform engineering, security, and the messy human reality of running modern infrastructure. Over 250 published posts. Years of accumulated thinking. A backlog that most humans would call a problem and most automation systems would call a goldmine.

fabric-forge exists to make that goldmine work for itself.

What Is fabric/social?

fabric/social is the content operations platform that runs ry-ops.dev’s presence on LinkedIn — and increasingly, everywhere else.

It is not a scheduler. It is not a template engine. It is a system that reads your own writing, understands what you have already said, figures out what angle has not been taken yet, writes something new and relevant, scores it for quality, and — if it clears the bar — publishes it without asking permission.

The human loop is intentionally thin. You should only need to touch it when something genuinely requires judgment.

Every weekday morning at 6:00 AM UTC, GitHub Actions wakes up and runs the autopilot. It pulls candidates from the blog archive and live RSS feeds, generates post variants using Claude Sonnet, runs a second-pass relevance score (to catch model self-rating bias), and routes the result:

  • Score 9.0 and above — auto-published directly to LinkedIn. No human in the loop.
  • Score 4.0 to 8.9 — saved as a draft and surfaced in the Alert Center for human review.
  • Score below 4.0 — blocked. An alert is emitted. Nothing goes out.
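The routing above can be sketched as a small pure function. The thresholds are from the list; the names are illustrative, not the real codebase API:

```typescript
// Illustrative sketch of the score routing described above.
// Thresholds match the post; names are assumptions, not the real API.
type Route = "auto-publish" | "draft-for-review" | "blocked";

function routeByScore(score: number): Route {
  if (score >= 9.0) return "auto-publish"; // trusted: straight to LinkedIn
  if (score >= 4.0) return "draft-for-review"; // surfaced in the Alert Center
  return "blocked"; // alert emitted, nothing goes out
}
```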

Every Sunday the same system generates long-form articles from deep archive content — posts that deserve a second life, told from a new angle.

Every 15 minutes, a dispatch job checks whether any scheduled post is due and publishes it.
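The due-check is easy to picture. A minimal sketch, assuming the schedule queue is keyed by publish timestamp — modeled here in memory rather than Redis:

```typescript
// In production this would be a Redis sorted set query (score = publish
// time, max = now); here the queue is modeled in memory for illustration.
interface ScheduledPost {
  id: string;
  publishAt: number; // epoch milliseconds, used as the sorted-set score
}

function duePosts(queue: ScheduledPost[], now: number): ScheduledPost[] {
  return queue
    .filter(p => p.publishAt <= now)
    .sort((a, b) => a.publishAt - b.publishAt); // oldest due post first
}
```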

Every morning at 8:00 AM UTC, a metrics refresh job pulls impression, reach, reaction, comment, and reshare data from the LinkedIn analytics API and stores it back against each published post.

The Stack

The runtime is TypeScript ESM on Node 20. The API layer is Fastify — fast, typed, zero ceremony. Redis handles the queue: sorted sets for the schedule, sets for dedup, keys for token storage. Qdrant Cloud tracks every narrative angle ever used per source URL, preventing repetition across hundreds of posts. Anthropic Claude Sonnet handles generation and scoring. OpenAI text-embedding-3-small provides semantic dedup at a 0.92 cosine-similarity threshold. LinkedIn OAuth 2.0 and the Posts API handle distribution. GitHub Actions handles scheduling — no n8n, no cron daemon, no k3s required.
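The dedup gate is worth spelling out. A sketch of the cosine check, assuming embeddings come back as plain number arrays — the 0.92 threshold is from the stack above, everything else is illustrative:

```typescript
// Cosine similarity between two embedding vectors of equal length.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// A candidate within 0.92 of any previously published post is treated
// as a semantic duplicate and never makes it to the queue.
function isDuplicate(
  candidate: number[],
  priors: number[][],
  threshold = 0.92
): boolean {
  return priors.some(p => cosine(candidate, p) >= threshold);
}
```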

Zero-footprint by design. All persistent state lives in managed cloud services. The API process is stateless and replaceable. Stop it, move it, restart it — nothing is lost.

The Human Interface: The Workshop

The UI — called “the Workshop” internally — is a React application that gives a human operator full visibility and control without requiring them to touch the terminal.

It covers everything:

  • a dashboard for system health and alert counts
  • a calendar for scheduled posts
  • a queue manager for pending and published content
  • the full blog archive with repost candidates
  • an articles page for Sunday’s long-form pieces
  • RSS feed management
  • an alert center for token expiry and low-relevance flags
  • autopilot run history with manual trigger capability
  • LinkedIn OAuth status
  • analytics from both LinkedIn and Cloudflare
  • a multi-channel publish interface for newsletter, podcast, YouTube, and whitepapers
  • system setup with readiness checks

Plus Aiana — a conversational AI assistant that sits on every page, always present, with full system context.

Every page is backed by a live API. Every mutation is reflected in real time. The human’s job is to review what the machine flags, not to drive the machine.

From Obstacles to Teammates: The Real Story

Here is what nobody tells you about building automation: the automation itself shows you where the process is broken. Not through failure. Through honesty.

Stage 1 — The Obstacle

The first version of the autopilot was simple: fetch a blog post, generate a LinkedIn post, publish it. Fast to build. Fast to break.

The first real obstacle was not technical. It was this: the system published something that linked to the wrong URL. Not a bug in the code — a stale record in the archive. The domain had changed (ryops.com to ry-ops.dev) and the automation did not know that. It just followed the data it had.

A human would have caught it. The automation did not, because nobody told it to look. The lesson: automation follows data. Bad data means bad output. Fix the data source. Validate at the boundary.

Stage 2 — The Hurdle

The next version added relevance scoring. Let Claude score its own output before publishing. Smart in theory. In practice: the model rated its own posts at 9.2 and above, reliably. Of course it did. It wrote them.

This is not a model failure. This is a system design failure. The system was asking the generator to also be the judge. That never works — not in software, not in organizations.

The fix was second-pass scoring: a separate evaluation call, blind to the generation context. The generator proposes; an independent scorer decides.
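In code, the separation is about what each call gets to see. A hedged sketch — the `LLM` interface and both prompts are assumptions for illustration, not the real implementation:

```typescript
// Generator/judge split: two independent model calls. The judge sees
// only the finished draft, never the generation prompt or its context.
interface LLM {
  complete(prompt: string): Promise<string>;
}

async function generateAndScore(llm: LLM, sourceSummary: string) {
  const draft = await llm.complete(
    `Write a LinkedIn post about: ${sourceSummary}`
  );
  // Second pass: blind evaluation, no generation context attached.
  const raw = await llm.complete(
    `Score this LinkedIn post for relevance and quality, 0 to 10. ` +
      `Reply with only the number.\n\n${draft}`
  );
  return { draft, score: parseFloat(raw) };
}
```

The point is structural, not prompt engineering: the scorer cannot be flattered by its own reasoning because it never sees that reasoning.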

The second hurdle was publishing without review. The first time the system auto-published something, it used the wrong URL — already caught and fixed. The second time, it published without the operator knowing it was about to. The operator said “launch a test” meaning test the generation flow. The system heard publish.

The fix was an explicit approval workflow. Generate, preview, operator picks variant by ID, then and only then publish.
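The shape of that workflow, sketched in a few lines — state names and methods are illustrative, not the real API:

```typescript
// Explicit approval: nothing is publishable until an operator approves
// a specific variant by ID.
interface Variant {
  id: string;
  text: string;
}

class ApprovalQueue {
  private variants = new Map<string, Variant>();
  private approvedId: string | null = null;

  propose(v: Variant): void {
    this.variants.set(v.id, v);
  }

  approve(id: string): void {
    if (!this.variants.has(id)) throw new Error(`unknown variant: ${id}`);
    this.approvedId = id;
  }

  // Only the explicitly approved variant can ever be published.
  publishable(): Variant | null {
    return this.approvedId ? this.variants.get(this.approvedId)! : null;
  }
}
```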

Stage 3 — The Conversation

After fixing the generator/judge problem and the approval workflow, something interesting happened: the system started surfacing patterns.

It flagged posts that scored 6.5 and asked for review. It flagged token expiry two weeks in advance. It told you when the RSS feeds had not been refreshed. It told you when the archive had not been synced.

These were not errors. They were conversations. The system, having been given the right feedback mechanisms, started participating in the process rather than just executing it. This is the moment the automation stops being a tool and starts being a colleague.

Stage 4 — The Agreement

With the right feedback loops in place, it became possible to make explicit agreements about how the system should behave:

  • If score is 9.0 or above: publish automatically. I trust you.
  • If score is 4.0 to 8.9: show me first. I will decide.
  • If score is below 4.0: block it. Do not waste my time.
  • Once a week: go deep. Pull from the archive. Write something long.
  • Every morning: check the queue, check the feeds, check the tokens.

These are not configuration values. They are agreements. And agreements only happen between parties who trust each other enough to define them.

Stage 5 — No More Barriers

The final stage is where you stop thinking about the automation as something you manage and start thinking of it as something you work with.

The system generates. You review what it surfaces. When something is great, it ships without friction. When something needs judgment, it asks. When something fails, it tells you exactly why and gives you a Retry button.

The barriers — between idea and publication, between blog post and LinkedIn post, between data and insight — collapse. Not because everything is automated, but because the automation knows its own limits and communicates them clearly.

Before fabric/social, publishing to LinkedIn meant days of delay, manual copy-paste, hashtag research, and engagement metrics that got noted or forgotten. After fabric/social, a blog post triggers generation and scoring the next weekday morning. High-confidence content ships automatically. Everything else takes thirty seconds of human review. Metrics get pulled and stored the next morning. Performance data, angle tracking, and reach trends are visible immediately in the Workshop.

The human’s time is spent on judgment, not logistics.

What fabric/social Will Manage Tomorrow

The Workshop is not finished. It is working. There is a difference.

The roadmap is short and additive — everything that follows builds on what already exists:

  • full UI visibility into the RSS pipeline
  • review for Sunday’s long-form content before it ships
  • calendar post preview
  • published post detail to close the feedback loop between content and performance
  • topic and angle performance aggregation to learn which ideas resonate
  • blog draft integration to connect creation to distribution in a single workflow
  • whitepaper commissioning from thesis detection to full draft in one action

None of these is a new system. Each is a window that did not exist before — a new surface through which a human and an automation can look at the same thing at the same time.

The Bigger Idea

fabric-forge started as a homelab. It became a content operation. It is becoming a model for how a single engineer, working with the right AI tools and the right architectural discipline, can operate at the output level of a small team without the coordination overhead of one.

This is not about replacing humans. It is about removing the parts of the work that do not require a human — and doing them so reliably that the human can focus entirely on the parts that do.

The automation builds the road. You decide where it goes.