When I first built npx git-steer scan, it did exactly one thing: loop through repos, hit the Dependabot REST API, and print what it found. No persistence. No deduplication. No dashboard. Just terminal output that vanished the moment you closed the window.

That was fine for a quick check. But as the number of managed repos grew and the security workflow matured, “fine” stopped being enough. The MCP server had already moved to a proper Advisory Database pipeline with queue persistence and dashboard tracking — but the CLI was stuck in the past. Two scan paths, two data sources, zero shared state.

This post walks through how I unified them by routing the CLI through the same git-fabric gateway that powers the MCP server, and what that unlocks going forward.

Where It Started

The original scan command was straightforward. Authenticate via the GitHub App, iterate repos sequentially, call GET /repos/{owner}/{repo}/dependabot/alerts, filter by severity, and print a table.

npx git-steer scan --severity high
flowchart LR
    CLI["npx git-steer scan"]
    KC["macOS Keychain"]
    GH["GitHub REST API"]
    R1["Repo 1"]
    R2["Repo 2"]
    R3["Repo N"]
    OUT["Terminal Output"]

    CLI -->|"1. Read credentials"| KC
    CLI -->|"2. GET /dependabot/alerts"| GH
    GH --> R1
    GH --> R2
    GH --> R3
    R1 -->|sequential| CLI
    R2 -->|sequential| CLI
    R3 -->|sequential| CLI
    CLI -->|"3. Print and exit"| OUT

    style CLI fill:#1a1a2e,stroke:#e94560,color:#fff
    style KC fill:#16213e,stroke:#0f3460,color:#fff
    style GH fill:#16213e,stroke:#0f3460,color:#fff
    style OUT fill:#0f3460,stroke:#533483,color:#fff
    style R1 fill:#1a1a2e,stroke:#533483,color:#fff
    style R2 fill:#1a1a2e,stroke:#533483,color:#fff
    style R3 fill:#1a1a2e,stroke:#533483,color:#fff
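
Sketched in TypeScript, that original loop looked roughly like this; the alert type and the `listAlerts` callback are stand-ins for the real GitHub client, not the actual git-steer source:

```typescript
type Severity = 'low' | 'medium' | 'high' | 'critical';
const ORDER: Severity[] = ['low', 'medium', 'high', 'critical'];

interface Alert { repo: string; cve: string; severity: Severity }

// Keep only alerts at or above the requested severity.
export function filterBySeverity(alerts: Alert[], min: Severity): Alert[] {
  return alerts.filter(a => ORDER.indexOf(a.severity) >= ORDER.indexOf(min));
}

// `listAlerts` stands in for GET /repos/{owner}/{repo}/dependabot/alerts.
export async function scanSequentially(
  listAlerts: (repo: string) => Promise<Alert[]>,
  repos: string[],
  min: Severity,
): Promise<Alert[]> {
  const results: Alert[] = [];
  for (const repo of repos) {
    // Each iteration awaits the previous request: repo N+1 cannot start
    // until repo N has finished.
    results.push(...filterBySeverity(await listAlerts(repo), min));
  }
  return results;
}
```

The `await` inside the loop is what made execution strictly sequential.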

What worked

  • Zero configuration beyond git-steer init
  • Pretty chalk output with severity coloring
  • Quick spot-check from the terminal

What didn’t

  • Sequential execution — each repo waited for the last one to finish
  • Dependabot-only — no Advisory Database coverage, no NVD enrichment
  • Fire-and-forget — findings disappeared when the terminal closed
  • No deduplication — running it twice would show the same CVEs with no memory of what was already known
  • Disconnected from the MCP server — the MCP path had its own scan pipeline with queue persistence, but the CLI knew nothing about it
  • No dashboard refresh — you had to manually trigger dashboard_generate through the MCP server to update the security dashboard

The root problem was architectural. The gateway — the thing that routes tool calls through @git-fabric/cve to the Advisory Database — only lived inside MCPServer.start(). The GitSteer class that both the MCP server and CLI share had no access to it.

The Gateway Problem

git-steer’s architecture has two entry points that both need the same capabilities:

  1. MCP Server: Claude Desktop calls tools like fabric_cve_scan, which route through the gateway
  2. CLI: developers run npx git-steer scan from the terminal

Before this change, only the MCP server initialized the gateway. The CLI had to re-implement scanning from scratch using raw REST calls.

There was also a token bridge problem. The CLI authenticates via GitHub App credentials stored in macOS Keychain, but @git-fabric/cve expects a raw GITHUB_TOKEN string. The App’s Octokit instance uses installation-based auth internally — there’s no exposed token to hand off.

The Fix: Lift the Gateway

The solution was three changes:

1. Bridge App Auth to a Raw Token

Added getInstallationToken() to the GitHub client. It calls into @octokit/auth-app’s internal auth mechanism to extract the installation token as a plain string. The auth library caches tokens with a 1-hour TTL, so repeated calls are essentially free.

async getInstallationToken(): Promise<string> {
  const octokit = this.ensureAuth();
  // @octokit/auth-app caches installation tokens (~1-hour TTL), so repeated
  // calls don't hit the GitHub API each time.
  const auth = await (octokit as any).auth({ type: 'installation' });
  return auth.token;
}

2. Move Gateway Ownership to GitSteer

The GitSteer class now owns the gateway lifecycle. A new initFabricGateway() method resolves the token (preferring env vars for CI, falling back to the installation token for CLI), discovers managed repos from state, and initializes the gateway.

async initFabricGateway(): Promise<GatewayHandle> {
  // Prefer env tokens (CI) before falling back to the App installation token
  const token = process.env.GITHUB_TOKEN
    ?? process.env.GIT_STEER_TOKEN
    ?? await this.github.getInstallationToken();

  // Managed repos come from the synced state (accessor name illustrative)
  const managedRepos = this.state.getManagedRepos();

  this.gateway = await initGateway({
    githubToken: token,
    stateRepo: this.state.getStateRepo(),
    managedRepos,
  });
  return this.gateway;
}

The MCP server now receives the gateway through its config instead of creating its own. If none is provided (backwards compat for direct MCPServer usage), it self-initializes as before.
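
The injection pattern can be sketched like this, with stand-in types; the real config shape and gateway factory signature may differ:

```typescript
interface GatewayHandle { route(tool: string, args: unknown): Promise<unknown> }

interface ServerConfig {
  gateway?: GatewayHandle;                    // injected by GitSteer
  initGateway?: () => Promise<GatewayHandle>; // back-compat fallback
}

class MCPServer {
  private gateway?: GatewayHandle;
  constructor(private config: ServerConfig) {}

  async start(): Promise<GatewayHandle> {
    // Prefer the shared gateway; self-initialize only when none was provided,
    // so direct MCPServer usage keeps working as before.
    const gw = this.config.gateway ?? await this.config.initGateway!();
    this.gateway = gw;
    return gw;
  }
}
```

Plain constructor injection with a lazy fallback: the CLI and the server share one gateway instance, while standalone usage still bootstraps its own.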

3. Rewrite the CLI Scan Command

The scan command now follows the same path as the MCP tool:

npx git-steer scan                    # Full scan + queue + dashboard
npx git-steer scan --dry-run          # Scan without persisting
npx git-steer scan --no-dashboard     # Skip dashboard refresh
npx git-steer scan --repo ry-ops/blog # Single repo

Where It Is Now

flowchart LR
    CLI["npx git-steer scan"]
    KC["macOS Keychain"]
    GS["GitSteer Engine"]
    GW["git-fabric Gateway"]
    CVE["@git-fabric/cve"]
    ADB["GitHub Advisory DB"]
    Q["cve-queue.jsonl"]
    ST["State Repo"]
    DASH["Dashboard (GitHub Pages)"]
    OUT["Terminal Output"]

    CLI -->|"1. Credentials"| KC
    KC --> GS
    GS -->|"2. initFabricGateway()"| GW
    GW -->|"3. route cve_scan"| CVE
    CVE -->|"4. Advisory lookup"| ADB
    ADB -->|findings| CVE
    CVE -->|"5. Dedupe + queue"| Q
    Q --> ST
    GS -->|"6. refreshDashboard()"| DASH
    GS -->|"7. Pretty output"| OUT

    style CLI fill:#1a1a2e,stroke:#e94560,color:#fff
    style KC fill:#16213e,stroke:#0f3460,color:#fff
    style GS fill:#0d1117,stroke:#58a6ff,color:#fff
    style GW fill:#161b22,stroke:#f78166,color:#fff
    style CVE fill:#161b22,stroke:#f78166,color:#fff
    style ADB fill:#16213e,stroke:#0f3460,color:#fff
    style Q fill:#0f3460,stroke:#533483,color:#fff
    style ST fill:#0f3460,stroke:#533483,color:#fff
    style DASH fill:#238636,stroke:#2ea043,color:#fff
    style OUT fill:#0f3460,stroke:#533483,color:#fff

The same command now:

  • Routes through the git-fabric gateway to the GitHub Advisory Database
  • Deduplicates findings against the existing CVE queue
  • Persists new findings to cve-queue.jsonl in the state repo
  • Audits the scan in the state log with full telemetry
  • Refreshes the dashboard on GitHub Pages with updated metrics
  • Produces the same pretty terminal output developers expect

Full Execution Flow

Here’s the complete sequence of what happens when you run npx git-steer scan:

sequenceDiagram
    participant U as Developer
    participant CLI as git-steer CLI
    participant KC as macOS Keychain
    participant GS as GitSteer Engine
    participant GW as git-fabric Gateway
    participant CVE as @git-fabric/cve
    participant GH as GitHub API
    participant ST as State Repo
    participant PG as GitHub Pages

    U->>CLI: npx git-steer scan
    CLI->>KC: Read App credentials
    KC-->>CLI: appId, privateKey, installationId
    CLI->>GS: syncState()
    GS->>GH: Fetch state/config.json
    GH-->>GS: Managed repos, policies

    CLI->>GS: initFabricGateway()
    GS->>GS: getInstallationToken()
    GS->>GW: Initialize with token + repos
    GW->>CVE: createApp()
    CVE-->>GW: CVE app registered

    CLI->>GW: router.route('cve_scan', args)
    GW->>CVE: execute(cve_scan)
    CVE->>GH: Advisory DB queries
    GH-->>CVE: Vulnerability data
    CVE->>CVE: Deduplicate findings
    CVE->>ST: Append cve-queue.jsonl
    CVE-->>GW: DetectionResult
    GW-->>CLI: Scan results

    CLI->>CLI: Pretty-print findings
    CLI->>GS: addAuditEntry('cli_scan')
    CLI->>GS: forceSyncState()
    GS->>GH: Push state updates
    CLI->>GS: refreshDashboard()
    GS->>PG: Deploy index.html
    PG-->>CLI: Dashboard URL
    CLI->>U: Dashboard updated

The key insight: every scan — whether triggered from Claude Desktop via MCP or from a developer’s terminal via CLI — now flows through the same pipeline. Same data source, same deduplication logic, same queue, same dashboard.

Terminal Output

The CLI preserves the developer-friendly output format while adding queue and dashboard information:

$ npx git-steer scan

✔ Scan complete: 5 repos scanned via Advisory DB

  3 vulnerabilities across 2 repos:

  CRITICAL: 1
  HIGH:     2

  ry-ops/git-steer (2 alerts)
    CRITICAL  ajv → fix: 8.18.0
    HIGH      express → fix: 4.21.2

  ry-ops/blog (1 alert)
    HIGH      next → fix: 15.2.1

  3 findings queued to cve-queue.jsonl

✔ State saved
✔ Dashboard updated: https://ry-ops.github.io/git-steer-state/

Commands Reference

Command                                   Description
npx git-steer scan                        Full scan: Advisory DB + queue + dashboard
npx git-steer scan --dry-run              Scan and report only, no persistence
npx git-steer scan --no-dashboard         Skip the dashboard refresh step
npx git-steer scan --repo owner/name      Scan a single repository
npx git-steer scan --severity CRITICAL    Override minimum severity threshold

The severity default changed from all to HIGH to match the gateway’s convention. Most teams don’t need LOW/MEDIUM noise in their terminal — those are better surfaced in the dashboard.
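
A minimal sketch of the threshold comparison, assuming the standard four advisory severities (the helper name is illustrative):

```typescript
const LEVELS = ['LOW', 'MEDIUM', 'HIGH', 'CRITICAL'] as const;
type Level = (typeof LEVELS)[number];

// A finding surfaces in the terminal only if it meets the minimum severity.
// The default mirrors the gateway's convention: HIGH and above.
export function meetsThreshold(severity: Level, threshold: Level = 'HIGH'): boolean {
  return LEVELS.indexOf(severity) >= LEVELS.indexOf(threshold);
}
```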

How git-fabric Takes This Further

The scan is just Phase 1. The git-fabric gateway exposes a full CVE lifecycle through six tools that chain together:

flowchart TB
    subgraph SCAN["Phase 1: Detect"]
        S1["cve_scan"] --> S2["Advisory DB Lookup"]
        S2 --> S3["Queue to cve-queue.jsonl"]
    end

    subgraph ENRICH["Phase 2: Enrich"]
        E1["cve_enrich"] --> E2["NVD API Lookup"]
        E2 --> E3["CVSS, CWE, References"]
    end

    subgraph TRIAGE["Phase 3: Triage"]
        T1["cve_triage"] --> T2["Apply Severity Policy"]
        T2 --> T3{"CRITICAL?"}
        T3 -->|Yes| T4["Confirmed PR"]
        T3 -->|No| T5["Draft PR"]
    end

    subgraph OPS["Phase 4: Operate"]
        O1["cve_compact"] --> O2["Prune Resolved"]
        O3["cve_queue_stats"] --> O4["Dashboard Metrics"]
    end

    SCAN --> ENRICH
    ENRICH --> TRIAGE
    TRIAGE --> OPS

    style SCAN fill:#0d1117,stroke:#f78166,color:#fff
    style ENRICH fill:#0d1117,stroke:#58a6ff,color:#fff
    style TRIAGE fill:#0d1117,stroke:#238636,color:#fff
    style OPS fill:#0d1117,stroke:#a371f7,color:#fff
    style S1 fill:#161b22,stroke:#f78166,color:#fff
    style S2 fill:#161b22,stroke:#f78166,color:#fff
    style S3 fill:#161b22,stroke:#f78166,color:#fff
    style E1 fill:#161b22,stroke:#58a6ff,color:#fff
    style E2 fill:#161b22,stroke:#58a6ff,color:#fff
    style E3 fill:#161b22,stroke:#58a6ff,color:#fff
    style T1 fill:#161b22,stroke:#238636,color:#fff
    style T2 fill:#161b22,stroke:#238636,color:#fff
    style T3 fill:#161b22,stroke:#238636,color:#fff
    style T4 fill:#238636,stroke:#2ea043,color:#fff
    style T5 fill:#161b22,stroke:#238636,color:#fff
    style O1 fill:#161b22,stroke:#a371f7,color:#fff
    style O2 fill:#161b22,stroke:#a371f7,color:#fff
    style O3 fill:#161b22,stroke:#a371f7,color:#fff
    style O4 fill:#161b22,stroke:#a371f7,color:#fff

Phase 1: Detect (cve_scan)

This is what the CLI now invokes. Scans managed repos against the GitHub Advisory Database, deduplicates against the existing queue, and persists new findings to cve-queue.jsonl.
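
Conceptually, the dedupe step looks like this; the finding fields and the key choice are assumptions about the queue format, not the actual @git-fabric/cve code:

```typescript
interface Finding { repo: string; cve: string; package: string; severity: string }

// A finding is "new" only if its (repo, CVE, package) key isn't queued yet.
const key = (f: Finding) => `${f.repo}|${f.cve}|${f.package}`;

export function dedupeAgainstQueue(queueJsonl: string, findings: Finding[]): Finding[] {
  const seen = new Set(
    queueJsonl.split('\n').filter(Boolean).map(line => key(JSON.parse(line))),
  );
  // Only genuinely new findings get appended, so re-running a scan is idempotent.
  return findings.filter(f => !seen.has(key(f)));
}
```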

Phase 2: Enrich (cve_enrich)

Takes a CVE ID and fetches enriched data from NVD: CVSS vector, CWE classification, affected configurations, and external references. This turns a queue entry from “there’s a vulnerability in lodash” into a fully contextualized risk assessment.
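
As a sketch, enrichment boils down to extracting a few fields from the NVD CVE API 2.0 response; the shape below is abbreviated to the fields this step cares about:

```typescript
// Abbreviated NVD 2.0 response shape (only the fields used here).
interface NvdResponse {
  vulnerabilities: Array<{
    cve: {
      id: string;
      metrics?: {
        cvssMetricV31?: Array<{ cvssData: { baseScore: number; vectorString: string } }>;
      };
      weaknesses?: Array<{ description: Array<{ value: string }> }>;
    };
  }>;
}

export function extractEnrichment(res: NvdResponse) {
  const cve = res.vulnerabilities[0]?.cve;
  const cvss = cve?.metrics?.cvssMetricV31?.[0]?.cvssData;
  return {
    id: cve?.id,
    score: cvss?.baseScore,
    vector: cvss?.vectorString,
    cwes: cve?.weaknesses?.flatMap(w => w.description.map(d => d.value)) ?? [],
  };
}
```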

Phase 3: Triage (cve_triage)

Processes pending queue entries through a severity policy. CRITICAL findings get confirmed PRs opened immediately. HIGH findings get draft PRs. The policy is configurable — you can adjust thresholds or set repos to auto-merge critical fixes.
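
The default policy described above reduces to a small decision function; this is a sketch of the default behavior, not the configurable policy engine itself:

```typescript
type Action = 'confirmed_pr' | 'draft_pr' | 'skip';

// Default mapping: CRITICAL opens a confirmed (ready) PR, HIGH opens a draft.
// Anything below stays in the queue/dashboard (thresholds are configurable).
export function triage(severity: string): Action {
  switch (severity) {
    case 'CRITICAL': return 'confirmed_pr';
    case 'HIGH':     return 'draft_pr';
    default:         return 'skip';
  }
}
```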

Phase 4: Operate (cve_compact + cve_queue_stats)

Housekeeping tools that keep the queue healthy. cve_compact prunes resolved entries older than a configurable retention period (default: 30 days). cve_queue_stats feeds the dashboard with totals by status and severity, oldest pending entry, and top affected repos.
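
A minimal sketch of the compaction rule, assuming a queue entry shape with a status and a resolution timestamp (field names are illustrative):

```typescript
interface QueueEntry { cve: string; status: 'pending' | 'resolved'; resolvedAt?: string }

const DAY_MS = 24 * 60 * 60 * 1000;

// Drop resolved entries older than the retention window; pending entries
// are always kept.
export function compact(entries: QueueEntry[], now: Date, retentionDays = 30): QueueEntry[] {
  const cutoff = now.getTime() - retentionDays * DAY_MS;
  return entries.filter(e =>
    e.status !== 'resolved' || !e.resolvedAt || Date.parse(e.resolvedAt) >= cutoff,
  );
}
```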

The Gateway as a Router

All six tools are registered as a single @git-fabric/cve app in the gateway’s registry. The gateway’s router handles dispatch:

const result = await gateway.router.route('cve_scan', {
  severity_threshold: 'HIGH',
  dry_run: false,
});

The app pattern means new capabilities — say, a SAST scanner or a license compliance checker — plug in as additional apps without touching the router or the CLI. Register the app, and its tools become available everywhere the gateway is initialized.
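
The registry side of that pattern can be sketched in a few lines; the interfaces are illustrative, not the actual @git-fabric API:

```typescript
type Tool = (args: unknown) => Promise<unknown>;
interface App { name: string; tools: Record<string, Tool> }

class Router {
  private tools = new Map<string, Tool>();

  register(app: App): void {
    // Every tool the app exposes becomes routable, with no router changes.
    for (const [name, tool] of Object.entries(app.tools)) this.tools.set(name, tool);
  }

  async route(name: string, args: unknown): Promise<unknown> {
    const tool = this.tools.get(name);
    if (!tool) throw new Error(`Unknown tool: ${name}`);
    return tool(args);
  }
}
```

Registering a hypothetical SAST app would make its tools dispatchable through the same `route()` call with no changes to the CLI or server.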

What Changed, What Didn’t

Aspect             Before                          After
Data source        Dependabot REST API             GitHub Advisory Database
Execution          Sequential per-repo             Gateway-routed, parallelized
Persistence        None (terminal only)            cve-queue.jsonl in state repo
Deduplication      None                            Automatic via queue
Dashboard          Manual trigger via MCP          Automatic on every scan
Audit trail        None                            State log with telemetry
Severity default   all                             HIGH
Terminal output    Same style                      Same style + queue/dashboard info
Auth mechanism     App credentials via Keychain    Same (token bridged to gateway)

What didn’t change: the developer experience. npx git-steer scan still works exactly as before. Same command, same pretty output, same severity coloring. The difference is everything that happens behind the scenes.

Takeaway

The pattern here is worth calling out: don’t build separate pipelines for separate entry points. The CLI and MCP server are different interfaces to the same engine. When I let them diverge — one using Dependabot REST, the other using Advisory DB via the gateway — every improvement had to be implemented twice, and state never converged.

Lifting the gateway into the shared GitSteer class was a small architectural change (four files, ~330 lines), but it eliminated an entire category of drift. Now there’s one scan pipeline, one queue, one source of truth. Whether you’re an AI assistant calling fabric_cve_scan or a developer running npx git-steer scan, the data flows through the same path and lands in the same place.

That’s the real value of git-fabric: not just better scanning, but a composable gateway that makes it trivial to share capabilities across every surface that needs them.