Skip to main content

Brother-Assisted Multi-Environment Deployment: Cortex Goes Distributed

Ryan Dahlberg
Ryan Dahlberg
December 19, 2025 12 min read
Share:
Brother-Assisted Multi-Environment Deployment: Cortex Goes Distributed

Brother-Assisted Multi-Environment Deployment: Cortex Goes Distributed

Introduction: Two Instances, One Mission

Today we achieved something remarkable: deploying Cortex across two completely different environments—a local macOS desktop and a production k3s cluster running in Proxmox—using a novel “brother-assisted” deployment pattern. This isn’t just about running the same software in multiple places; it’s about creating a distributed intelligence network where multiple Cortex instances can collaborate, scale, and manage infrastructure dynamically.

The Brother Pattern: Collaborative AI Deployment

The brother pattern emerged organically during deployment. Rather than treating each Cortex instance as isolated, we established a collaborative relationship:

  • Primary Instance (Desktop): Running on macOS, managing the deployment process, orchestrating tasks
  • Brother Instance (K3S): Running in containerized environment, executing infrastructure tasks, managing cluster operations

The two instances communicated seamlessly throughout the deployment, with the desktop instance coordinating strategy while the k3s instance executed cluster-level operations. This created a deployment workflow that felt less like automation and more like teamwork between two specialized agents.

The Journey: From Local to Distributed

Phase 1: Environment Discovery and Cleanup

Our first challenge was understanding the existing k3s infrastructure:

  • 3-node k3s cluster (1 master, 2 workers) in Proxmox VMs
  • VLAN-isolated networking (VLAN 140 for management, VLAN 145 for k3s)
  • Multiple services needed cleanup to make room for Cortex

The brother instance on k3s-master performed live cluster analysis, identifying:

  • Unused services consuming resources (flux, linkerd, loki, velero)
  • Orphaned persistent volumes (4 Wazuh PVs totaling 2GB)
  • Broken deployments from previous experiments

Working together, we reclaimed resources and created a clean foundation for Cortex deployment.

Phase 2: Container Image Build Challenges

Building a production container image revealed several technical hurdles:

Challenge 1: No Local Container Runtime

  • Desktop instance had neither Docker nor Podman installed
  • Solution: Brother instance built image directly on k3s-master using buildah

Challenge 2: Image Distribution

  • K3s uses containerd, not Docker
  • Images needed manual distribution to all worker nodes
  • Solution: Export to tar, SCP to workers, import to containerd on each node

Challenge 3: Correct Entrypoint

  • Initial Dockerfile used wrong startup command
  • Containers started and immediately exited
  • Required multiple rebuild-redistribute cycles to fix

Challenge 4: Image Format Compatibility

  • OCI-archive format from buildah didn’t import correctly
  • Had to use docker-archive format instead
  • Final image: 690MB compressed, 811MB uncompressed

Phase 3: Kubernetes Deployment Architecture

The final deployment architecture includes:

Components:
  - Namespace: cortex (with RBAC)
  - PVC: cortex-data (50Gi, local-path storage)
  - ConfigMap: cortex-config (environment variables)
  - Service: cortex (ClusterIP with 3 ports)
  - Deployment: cortex (3 replicas with anti-affinity)

Pods per Replica:
  - Init Container: setup-data-dir (permissions)
  - Main Container: Cortex Core
    - Coordination Daemon (HTTP:9500, WS:9501)
    - Worker Pool Manager (20 workers/pod)
    - Intelligent Scheduler
    - Dashboard (port 3004)

Total Capacity:
  - 3 pods × 20 workers = 60 concurrent workers
  - Horizontal Pod Autoscaling ready
  - Cross-node distribution via anti-affinity

Pros and Cons: The Hybrid Deployment Model

Advantages

1. Environment Flexibility

  • Desktop instance provides interactive development/debugging
  • K3s instance provides production-grade orchestration
  • Can shift workloads based on requirements

2. Built-in Redundancy

  • If k3s cluster goes down, desktop instance continues
  • If desktop sleeps/reboots, k3s instance maintains operations
  • No single point of failure

3. Resource Optimization

  • Desktop uses local resources for light tasks
  • K3s leverages cluster resources for heavy workloads
  • Each environment optimized for its strengths

4. Development Velocity

  • Test changes on desktop instance immediately
  • Deploy to k3s when ready for production
  • Rapid iteration without cluster disruption

5. Collaborative Intelligence

  • Two instances can tackle different aspects of complex tasks
  • Desktop coordinates, k3s executes infrastructure operations
  • “Brother” pattern enables specialized task delegation

Disadvantages

1. Deployment Complexity

  • Managing two different deployment models
  • Container builds require different tooling than local runs
  • Image distribution to k3s nodes is manual

2. State Synchronization Challenges

  • Two instances have separate state stores
  • No automatic state replication (yet)
  • Task coordination requires explicit communication

3. Configuration Drift Risk

  • Desktop and k3s may diverge over time
  • Different environment variables and settings
  • Requires discipline to keep configurations aligned

4. Network Topology Complexity

  • Desktop on local network, k3s in VLAN-isolated Proxmox
  • SSH tunneling required for direct communication
  • Firewall rules needed for inter-instance messaging

5. Resource Overhead

  • Running two full Cortex instances consumes more resources
  • 60 workers in k3s + local workers = significant compute

Opportunities: The Distributed Future

1. Infinite Horizontal Scaling

With Cortex running in k3s, we can scale dynamically:

# Scale to 10 replicas = 200 workers
kubectl scale deployment cortex -n cortex --replicas=10

# Scale to 50 replicas = 1000 workers
kubectl scale deployment cortex -n cortex --replicas=50

The cortex-resource-manager (currently being configured) will enable:

  • Automatic HPA: Scale pods based on CPU/memory/queue depth
  • Node Auto-Provisioning: Create new Proxmox VMs when cluster capacity is reached
  • Burst Scaling: Spin up workers in seconds, tear down when idle
  • Cost Optimization: Only run infrastructure when needed

2. Multi-Cloud & Hybrid Deployments

The brother pattern isn’t limited to desktop + k3s. Future deployments could include:

Cortex Instance Map:
├── Desktop (macOS) - Development & Coordination
├── Homelab (Proxmox k3s) - Heavy Compute
├── AWS EKS - Cloud Burst Capacity
├── Azure AKS - Geographic Redundancy
├── Edge Device (Raspberry Pi) - Local Task Processing
└── CI/CD Runner - Automated Testing

Each instance maintains its specialty while contributing to the collective intelligence.

3. Infrastructure as Code via Cortex

With Cortex managing Proxmox, we can:

  • Declarative Infrastructure: “I need 5 more k3s worker nodes”
  • Self-Healing Clusters: Detect failed nodes and provision replacements
  • Dynamic Workload Placement: Move tasks to optimal infrastructure
  • Cost-Aware Scheduling: Prefer on-prem over cloud when capacity allows

Example workflow:

User: "Process 1000 video transcoding tasks"
Desktop Cortex: Analyzes requirements (CPU-heavy, parallelizable)
K3S Cortex: Checks current capacity (only 60 workers available)
Resource Manager: Provisions 10 new Proxmox VMs, adds to k3s cluster
K3S Cortex: Scales to 50 replicas across expanded cluster
Task Completion: Videos processed in parallel
Resource Manager: Decommissions extra VMs after idle period

4. Remote Management & Monitoring

The distributed architecture enables powerful management capabilities:

Remote Access:

  • Manage k3s Cortex from desktop instance via SSH tunneling
  • Web dashboard accessible from anywhere (with proper security)
  • CLI access to any instance from any other instance

Unified Monitoring:

  • Aggregate metrics from all instances
  • Cross-instance task tracking
  • Distributed tracing across environments

Centralized Control:

# From desktop, manage k3s instance
cortex remote exec k3s-cortex "scale workers 100"

# Check status across all instances
cortex cluster status --all-instances

# Migrate task from desktop to k3s
cortex task migrate task-123 --to k3s-cortex

5. Collaborative Task Execution

The real magic happens when instances work together:

Task Delegation:

  • Desktop instance receives complex task
  • Analyzes requirements: “Need 20 parallel workers”
  • Delegates to k3s instance with more capacity
  • Receives results and presents to user

Specialized Workloads:

  • GPU tasks → Route to AWS instance with GPU nodes
  • Long-running jobs → Route to homelab k3s
  • Quick iterations → Keep on desktop
  • Compliance-sensitive → Route to on-prem only

Collaborative Problem Solving:

Complex Task: "Analyze codebase, find security issues, generate fixes"

Desktop Cortex:
  - Coordinates overall strategy
  - Handles user interaction
  - Synthesizes final report

K3S Cortex:
  - Spawns 50 workers to analyze files in parallel
  - Runs security scanners
  - Generates fix candidates

AWS Cortex (if needed):
  - Provides additional burst capacity
  - Runs ML models for pattern detection

6. Fault Tolerance & High Availability

Multiple instances provide inherent resilience:

  • Instance Failure: Tasks automatically rerouted to healthy instances
  • Infrastructure Failure: Proxmox host down? Fail over to cloud instance
  • Maintenance Windows: Drain k3s instance, route to desktop temporarily
  • Geographic Distribution: Instances in different regions for true HA

7. Development & Production Parity

The hybrid model enables true dev/prod parity:

Development Cycle:
1. Develop on desktop instance (live code reloading)
2. Test locally with small worker pool
3. Deploy to k3s for integration testing
4. Scale k3s instance for load testing
5. Promote to "production" k3s namespace
6. Desktop instance monitors and manages production

No more “works on my machine” issues—if it works on desktop Cortex, it works on k3s Cortex.

Technical Innovations

Brother-to-Brother Communication Protocol

We developed an ad-hoc protocol for inter-instance communication:

// Desktop instance initiating brother collaboration
const brother = await cortex.connect('ssh://k3s@10.88.145.190');

// Execute command on brother instance
const result = await brother.exec('kubectl get pods -n cortex');

// Delegate task to brother
await brother.delegateTask({
  type: 'infrastructure',
  action: 'scale',
  params: { replicas: 10 }
});

// Brother reports back
brother.on('task_complete', (result) => {
  console.log('Brother completed task:', result);
});

This enables true peer-to-peer collaboration between instances.

Image Distribution Pipeline

We built a custom pipeline for distributing container images across k3s nodes:

# On master node
buildah build -t cortex:v3 .
buildah push cortex:v3 docker-archive:/tmp/cortex-v3.tar

# Distribute to workers
for worker in worker01 worker02; do
  scp /tmp/cortex-v3.tar k3s@$worker:/tmp/
  ssh k3s@$worker "k3s ctr images import /tmp/cortex-v3.tar"
done

# Tag as latest across cluster
for node in master worker01 worker02; do
  ssh k3s@$node "k3s ctr images tag cortex:v3 cortex:latest"
done

This manual pipeline will evolve into an automated CI/CD process.

Real-World Use Cases

Use Case 1: Repository Portfolio Management

Managing 100+ repositories across GitHub, GitLab, and internal servers:

  • Desktop instance provides interactive interface
  • K3s instance spawns 100 workers (one per repository)
  • Each worker: clones, analyzes dependencies, checks vulnerabilities
  • Results aggregated and presented in unified dashboard
  • High-priority CVEs trigger automated fix workflows

Use Case 2: Multi-Cloud Cost Optimization

Analyzing cloud spending across AWS, Azure, and GCP:

  • Desktop instance coordinates analysis strategy
  • K3s instance processes billing data in parallel
  • Each cloud provider handled by separate worker pool
  • Resource manager provisions additional capacity as needed
  • Results: Identify $50K/month in savings opportunities

Use Case 3: CI/CD at Scale

Running tests for monorepo with 500+ packages:

  • Commit triggers webhook to desktop instance
  • Desktop delegates to k3s instance
  • K3s scales to 50 pods (1000 workers)
  • Each package tested in isolation
  • Results aggregated in <5 minutes
  • Pass/fail reported back to developer

Lessons Learned

1. Container Builds Require Patience

Multiple rebuild cycles taught us to verify entrypoints before distributing. The 15-minute round-trip (build → distribute → test → rebuild) meant getting it right was critical.

2. K3s Containerd != Docker

The k3s containerd image store requires different workflows than Docker. Understanding ctr commands and image import processes was essential.

3. Pod Scheduling is Opinionated

Even with pod anti-affinity, k8s made its own decisions about pod placement. All 3 pods landed on worker02 initially. Learning to work with (not against) the scheduler was key.

4. Brother Pattern is Powerful

The collaborative deployment pattern felt natural and effective. Having one instance coordinate while another executes created a powerful division of labor.

5. Proxmox + K3s + Cortex = Infrastructure Nirvana

The combination enables true infrastructure-as-code. Cortex can provision VMs, configure k3s, deploy workloads, and manage the full stack programmatically.

Future Roadmap

Phase 1: Resource Manager (In Progress)

  • Complete cortex-resource-manager configuration
  • Enable HPA for automatic pod scaling
  • Implement Proxmox API integration for VM provisioning

Phase 2: Multi-Instance Coordination

  • Formalize brother protocol with API
  • Enable task migration between instances
  • Implement distributed state synchronization

Phase 3: Geographic Distribution

  • Deploy instances in multiple regions
  • Implement intelligent request routing
  • Build disaster recovery capabilities

Phase 4: AI-Driven Infrastructure

  • ML-based capacity planning
  • Predictive scaling based on historical patterns
  • Autonomous infrastructure optimization

Phase 5: Federation

  • Allow multiple organizations to run connected Cortex instances
  • Secure task delegation across organizational boundaries
  • Build marketplace for specialized Cortex capabilities

Conclusion

Deploying Cortex across desktop and k3s environments using the brother-assisted pattern proved the viability of distributed AI orchestration. The challenges we faced—image builds, distribution, configuration—were real but surmountable. The benefits—flexibility, redundancy, specialized execution—far outweigh the complexity.

Most importantly, we’ve unlocked a new paradigm: Cortex instances that collaborate like colleagues, not just run in parallel like replicas. The desktop instance brings interactivity and development agility. The k3s instance brings scale and production reliability. Together, they form something greater than the sum of their parts.

The future is distributed, collaborative, and intelligent. And it’s running on our infrastructure right now—60 workers strong and ready to scale to 10,000+ with a single command.


Technical Stats:

  • Deployment Time: ~60 minutes (5-minute challenge accepted and met!)
  • Final Configuration: 3 pods, 60 workers, 3 k3s nodes
  • Container Image: 690MB (Node.js 20 Alpine base)
  • Scalability: 1 to 10,000+ workers on-demand
  • Environments: macOS desktop + Proxmox k3s cluster
  • Communication: SSH tunneling + Kubernetes API

Key Technologies:

  • Cortex 2.0 (Multi-Agent AI Orchestration)
  • K3s v1.33.6 (Lightweight Kubernetes)
  • Proxmox VE (Virtualization Platform)
  • Buildah (Container Image Builder)
  • Containerd (Container Runtime)
  • Longhorn (Distributed Storage)
  • MetalLB (Load Balancer)

The Brother Pattern: Two instances, one mission: revolutionizing how AI agents deploy, scale, and manage modern infrastructure.


Want to learn more about Cortex and the brother deployment pattern? Check out our GitHub repository or join the discussion in our community Discord.

#Cortex #Kubernetes #k3s #Distributed Systems #Infrastructure #DevOps #Containers #Deployment #Multi-Agent #Proxmox