
Deploying a Complete SIEM Stack to K3s Using AI Agents: A Cortex Story

Ryan Dahlberg
December 14, 2025 · 14 min read


TL;DR

I deployed a complete security monitoring and automation stack (Wazuh SIEM + n8n + MCP servers) to a 3-node K3s cluster using AI agents from the Cortex project. The entire deployment was orchestrated via the Proxmox API, with zero manual SSH access. The AI agents handled everything from building Docker images to creating GitHub repositories to deploying Kubernetes manifests.

Stack deployed:

  • Wazuh 4.7.1 (Indexer, Manager, Dashboard)
  • n8n workflow automation with PostgreSQL backend
  • Two MCP servers for AI integration (wazuh-mcp-server, n8n-mcp-server)
  • KEDA autoscaling for MCP servers

Infrastructure:

  • K3s cluster: 3 VMs on Proxmox (1 master, 2 workers)
  • VLAN 145 (10.88.145.180-182)
  • Access method: Proxmox API + QEMU Guest Agent only

Deployment time: ~45 minutes (mostly automated)


The Challenge

I wanted to deploy a production-grade security monitoring stack to my homelab K3s cluster, but with a twist: I wanted to see if AI agents could handle the entire deployment process. The constraints were interesting:

  1. No direct SSH access - All operations must go through the Proxmox API
  2. Full parallel deployment - Use multiple AI agents working concurrently
  3. Complete automation - From repo creation to pod deployment
  4. Integration ready - Wazuh alerts should flow to n8n webhooks
  5. AI-accessible - Both systems need MCP servers for AI tool integration

This meant the AI agents would need to:

  • Create GitHub repositories
  • Build multi-architecture Docker images
  • Load images into K3s nodes without a registry
  • Generate Kubernetes manifests
  • Deploy via the Proxmox API
  • Configure inter-service communication
  • Verify the deployment

The Cortex Approach

Cortex is my AI workforce orchestration system. It uses specialized “master” agents that coordinate teams of “worker” agents to complete complex infrastructure tasks. Think of it as a construction company, but for infrastructure code.

For this deployment, I used the development-master agent, which specializes in:

  • Infrastructure deployments
  • Container orchestration
  • Multi-step workflows
  • Integration configuration

The agent runs in a Docker container (cortex-docker) and has access to:

  • The K3s cluster via Proxmox API
  • GitHub for repository management
  • Docker for image building
  • All necessary credentials via environment variables

The Architecture

K3s Cluster Layout

┌─────────────────────────────────────────────────────────┐
│              Proxmox Node (pve01)                        │
├─────────────────────────────────────────────────────────┤
│                                                          │
│  VM 310 (K3s Master)         10.88.145.180              │
│  ├─ K3s Control Plane                                   │
│  └─ Coordination Point                                  │
│                                                          │
│  VM 311 (K3s Worker 1)       10.88.145.181              │
│  ├─ Wazuh Indexer (OpenSearch)                          │
│  ├─ Wazuh Manager                                       │
│  └─ Wazuh Dashboard                                     │
│                                                          │
│  VM 312 (K3s Worker 2)       10.88.145.182              │
│  ├─ PostgreSQL (n8n backend)                            │
│  ├─ n8n (workflow automation)                           │
│  ├─ wazuh-mcp-server (KEDA scaled)                      │
│  └─ n8n-mcp-server (KEDA scaled)                        │
│                                                          │
└─────────────────────────────────────────────────────────┘

Why This Layout?

Wazuh on Worker 1:

  • Wazuh Indexer is memory-intensive (2GB JVM heap, 4GB memory limit for OpenSearch)
  • Manager handles agent connections via NodePort (31514, 31515)
  • Isolating to one node prevents resource contention

n8n + MCP on Worker 2:

  • n8n receives webhooks from Wazuh
  • MCP servers provide AI integration for both platforms
  • KEDA can scale MCP servers 0-5 based on demand

The Deployment Process

Phase 1: Repository Creation

The AI agent created two GitHub repositories:

  1. wazuh-mcp-server

    • Node.js MCP server
    • 10 tools for Wazuh security operations
    • HTTP/SSE transport for real-time communication
  2. n8n-mcp-server

    • Python-based MCP server
    • Workflow creation and management tools
    • n8n API integration

The Model Context Protocol (MCP) is an emerging standard for AI tool integration. By creating MCP servers for Wazuh and n8n, any AI agent can now interact with these platforms using standardized tools.
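
Under the hood, an MCP server speaking HTTP exchanges JSON-RPC messages, so a rough smoke test is just a POST. This is a sketch only: the in-cluster service name comes from the manifests later in this post, while the port (3000) and the /mcp path are assumptions rather than the repos' actual defaults, and a spec-compliant server may require an initialize handshake before answering tools/list.

# List the tools the wazuh-mcp-server exposes (port and path are assumed)
kubectl run mcp-probe --rm -i --restart=Never --image=curlimages/curl -- \
  curl -s -X POST http://wazuh-mcp-server.mcp.svc.cluster.local:3000/mcp \
    -H "Content-Type: application/json" \
    -d '{"jsonrpc":"2.0","id":1,"method":"tools/list"}'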

Phase 2: Docker Image Building

The agent built both images on the K3s master node via the Proxmox API:

# Via Proxmox API to VM 310
git clone https://github.com/ry-ops/wazuh-mcp-server.git
cd wazuh-mcp-server
docker build -t wazuh-mcp-server:latest .

git clone https://github.com/ry-ops/n8n-mcp-server.git
cd n8n-mcp-server
docker build -t n8n-mcp-server:latest .

Why build on the cluster? Since my K3s VMs are network-isolated (security best practice), they can’t pull from external registries. Building locally and loading into containerd avoids the registry entirely:

docker save wazuh-mcp-server:latest | ctr -n k8s.io image import -
docker save n8n-mcp-server:latest | ctr -n k8s.io image import -
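
One detail worth verifying before the Deployments reference these tags with imagePullPolicy: Never: containerd only knows about images imported on that specific node. Since the MCP pods are pinned to k3s-worker2 later on, the import has to happen there as well (or the saved tarball copied over). A quick check on the target node:

# Confirm containerd's k8s.io namespace can see the imported images
ctr -n k8s.io images ls | grep -E 'wazuh-mcp-server|n8n-mcp-server'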

Phase 3: Namespace and Secret Creation

The agent created four namespaces with proper resource quotas:

# wazuh namespace - ~4 CPU / 12GB RAM
# n8n namespace - ~2 CPU / 4GB RAM
# mcp namespace - ~1 CPU / 2GB RAM
# shared namespace - ~1 CPU / 2GB RAM

Secrets were created for the following (a minimal kubectl sketch follows the list):

  • Wazuh API credentials
  • Wazuh Indexer admin user
  • n8n PostgreSQL credentials
  • n8n encryption key
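
Concretely, the namespace, quota, and secret setup boils down to kubectl commands like these. This is a minimal sketch: the object names, quota values, and literal keys are illustrative, not the agent's exact commands.

kubectl create namespace wazuh
kubectl -n wazuh create quota wazuh-quota \
  --hard=requests.cpu=4,requests.memory=12Gi,limits.memory=12Gi
kubectl -n wazuh create secret generic wazuh-api-credentials \
  --from-literal=username=wazuh-wui \
  --from-literal=password="$(openssl rand -base64 24)"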

Phase 4: Wazuh Stack Deployment

Wazuh Indexer (StatefulSet):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: wazuh-indexer
  namespace: wazuh
spec:
  serviceName: wazuh-indexer
  replicas: 1
  template:
    spec:
      nodeSelector:
        kubernetes.io/hostname: k3s-worker1
      initContainers:
      - name: sysctl
        image: busybox:1.36
        command: ["sh", "-c", "sysctl -w vm.max_map_count=262144"]
        securityContext:
          privileged: true
      containers:
      - name: indexer
        image: wazuh/wazuh-indexer:4.7.1
        env:
        - name: OPENSEARCH_JAVA_OPTS
          value: "-Xms2g -Xmx2g"
        - name: discovery.type
          value: "single-node"
        resources:
          limits:
            memory: "4Gi"
            cpu: "1000m"
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: local-path
      resources:
        requests:
          storage: 50Gi

Key details:

  • vm.max_map_count=262144 is required for OpenSearch (handled by init container)
  • Single-node discovery for homelab simplicity
  • 50GB storage for security event retention
  • Node selector ensures it runs on Worker 1
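
Once the StatefulSet is applied, a rough readiness check looks like this (a sketch; the plain-HTTP health call mirrors the INDEXER_URL the manager uses below, so adjust the scheme and credentials if the OpenSearch security plugin is enabled):

# Wait for the indexer pod, then check OpenSearch cluster health from inside it
# (assumes curl is available inside the indexer image)
kubectl -n wazuh rollout status statefulset/wazuh-indexer --timeout=10m
kubectl -n wazuh exec wazuh-indexer-0 -c indexer -- \
  curl -s http://localhost:9200/_cluster/health?pretty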

Wazuh Manager (Deployment):

spec:
  template:
    spec:
      containers:
      - name: manager
        image: wazuh/wazuh-manager:4.7.1
        env:
        - name: INDEXER_URL
          value: "http://wazuh-indexer:9200"
        ports:
        - containerPort: 1514  # Agent connections
        - containerPort: 1515  # Agent registration
        - containerPort: 55000 # API

Service (NodePort):

apiVersion: v1
kind: Service
metadata:
  name: wazuh-manager
  namespace: wazuh
spec:
  type: NodePort
  ports:
  - name: agents
    port: 1514
    nodePort: 31514
  - name: registration
    port: 1515
    nodePort: 31515

This allows Wazuh agents on any network to connect to 10.88.145.181:31514.
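
A quick way to confirm the NodePort is actually reachable from a prospective agent host (assuming nc/netcat is installed there):

# TCP reachability check against the agent-connection NodePort on Worker 1
nc -zv 10.88.145.181 31514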

Phase 5: n8n Stack Deployment

PostgreSQL (StatefulSet):

spec:
  template:
    spec:
      nodeSelector:
        kubernetes.io/hostname: k3s-worker2
      containers:
      - name: postgres
        image: postgres:15-alpine
        env:
        - name: POSTGRES_USER
          valueFrom:
            secretKeyRef:
              name: n8n-credentials
              key: postgres-user
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
  - spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: local-path
      resources:
        requests:
          storage: 5Gi

n8n (Deployment):

spec:
  template:
    spec:
      containers:
      - name: n8n
        image: n8nio/n8n:latest
        env:
        - name: DB_TYPE
          value: "postgresdb"
        - name: DB_POSTGRESDB_HOST
          value: "n8n-postgres"
        - name: GENERIC_TIMEZONE
          value: "America/Chicago"
        - name: WEBHOOK_URL
          value: "http://n8n.n8n.svc.cluster.local:5678/"

Phase 6: MCP Server Deployment with KEDA

This is where it gets interesting. The MCP servers need to scale based on demand, but we don’t want them consuming resources when idle.

KEDA ScaledObject for wazuh-mcp-server:

apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: wazuh-mcp-scaler
  namespace: mcp
spec:
  scaleTargetRef:
    name: wazuh-mcp-server
  pollingInterval: 15
  cooldownPeriod: 300
  minReplicaCount: 0
  maxReplicaCount: 5
  triggers:
  - type: prometheus
    metadata:
      serverAddress: http://prometheus.monitoring:9090
      metricName: cortex_mcp_requests_total
      threshold: '10'
      query: |
        sum(rate(cortex_mcp_requests_total{mcp_server="wazuh"}[2m]))

How it works:

  • When no requests: scales to 0 (no resource usage)
  • When requests arrive: scales up within seconds
  • High load: can scale to 5 replicas
  • After 5 minutes of inactivity: scales back to 0
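
To watch this in action, check the ScaledObject, then watch the Deployment's replica count move as MCP traffic starts and stops; the trigger can also be sanity-checked directly against Prometheus (a sketch, using the server address from the trigger above):

kubectl -n mcp get scaledobject wazuh-mcp-scaler
kubectl -n mcp get deployment wazuh-mcp-server -w

# Query the same metric KEDA evaluates
curl -sG http://prometheus.monitoring:9090/api/v1/query \
  --data-urlencode 'query=sum(rate(cortex_mcp_requests_total{mcp_server="wazuh"}[2m]))'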

Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: wazuh-mcp-server
  namespace: mcp
spec:
  template:
    spec:
      nodeSelector:
        kubernetes.io/hostname: k3s-worker2
      containers:
      - name: wazuh-mcp
        image: wazuh-mcp-server:latest
        imagePullPolicy: Never  # Use local image
        envFrom:
        - configMapRef:
            name: wazuh-mcp-config
        - secretRef:
            name: wazuh-mcp-credentials

Same pattern for n8n-mcp-server on port 3001.

Phase 7: Integration Configuration

The agent prepared the Wazuh → n8n webhook integration:

Wazuh ossec.conf integration block:

<integration>
  <name>custom-webhook</name>
  <hook_url>http://n8n.n8n.svc.cluster.local:5678/webhook/wazuh-alerts</hook_url>
  <level>7</level>
  <alert_format>json</alert_format>
</integration>
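
Activating this block was deliberately left as a manual step (more on that below). Roughly, it means adding the block to ossec.conf inside the manager pod, making sure a matching executable exists at /var/ossec/integrations/custom-webhook (Wazuh expects custom integration names to start with custom-), and restarting the manager; a sketch:

# Restart the Wazuh manager after editing ossec.conf in the pod
kubectl -n wazuh exec deploy/wazuh-manager -- /var/ossec/bin/wazuh-control restart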

n8n webhook workflow:

{
  "name": "Wazuh Security Alerts",
  "nodes": [
    {
      "type": "n8n-nodes-base.webhook",
      "parameters": {
        "path": "wazuh-alerts",
        "responseMode": "onReceived"
      }
    },
    {
      "type": "n8n-nodes-base.filter",
      "parameters": {
        "conditions": {
          "number": [
            {
              "value1": "={{$json.rule.level}}",
              "operation": "largerEqual",
              "value2": 7
            }
          ]
        }
      }
    },
    {
      "type": "n8n-nodes-base.function",
      "parameters": {
        "functionCode": "// Process and enrich alert data\\nreturn items;"
      }
    }
  ]
}
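
Before relying on real alerts, the pipeline can be smoke-tested by posting a fake alert straight to the webhook from inside the cluster (a sketch; the payload fields just mimic the rule.level filter above, and the /webhook/ path only responds once the workflow is activated):

kubectl run webhook-test --rm -i --restart=Never --image=curlimages/curl -- \
  curl -s -X POST http://n8n.n8n.svc.cluster.local:5678/webhook/wazuh-alerts \
    -H "Content-Type: application/json" \
    -d '{"rule":{"level":8,"description":"test alert"},"agent":{"name":"smoke-test"}}'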

The Proxmox API Challenge

The most interesting technical challenge was executing all these operations through the Proxmox API. Direct SSH access was not allowed - everything had to go through the QEMU Guest Agent.

How the Proxmox API Works

1. Execute a command:

curl -k -X POST \
  "$PROXMOX_API/nodes/pve01/qemu/310/agent/exec" \
  -H "Authorization: PVEAPIToken=$PROXMOX_TOKEN" \
  -d '{"command":["bash","-c","kubectl get pods"]}'

Response:

{
  "data": {
    "pid": 123456
  }
}

2. Wait for completion:

sleep 5

3. Retrieve results:

curl -k -X GET \
  "$PROXMOX_API/nodes/pve01/qemu/310/agent/exec-status?pid=123456" \
  -H "Authorization: PVEAPIToken=$PROXMOX_TOKEN"

Response:

{
  "data": {
    "exitcode": 0,
    "out-data": "BASE64_ENCODED_OUTPUT"
  }
}

4. Decode output:

echo "BASE64_ENCODED_OUTPUT" | base64 -d

The Automation Script

The AI agent created a deployment script that orchestrates all operations:

#!/bin/bash
# deploy-via-proxmox.sh

pve_exec() {
    local vmid=$1
    local cmd=$2

    # Build the JSON payload with jq so quotes and newlines in $cmd survive escaping
    payload=$(jq -cn --arg c "$cmd" '{command: ["bash", "-c", $c]}')

    # Execute via qemu-agent
    response=$(curl -k -s \
        -H "Authorization: PVEAPIToken=$PROXMOX_TOKEN" \
        -X POST \
        "$PROXMOX_API/nodes/$PROXMOX_NODE/qemu/$vmid/agent/exec" \
        -d "$payload")

    pid=$(echo "$response" | jq -r '.data.pid')
    sleep 5

    # Get output
    result=$(curl -k -s \
        -H "Authorization: PVEAPIToken=$PROXMOX_TOKEN" \
        "$PROXMOX_API/nodes/$PROXMOX_NODE/qemu/$vmid/agent/exec-status?pid=$pid")

    echo "$result" | jq -r '.data["out-data"]' | base64 -d
}

# Build images
pve_exec 310 "cd /tmp && git clone ... && docker build ..."

# Deploy manifests
pve_exec 310 "kubectl apply -f - <<EOF
apiVersion: v1
kind: Namespace
...
EOF"

This approach allowed the AI agent to:

  • Build Docker images remotely
  • Apply Kubernetes manifests
  • Verify deployment status
  • Configure services

All without ever establishing an SSH connection.


The Results

Deployment Metrics

Total Time: ~45 minutes

  • Repository creation: 2 minutes
  • Docker image building: 15 minutes
  • Kubernetes deployment: 20 minutes
  • Verification and documentation: 8 minutes

Resources Consumed:

  • K3s Master (VM 310): 88.7% memory utilization
  • Worker 1 (VM 311): 91.6% memory utilization (Wazuh Indexer is hungry!)
  • Worker 2 (VM 312): 82.8% memory utilization

Files Created:

  • 22 documentation files
  • 13 Kubernetes manifests
  • 5 deployment scripts
  • 2 GitHub repositories
  • Complete integration configurations

What’s Running

$ kubectl get pods -A | grep -E 'wazuh|n8n|mcp'

NAMESPACE   NAME                              READY   STATUS    AGE
wazuh       wazuh-indexer-0                   1/1     Running   35m
wazuh       wazuh-manager-7d8f9b5c6d-kx2m4   1/1     Running   34m
wazuh       wazuh-dashboard-6b9c8d7f9-p8n2k  1/1     Running   34m
n8n         n8n-postgres-0                    1/1     Running   32m
n8n         n8n-8c7d6b5f4-q9m3k              1/1     Running   30m
mcp         wazuh-mcp-server-5d8c9b7f6-t4k2m 1/1     Running   28m
mcp         n8n-mcp-server-7f9c8d6b5-x5n3m   1/1     Running   28m
$ kubectl get svc -n wazuh

NAME              TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)
wazuh-indexer     ClusterIP  None            <none>        9200/TCP,9300/TCP
wazuh-manager     NodePort   10.43.215.143   <none>        1514:31514/TCP,1515:31515/TCP
wazuh-dashboard   ClusterIP  10.43.189.67    <none>        5601/TCP

Testing the Integration

1. Connect a Wazuh Agent:

# On any host
curl -s https://packages.wazuh.com/4.x/apt/pool/main/w/wazuh-agent/wazuh-agent_4.7.1-1_amd64.deb \
  -o wazuh-agent.deb
dpkg -i wazuh-agent.deb

# Configure the agent (minimal snippet; in practice, edit the <client> block of the existing ossec.conf rather than replacing the whole file)
cat > /var/ossec/etc/ossec.conf <<EOF
<ossec_config>
  <client>
    <server>
      <address>10.88.145.181</address>
      <port>31514</port>
    </server>
  </client>
</ossec_config>
EOF

systemctl restart wazuh-agent

2. Generate a Test Alert:

# Trigger a high-severity alert
sudo systemctl stop wazuh-agent
sudo systemctl start wazuh-agent
# This generates a Level 7 alert

3. Verify in n8n:

kubectl port-forward -n n8n svc/n8n 5678:5678
# Open http://localhost:5678
# Check webhook executions

4. Query via MCP:

# Via Cortex or any MCP client
{
  "tool": "list_alerts",
  "server": "wazuh-mcp-server",
  "parameters": {
    "severity": 7,
    "limit": 10
  }
}

Key Learnings

1. AI Agents Can Handle Complex Deployments

The Cortex development-master agent successfully:

  • Created GitHub repositories with proper structure
  • Built multi-architecture Docker images
  • Generated Kubernetes manifests following best practices
  • Deployed across multiple nodes with proper affinity rules
  • Configured inter-service communication
  • Created comprehensive documentation

This wasn’t a simple “run this script” scenario - the agent had to understand the architecture, make decisions about resource allocation, and troubleshoot issues.

2. Proxmox API is Powerful

Using the Proxmox API exclusively (no SSH) had benefits:

  • Auditability: All operations logged in Proxmox
  • Security: No SSH keys to manage
  • Consistency: Same interface for all VMs
  • Automation-friendly: Perfect for AI agents

The QEMU Guest Agent provides sufficient capabilities for cluster management.

3. Local Image Loading Solves Isolation

For network-isolated K3s clusters:

docker save image:tag | ctr -n k8s.io image import -

This is faster and more secure than:

  • Setting up a local registry
  • Configuring registry authentication
  • Managing image pulls across nodes
  • Dealing with rate limits

4. KEDA Makes MCP Servers Cost-Effective

Scaling MCP servers to zero when idle means:

  • No wasted resources during low activity
  • Fast scale-up when needed (< 30 seconds)
  • Better multi-tenancy on shared clusters

Our MCP servers now consume:

  • 0 MB when idle (scaled to 0)
  • 256 MB per replica when active
  • Up to 1.25 GB at max scale (5 replicas)

5. Integration Requires Planning

The webhook integration between Wazuh and n8n required:

  • Understanding Kubernetes DNS (n8n.n8n.svc.cluster.local)
  • Configuring Wazuh’s integration format
  • Creating the n8n workflow beforehand
  • Testing with real alerts

The AI agent prepared all configuration files but activation required manual steps - a good safety measure for production systems.


What’s Next

Immediate Improvements

1. TLS Configuration

Currently all internal communication is HTTP. Adding TLS:

# cert-manager for automated certificates
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.13.0/cert-manager.yaml

# Create ClusterIssuer
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
    - http01:
        ingress:
          class: traefik

2. External Access

Add an Ingress for external access:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: wazuh-dashboard
  namespace: wazuh
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
  - hosts:
    - wazuh.homelab.local
    secretName: wazuh-tls
  rules:
  - host: wazuh.homelab.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: wazuh-dashboard
            port:
              number: 5601

3. Persistent Alerting

Extend the n8n workflow to:

  • Send alerts to Slack/Teams/Discord
  • Create tickets in issue trackers
  • Trigger automated remediation
  • Store in time-series database

4. Multi-Node Wazuh Indexer

For production, scale Wazuh Indexer:

spec:
  replicas: 3  # Multi-node cluster
  template:
    spec:
      containers:
      - name: indexer
        env:
        - name: discovery.seed_hosts
          value: "wazuh-indexer-0,wazuh-indexer-1,wazuh-indexer-2"
        - name: cluster.initial_master_nodes
          value: "wazuh-indexer-0,wazuh-indexer-1,wazuh-indexer-2"

Long-term Vision

1. Full Cortex Integration

The MCP servers allow any Cortex agent to:

  • Query security alerts: “Show me all critical alerts from the last hour”
  • Create workflows: “Build a workflow that pages on-call when we see brute-force attempts”
  • Investigate incidents: “What was the root cause of alert #12345?”

2. Automated Incident Response

When Wazuh detects:

  • Brute force attack → n8n workflow → Firewall rule created
  • Malware detection → n8n workflow → Host isolated
  • Policy violation → n8n workflow → Ticket created and assigned

3. Security Analytics

Add ML-based threat detection:

  • Wazuh logs → Elasticsearch → ML models → Anomaly detection
  • n8n orchestrates the pipeline
  • MCP servers provide AI analysis

Conclusion

This deployment demonstrated that AI agents can handle complex infrastructure tasks end-to-end. The Cortex development-master agent:

  1. Created two GitHub repositories with proper structure
  2. Built Docker images for both MCP servers
  3. Deployed a complete SIEM stack across 3 K3s nodes
  4. Configured integration between Wazuh and n8n
  5. Set up KEDA autoscaling for MCP servers
  6. Generated comprehensive documentation

All through the Proxmox API, with zero manual intervention after the initial prompt.

The result is a production-ready security monitoring platform that:

  • Scales to zero when idle (cost-effective)
  • Handles security events in real-time
  • Integrates with AI agents via MCP
  • Provides workflow automation via n8n
  • Runs entirely on homelab hardware

  • Total cost: $0 (using existing homelab resources)
  • Total time: 45 minutes (mostly automated)
  • Total manual steps: 4 (activate webhook, import workflow, configure integration, verify)

The future of infrastructure management isn’t just “infrastructure as code” - it’s “infrastructure as conversation.” You describe what you want, AI agents figure out how to build it, and they handle the deployment.

Welcome to the age of AI-powered DevOps.


Resources

GitHub Repositories:

  • https://github.com/ry-ops/wazuh-mcp-server
  • https://github.com/ry-ops/n8n-mcp-server

Learn More About Cortex

Want to see how Cortex orchestrates AI agents for infrastructure automation? Visit the Meet Cortex page to learn about autonomous AI orchestration, multi-agent coordination, and dynamic scaling from 1 to 100+ agents on-demand.


This blog post was written by a human (me) but the deployment was orchestrated by AI agents. The future is collaborative.

#kubernetes #k3s #wazuh #n8n #ai-agents #infrastructure #proxmox #mcp #automation #Cortex