New MCP Server Ecosystem Integrations

Ryan Dahlberg
October 20, 2025 · 13 min read

The Model Context Protocol (MCP) provides a standardized way for AI systems to interact with external tools and data sources. Today, we’re announcing a major expansion of Cortex’s MCP server ecosystem, with 25 new integrations covering development tools, databases, cloud services, and business applications.

These integrations transform Cortex from an isolated orchestration system into a connected platform that spans your entire development and operations environment. AI agents can now read from databases, trigger deployments, update project management tools, and interact with APIs - all through a consistent, type-safe interface.

What is Model Context Protocol?

Before diving into specific integrations, it’s worth explaining what MCP is and why it matters.

Traditional AI integrations are bespoke. Every tool requires custom code to handle authentication, rate limiting, error handling, and data formatting. This approach doesn’t scale - each new integration multiplies complexity.

MCP provides a standard protocol for AI-tool communication. Tools expose capabilities through MCP servers, and AI systems consume those capabilities through MCP clients. The protocol handles:

  • Resource discovery: What can this tool do?
  • Schema validation: What inputs does each operation require?
  • Authentication: How does the AI authenticate with the tool?
  • Error handling: What happens when operations fail?
  • Rate limiting: How do we respect API quotas?

With MCP, adding a new integration means implementing a server that speaks the protocol. Once implemented, any MCP-compatible AI system can use it without custom code.
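
To make that concrete, here is a minimal sketch of discovery and invocation from the client side, written against the open-source MCP TypeScript SDK (@modelcontextprotocol/sdk). The server package and tool name are placeholders, not a real integration.

import { Client } from '@modelcontextprotocol/sdk/client/index.js';
import { StdioClientTransport } from '@modelcontextprotocol/sdk/client/stdio.js';

// Launch and connect to an MCP server over stdio
// (the server package and tool name below are placeholders)
const transport = new StdioClientTransport({
  command: 'npx',
  args: ['example-mcp-server']
});
const client = new Client({ name: 'example-client', version: '1.0.0' });
await client.connect(transport);

// Resource discovery: ask the server what it can do
const { tools } = await client.listTools();
console.log(tools.map((tool) => tool.name));

// Invoke a tool; arguments are validated against the tool's input schema
const result = await client.callTool({
  name: 'search_issues',
  arguments: { query: 'routing bug' }
});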

Cortex has supported MCP since launch, but our ecosystem was limited. This release dramatically expands what’s possible.

Development Tool Integrations

AI agents are increasingly involved in software development workflows. These integrations enable agents to interact with development tools directly.

GitHub Integration

The GitHub MCP server provides comprehensive access to repository operations:

  • Repository management: Create repos, manage branches, configure settings
  • Issues and pull requests: Open issues, review PRs, merge code
  • Actions and workflows: Trigger workflows, monitor runs, analyze results
  • Code search: Find code patterns across repositories
  • Security scanning: Access vulnerability reports and dependency alerts

Example workflow: An agent monitoring production errors can search the codebase for relevant code, open an issue with context and a proposed fix, create a branch, commit changes, and open a PR - all autonomously.
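
A rough sketch of that workflow in the Cortex SDK style shown later in this post. createIssue and createPullRequest mirror the typed examples below; searchCode and createBranch are illustrative method names, not confirmed API.

const github = cortex.mcp.github;

// Find code related to the error (searchCode is an illustrative method name)
const matches = await github.searchCode({
  owner: 'acme',
  repo: 'cortex',
  query: 'routeRequest timeout'
});

// Open an issue with context and a proposed fix
const issue = await github.createIssue({
  owner: 'acme',
  repo: 'cortex',
  title: 'Routing timeouts in production',
  body: `Errors traced to ${matches[0].path}. Proposed fix below.`,
  labels: ['bug', 'priority:high']
});

// Branch and open a PR (createBranch is an illustrative method name)
await github.createBranch({ owner: 'acme', repo: 'cortex', branch: 'fix-routing', from: 'main' });
const pr = await github.createPullRequest({
  owner: 'acme',
  repo: 'cortex',
  title: `Fix routing timeouts (#${issue.number})`,
  head: 'fix-routing',
  base: 'main'
});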

GitLab Integration

For teams using GitLab, we provide equivalent functionality:

  • Full API coverage including projects, merge requests, and pipelines
  • CI/CD integration for triggering and monitoring jobs
  • Issue tracking with custom field support
  • Code quality and security reports
  • Container registry access

Jira Integration

Project management integration enables agents to participate in planning and tracking:

  • Issue management: Create, update, search, and transition issues
  • Sprint planning: Add issues to sprints, track velocity, analyze trends
  • Custom fields: Read and write custom field values
  • Notifications: Monitor issue changes and trigger on events
  • Reporting: Generate burndown charts and other metrics

An agent can monitor system health, detect anomalies, open Jira tickets automatically, and assign them to the appropriate team based on the issue type.

Linear Integration

For teams using Linear, we provide a streamlined integration:

  • Issue and project management
  • Cycle planning and tracking
  • Team and user management
  • Webhook integration for real-time updates
  • GraphQL API access for complex queries

Database Integrations

AI agents often need to query and analyze data. These integrations provide safe, controlled database access.

PostgreSQL Integration

The Postgres MCP server enables SQL query execution with built-in safety features:

  • Read-only mode: Restrict agents to SELECT queries
  • Query validation: Block dangerous operations like DELETE without WHERE
  • Row limits: Automatically limit result set sizes
  • Connection pooling: Efficient resource usage across multiple agents
  • Schema introspection: Agents can understand table structure

Security consideration: Agents connect as restricted database users that can only access specific schemas. Sensitive tables remain inaccessible even if an agent misbehaves.
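
As an illustration, a read-only query might look like the following; query and listTables are illustrative method names, not confirmed API.

const db = cortex.mcp.postgres;

// Schema introspection before querying (listTables is an illustrative method name)
const tables = await db.listTables({ schema: 'analytics' });

// Read-only query; writes are rejected and results are capped at max_rows
const rows = await db.query({
  sql: 'SELECT plan, COUNT(*) AS accounts FROM analytics.subscriptions GROUP BY plan'
});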

MongoDB Integration

For document databases, the MongoDB integration provides:

  • Query execution with aggregation pipeline support
  • Document CRUD operations
  • Index management and query optimization analysis
  • Collection schema inference
  • Change stream monitoring for real-time updates

Redis Integration

The Redis integration supports common caching and pub/sub patterns:

  • Get/set operations with TTL support
  • List, set, and sorted set operations
  • Pub/sub for event-driven workflows
  • Stream processing for event logs
  • Lua script execution for atomic operations

Elasticsearch Integration

Full-text search and analytics capabilities:

  • Index management and document operations
  • Complex search queries with aggregations
  • Bulk operations for efficiency
  • Index templates and mappings
  • Performance analysis and optimization suggestions

Cloud Platform Integrations

Modern applications run on cloud infrastructure. These integrations enable agents to manage cloud resources.

AWS Integration

Comprehensive AWS service coverage:

  • Compute: EC2, Lambda, ECS, EKS management
  • Storage: S3 operations, EBS management, backup coordination
  • Database: RDS, DynamoDB, ElastiCache administration
  • Networking: VPC, Load Balancer, Route 53 configuration
  • Security: IAM, Secrets Manager, KMS integration
  • Monitoring: CloudWatch metrics, logs, and alarms

Agents can scale infrastructure based on load, respond to alarms, manage deployments, and optimize costs.
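
A sketch of an alarm-driven scaling step; getAlarm and updateService are illustrative method names, not confirmed API.

const aws = cortex.mcp.aws;

// Check a CloudWatch alarm and scale the affected ECS service
// (getAlarm and updateService are illustrative method names)
const alarm = await aws.cloudwatch.getAlarm({ name: 'api-p95-latency' });

if (alarm.state === 'ALARM') {
  await aws.ecs.updateService({
    cluster: 'production',
    service: 'api',
    desiredCount: 8
  });
}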

Google Cloud Integration

Equivalent coverage for Google Cloud Platform:

  • Compute Engine, Cloud Functions, GKE
  • Cloud Storage, Filestore, Persistent Disks
  • Cloud SQL, Firestore, Memorystore
  • VPC, Load Balancing, Cloud DNS
  • Cloud IAM, Secret Manager, Cloud KMS
  • Cloud Monitoring and Logging

Azure Integration

Complete Azure service integration:

  • Virtual Machines, Azure Functions, AKS
  • Blob Storage, File Storage, Managed Disks
  • Azure SQL, Cosmos DB, Cache for Redis
  • Virtual Networks, Load Balancer, Traffic Manager
  • Azure AD, Key Vault, Azure Security
  • Azure Monitor and Application Insights

Communication Platform Integrations

AI agents often need to communicate with humans. These integrations enable natural interaction through familiar channels.

Slack Integration

Deep Slack integration for team communication:

  • Messaging: Send messages, create threads, add reactions
  • Channels: Create channels, manage members, set topics
  • Users and groups: Look up users, manage user groups
  • Interactive components: Buttons, menus, modals for rich interactions
  • Webhooks: Respond to events in real-time
  • File sharing: Upload and manage files

An agent can post deployment notifications, respond to questions via slash commands, and escalate issues by messaging on-call engineers.
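
For example, a deployment notification with an interactive rollback button might look like this. postMessage is an illustrative method name; the payload follows Slack's Block Kit format.

const slack = cortex.mcp.slack;

// Deployment notification with a rollback button (postMessage is illustrative;
// the payload follows Slack's Block Kit format)
await slack.postMessage({
  channel: '#deployments',
  text: 'cortex-api v2.1 deployed to production',
  blocks: [
    {
      type: 'section',
      text: { type: 'mrkdwn', text: ':rocket: *cortex-api v2.1* deployed to production' }
    },
    {
      type: 'actions',
      elements: [
        { type: 'button', text: { type: 'plain_text', text: 'Roll back' }, action_id: 'rollback_v2_1' }
      ]
    }
  ]
});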

Discord Integration

For communities using Discord:

  • Server and channel management
  • Message sending with embed support
  • Role and permission management
  • Webhook integration
  • Voice channel status updates

Microsoft Teams Integration

Enterprise communication integration:

  • Team and channel management
  • Adaptive cards for rich formatting
  • Meeting scheduling and management
  • File collaboration integration
  • Bot framework support

Email Integration

SMTP and API-based email capabilities:

  • Send transactional emails via SendGrid, Mailgun, or Amazon SES
  • Template support with variable substitution
  • Delivery tracking and analytics
  • Bounce and complaint handling
  • Bulk email operations

Business Application Integrations

AI agents can streamline business processes through these application integrations.

Salesforce Integration

CRM integration for sales and customer success workflows:

  • Lead, opportunity, and account management
  • Custom object support
  • Salesforce Reports and Dashboards
  • Workflow and approval process integration
  • Real-time change tracking via platform events

An agent monitoring product usage can automatically create opportunities when accounts reach expansion thresholds.

HubSpot Integration

Marketing and sales automation:

  • Contact and company management
  • Deal pipeline tracking
  • Email campaign management
  • Form submission handling
  • Analytics and reporting

Stripe Integration

Payment processing integration:

  • Customer and subscription management
  • Invoice creation and payment tracking
  • Refund processing
  • Webhook event handling
  • Financial reporting and analytics

Shopify Integration

E-commerce platform integration:

  • Product and inventory management
  • Order processing and fulfillment
  • Customer management
  • Discount and promotion creation
  • Sales analytics

Analytics and Observability Integrations

Understanding system behavior is critical. These integrations provide observability and analytics capabilities.

Datadog Integration

Application performance monitoring:

  • Metrics publishing
  • Log aggregation
  • Event tracking
  • Monitor management
  • Dashboard creation and querying

Agents can analyze metrics, detect anomalies, create monitors, and trigger alerts based on complex conditions.

Grafana Integration

Visualization and alerting:

  • Dashboard management
  • Data source configuration
  • Alert rule creation
  • Annotation creation for event tracking
  • Query execution across data sources

New Relic Integration

Full-stack observability:

  • APM data querying via NRQL
  • Infrastructure monitoring
  • Browser and mobile monitoring
  • Synthetic monitoring management
  • Alert policy configuration

Security and Compliance Integrations

Security is non-negotiable. These integrations help maintain security posture and compliance.

1Password Integration

Secrets management:

  • Read secrets for agent authentication
  • Rotate credentials automatically
  • Audit secret access
  • Manage vaults and access policies

Agents never store credentials directly - they retrieve them from 1Password when needed.
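
A sketch of runtime retrieval, assuming an illustrative readSecret method:

const op = cortex.mcp.onepassword;

// Fetch the credential at call time; it is never written to config or logs
// (readSecret is an illustrative method name)
const deployToken = await op.readSecret({
  vault: 'production',
  item: 'github-deploy-token',
  field: 'credential'
});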

HashiCorp Vault Integration

Enterprise secrets management:

  • Dynamic secrets generation
  • Lease management
  • Policy administration
  • Audit log access
  • Encryption as a service

Snyk Integration

Security scanning and vulnerability management:

  • Code vulnerability scanning
  • Dependency vulnerability detection
  • Container image scanning
  • Infrastructure as code scanning
  • License compliance checking

Agents can scan code automatically on PR creation, block merges for critical vulnerabilities, and create tickets for remediation.

Configuration and Deployment

MCP server integrations are configured through Cortex’s unified configuration system. Each integration declares required credentials and permissions, and the platform handles secure credential management.

Basic Configuration

mcp_servers:
  github:
    enabled: true
    credentials:
      token: ${GITHUB_TOKEN}
    rate_limit:
      requests_per_minute: 100

  postgres:
    enabled: true
    credentials:
      host: ${DB_HOST}
      database: ${DB_NAME}
      user: ${DB_USER}
      password: ${DB_PASSWORD}
    safety:
      read_only: true
      max_rows: 1000

Agent Access Control

Not all agents should access all integrations. Use role-based access control to limit integration access:

agents:
  deployment_agent:
    permissions:
      mcp_servers:
        - github
        - aws
        - slack

  analytics_agent:
    permissions:
      mcp_servers:
        - postgres
        - elasticsearch
        - grafana

This prevents a compromised agent from accessing sensitive systems.

Security Considerations

Giving AI agents access to production systems requires careful security design. Here’s how we approach it:

Principle of Least Privilege

Each agent runs with minimal permissions. An agent that deploys code doesn’t need database access. An agent that generates reports doesn’t need write access to production data.

Credential Isolation

Credentials are never stored in agent code or configuration. They’re retrieved from secrets management systems at runtime and never logged or persisted.

Audit Logging

Every MCP server operation is logged with:

  • Which agent performed the operation
  • What operation was performed
  • When it occurred
  • Whether it succeeded or failed
  • What data was accessed or modified

These logs support security audits and incident investigation.
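
An illustrative shape for one of these records; the field names are assumptions, not the actual log schema.

// Illustrative audit record (field names are assumptions)
const entry = {
  agent: 'deployment_agent',
  server: 'github',
  operation: 'mergePullRequest',
  timestamp: '2025-10-20T14:32:07Z',
  outcome: 'success',
  resources: ['acme/cortex pull #482']
};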

Rate Limiting

Each integration has configurable rate limits to prevent abuse. If an agent misbehaves and floods an integration with requests, rate limiting stops it before damage occurs.

Read-Only by Default

Where possible, integrations default to read-only mode. Write access must be explicitly enabled and justified. This reduces risk of accidental or malicious data modification.

Network Segmentation

Production integrations run in isolated network segments with strict firewall rules. Even if an agent is compromised, it can only access explicitly allowed systems.

Performance and Reliability

MCP server integrations are designed for production use with high availability and performance.

Connection Pooling

Database and API connections use pooling to minimize latency and reduce overhead. Connections are shared across agents and managed by the platform.

Caching

Frequently accessed data is cached to reduce API calls and improve response times. Cache entries are invalidated automatically, either when their TTL expires or in response to relevant events.

Retry Logic

Transient failures are handled with exponential backoff retry logic. Network hiccups don’t cause agent failures - they’re automatically retried with appropriate delays.
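
The underlying pattern is standard; a generic sketch of exponential backoff with jitter looks roughly like this:

// Generic sketch: retry with exponential backoff and jitter
// (real integrations also distinguish transient from permanent errors)
async function withRetry<T>(fn: () => Promise<T>, maxAttempts = 5): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= maxAttempts) throw err;
      // 200ms, 400ms, 800ms, ... plus jitter to avoid synchronized retries
      const delayMs = 200 * 2 ** (attempt - 1) + Math.random() * 100;
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}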

Circuit Breakers

If an integration becomes unhealthy, circuit breakers prevent cascading failures. The integration is marked degraded, agents receive clear errors, and the system continues functioning with reduced capabilities.
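
Conceptually, the breaker wraps each call and opens after repeated failures; a minimal sketch of the pattern:

// Minimal circuit-breaker sketch: open after repeated failures, retry after a cooldown
class CircuitBreaker {
  private failures = 0;
  private openedAt = 0;

  constructor(private threshold = 5, private cooldownMs = 30_000) {}

  async call<T>(fn: () => Promise<T>): Promise<T> {
    const open = this.failures >= this.threshold &&
      Date.now() - this.openedAt < this.cooldownMs;
    if (open) {
      throw new Error('Integration degraded: circuit open');
    }
    try {
      const result = await fn();
      this.failures = 0; // a healthy call closes the circuit
      return result;
    } catch (err) {
      this.failures += 1;
      if (this.failures >= this.threshold) this.openedAt = Date.now();
      throw err;
    }
  }
}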

Metrics and Monitoring

Every integration exposes metrics:

  • Request rate and latency
  • Error rate by type
  • Cache hit rate
  • Connection pool utilization
  • Rate limit headroom

These metrics feed into Cortex’s observability system for monitoring and alerting.

Developer Experience

Building on MCP integrations is straightforward. The SDK provides high-level abstractions that hide protocol details.

Type-Safe API

All integrations have TypeScript types for requests and responses:

const github = cortex.mcp.github;

// Type-safe issue creation
const issue = await github.createIssue({
  repo: 'cortex',
  owner: 'acme',
  title: 'Bug in routing logic',
  body: 'Detailed description...',
  labels: ['bug', 'priority:high']
});

// Compile-time validation
const pr = await github.createPullRequest({
  repo: 'cortex',
  owner: 'acme',
  head: 'fix-routing',
  base: 'main',
  // TypeScript error: missing required field 'title'
});

Error Handling

Integration errors are properly typed and include context:

try {
  await github.mergePullRequest({ owner, repo, pullNumber });
} catch (error) {
  if (error instanceof MCPError) {
    if (error.code === 'RATE_LIMIT_EXCEEDED') {
      // Wait and retry
    } else if (error.code === 'CONFLICT') {
      // PR has conflicts, can't merge
    }
  }
}

Testing Support

Mock servers enable testing without hitting real APIs:

const mockGitHub = cortex.mcp.mock('github');

mockGitHub.stub('createIssue', async (params) => ({
  id: 123,
  number: 456,
  url: 'https://github.com/...'
}));

// Test agent behavior with mock integration
await testAgent(mockGitHub);

Community Contributions

Many of these integrations came from community contributions. We’re grateful to everyone who built MCP servers and shared them with the ecosystem.

Building an MCP server is straightforward. The protocol specification is open, and we provide SDKs in multiple languages. If you need an integration that doesn’t exist yet, you can build it yourself and share it with the community.
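
As a starting point, here is a minimal server sketch using the open-source MCP TypeScript SDK (@modelcontextprotocol/sdk); the weather tool is purely illustrative.

import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
import { z } from 'zod';

// A tiny server exposing a single (made-up) tool
const server = new McpServer({ name: 'weather', version: '1.0.0' });

server.tool(
  'get_forecast',
  { city: z.string() },
  async ({ city }) => ({
    content: [{ type: 'text', text: `Forecast for ${city}: sunny, 22°C` }]
  })
);

// Serve over stdio so any MCP-compatible client can launch and call it
await server.connect(new StdioServerTransport());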

What’s Next

This release establishes a strong foundation, but we’re just getting started. Upcoming MCP server integrations include:

  • Figma for design workflow automation
  • Notion for documentation and knowledge management
  • Airtable for flexible data management
  • Kubernetes for container orchestration
  • Terraform for infrastructure as code
  • Jenkins for CI/CD pipelines
  • PagerDuty for incident management
  • Zendesk for customer support

We’re also working on:

  • Visual integration builder for no-code MCP server creation
  • Integration testing framework
  • Performance benchmarking suite
  • Security scanning for integration configurations

Getting Started

All integrations are available now in Cortex v2.1. To use them:

  1. Update to the latest version of Cortex
  2. Enable desired integrations in your configuration
  3. Provide credentials via environment variables or secrets management
  4. Grant agents access through RBAC policies
  5. Start building workflows that span your entire stack

Complete documentation for each integration is available at docs.cortex.dev/integrations.

Conclusion

The expanded MCP server ecosystem transforms what’s possible with Cortex. AI agents are no longer isolated systems - they’re integrated participants in your development, operations, and business workflows.

This release represents months of work from our team and the community. We’re excited to see what you build with these new capabilities.


For detailed integration documentation and examples, visit docs.cortex.dev/integrations.

#Product Updates #MCP #Integrations #APIs #Developer Tools