
The MCP Integration That Connects Voxe to Any System You Run
Every AI support platform publishes an integrations list. HubSpot, Salesforce, Shopify, Stripe — the usual names, the usual logos. These integrations cover a large share of what most businesses run. They do not cover everything. And the systems that fall outside the list are often the most important ones: the internal database your ops team built five years ago, the proprietary pricing engine your product team owns, the industry-specific platform that has no off-the-shelf connector, the internal knowledge system that exists nowhere except your company's servers.
The Model Context Protocol (MCP) Client integration in Voxe was built for exactly that gap. It is not another named connector. It is the mechanism that allows your AI agent to reach any system you control — by connecting directly to an MCP server you host yourself.
TL;DR
- The Model Context Protocol (MCP) is an open standard that allows AI agents to call tools and query data from external systems through a standardized interface.
- Voxe's MCP Client integration connects to any MCP server you host — your internal databases, custom APIs, proprietary tools, or legacy systems — without requiring a pre-built connector.
- Authentication supports bearer tokens, custom headers, and OAuth2, with all sensitive values encrypted and masked after save.
- The integration links to your AI agent automatically — no manual configuration of the underlying workflow required.
- This is what moves Voxe from "AI that knows about your product" to "AI that knows about your entire business operation."
What MCP Is and Why It Changes the Integration Problem
The Model Context Protocol is an open standard introduced by Anthropic in 2024. Its purpose is simple: give AI models a consistent, structured way to call tools and retrieve data from external systems — without each integration needing its own bespoke implementation.
Before MCP, connecting an AI to an external system required building a custom integration for each connection. Different APIs, different authentication schemes, different data formats — each one a separate engineering project. The result was that AI systems could only reach the systems someone had already built a connector for.
MCP changes the architecture. An MCP server exposes a set of tools — functions the AI can call with defined inputs and outputs. Any system that can be wrapped in an MCP server becomes immediately accessible to any AI that speaks MCP. The integration work happens once, at the server level. After that, the AI can call those tools the same way it calls any other capability.
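Concretely, the MCP specification has a server advertise each tool as a name, a human-readable description, and a JSON Schema for its inputs. A minimal sketch of that shape — the `check_inventory` tool itself is a made-up example, not part of any real server:

```python
# Sketch of the tool-listing shape an MCP server advertises.
# The "name"/"description"/"inputSchema" keys follow the MCP spec;
# the specific tool is a hypothetical example.
check_inventory_tool = {
    "name": "check_inventory",
    "description": "Return the live stock level for a given SKU.",
    "inputSchema": {  # standard JSON Schema
        "type": "object",
        "properties": {
            "sku": {"type": "string", "description": "Product SKU to look up"}
        },
        "required": ["sku"],
    },
}

# The AI reads the name, description, and input schema to decide
# when to call the tool and what arguments to pass.
print(check_inventory_tool["name"])
```

The schema is what makes the call structured: the model knows not just that the tool exists, but exactly which arguments it takes and which are required.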
For businesses, this means the integration question is no longer "does Voxe support X?" It becomes "can we stand up an MCP server in front of X?" The answer, for any system with an API or queryable interface, is almost always yes.
What Voxe's MCP Client Integration Does
When you add an MCP Client integration in Voxe, you are connecting your AI agent to a server you host at an endpoint you control. Voxe's role is to authenticate against that server, register the connection with your AI workflow, and make the tools your server exposes available for the AI to call during conversations.
The practical effect: your AI agent can now do things that no pre-built integration covers.
What can live on an MCP server
The tools your MCP server exposes are entirely under your control. Common implementations include:
Internal databases. Query inventory levels, customer records, order history, or any structured data that lives in your own systems. The AI can pull a live record during a conversation rather than working from a static knowledge base.
Custom business logic. Eligibility checks, pricing calculations, discount validation, account status lookups — any logic that lives in your codebase and can be wrapped as a callable function.
Proprietary platforms. Industry-specific software, vertical SaaS tools, or legacy systems that no standard integration supports. If you can write a function that talks to it, MCP can expose it.
Internal knowledge systems. Documentation wikis, internal runbooks, operational procedures, engineering specs — content that is too sensitive or too specific to put in a public knowledge base but that your AI needs to reference accurately.
Live data feeds. Real-time pricing, stock levels, shipping estimates, service status — anything that changes faster than a static knowledge base can be updated.
How to Set Up the MCP Client Integration
The setup requires two things: an MCP server running at a reachable endpoint, and the credentials to authenticate against it. Voxe handles the rest.
Step 1: Set up your MCP server
Your MCP server is the component you build and host. It defines what tools the AI can call. Anthropic and the broader MCP ecosystem provide SDKs and documentation for standing up an MCP server in most common languages. The server exposes tools as defined functions with typed inputs and outputs — the AI uses these definitions to understand what each tool does and when to call it.
Your server runs at a URL accessible from Voxe's infrastructure. This can be a public HTTPS endpoint, a privately networked endpoint if you're running Voxe on-premise, or a tunneled local endpoint for development.
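To make "tools as defined functions" concrete, here is a sketch of the idea in plain Python — not a real MCP SDK (Anthropic's official SDKs handle the protocol wiring for you), just the pattern: a tool is a typed function registered with a description the AI can read:

```python
# Illustrative pattern only -- a real server would use an MCP SDK.
# A "tool" is a plain typed function plus a description the AI
# uses to decide when to call it.
from typing import Callable

TOOLS: dict[str, Callable] = {}

def tool(description: str):
    """Register a function as a callable tool with a description."""
    def register(fn: Callable) -> Callable:
        fn.description = description
        TOOLS[fn.__name__] = fn
        return fn
    return register

@tool("Return the live stock level for a given SKU.")
def check_inventory(sku: str) -> int:
    # A real server would query your inventory database here.
    fake_db = {"SKU-123": 42}
    return fake_db.get(sku, 0)

print(check_inventory("SKU-123"))  # -> 42
```

The typed signature and the description together form the contract: the protocol layer turns them into the schema the AI sees.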
Step 2: Add the integration in Voxe
- Navigate to Dashboard → Integrations
- Click Add Integration → MCP Client
- Fill in:
- Server Name — a label for this connection (e.g., "Inventory API", "Internal CRM")
- MCP Endpoint URL — the full URL of your MCP server
- Authentication Type — Bearer token, custom header, multiple headers, or OAuth2
- Auth Fields — the required credentials for your selected auth type
- Click Test Connection to verify the server is reachable and credentials are valid
- Click Save
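As a rough mental model of the auth options above, each type ultimately resolves to a set of HTTP headers sent with requests to your MCP endpoint. The sketch below is illustrative only — the field names are assumptions, not Voxe's actual internal schema:

```python
# Hedged sketch: how each auth type plausibly maps to HTTP request
# headers on calls to the MCP endpoint. Field names are hypothetical.
def build_auth_headers(auth_type: str, fields: dict) -> dict:
    if auth_type == "bearer":
        return {"Authorization": f"Bearer {fields['token']}"}
    if auth_type == "custom_header":
        return {fields["header_name"]: fields["header_value"]}
    if auth_type == "multiple_headers":
        return dict(fields)  # each entry becomes its own header
    if auth_type == "oauth2":
        # OAuth2 flows ultimately yield an access token,
        # also sent as a bearer header.
        return {"Authorization": f"Bearer {fields['access_token']}"}
    raise ValueError(f"unknown auth type: {auth_type}")

print(build_auth_headers("bearer", {"token": "abc123"}))
```

Whichever type you pick, your MCP server just needs to validate the resulting headers on each incoming request.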
Step 3: Confirm workflow linkage
When saved, Voxe automatically links the integration to your AI workflow and creates the required credentials in the underlying automation layer. You do not need to configure anything in the workflow manually — the MCP tools your server exposes become available to the AI agent immediately.
How Credential Security Works
Every sensitive value you enter during MCP Client setup is encrypted at rest using AES-256-GCM before being stored. After the initial save, sensitive fields are masked in the UI — they display as __REDACTED__ rather than the actual value. This is intentional and permanent.
What this means in practice:
- You cannot retrieve a saved credential value from the Voxe dashboard after saving — by design.
- If you want to update a credential, enter a new value. Masked fields left unchanged retain the existing stored secret.
- API responses never return plaintext secret values, even for authenticated requests.
- Deleting an MCP Client integration removes both the integration record and the linked credential from the automation layer — no orphaned secrets remain.
This design means your internal system credentials never exist in plaintext in any Voxe-accessible surface after the initial setup transaction.
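The update rule — masked fields keep the stored secret, new values replace it — can be sketched as follows. This is an illustration of the behavior described above, not Voxe's actual storage code:

```python
# Sketch of the masked-credential update rule: a field submitted
# back as the mask retains the stored secret; any other value
# replaces it. Illustrative only.
MASK = "__REDACTED__"

def merge_credentials(stored: dict, submitted: dict) -> dict:
    merged = {}
    for key, value in submitted.items():
        # Masked fields left unchanged retain the existing secret.
        merged[key] = stored.get(key) if value == MASK else value
    return merged

stored = {"token": "s3cret"}
print(merge_credentials(stored, {"token": MASK}))        # secret retained
print(merge_credentials(stored, {"token": "new-token"}))  # secret rotated
```

The practical takeaway: to rotate a credential, type the new value over the mask and save; to leave it alone, save the form with the mask untouched.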
What This Unlocks That Standard Integrations Don't
The named integrations — HubSpot, Salesforce, Pipedrive, Shopify, WooCommerce — cover CRM records, pipeline data, and commerce transactions. They are well-defined, well-documented, and well-maintained. For the systems they cover, they are the right tool.
The MCP Client covers the systems those integrations don't. The distinction matters most in three scenarios.
Proprietary internal systems
Most businesses above a certain size have systems that were built in-house — customer management tools, internal dashboards, operational databases. These systems contain some of the most valuable context your AI could have: the actual account history, the actual operational constraints, the actual internal knowledge. None of it is in a standard integration. All of it is reachable via MCP.
Industry-specific platforms
Vertical software — practice management systems, field service platforms, logistics tools, financial planning software — rarely has an off-the-shelf AI connector. MCP means the integration work is a one-time server build, not a permanent dependency on Voxe releasing a specific connector.
Real-time operational data
Static knowledge bases answer questions about how your product works. They cannot answer questions about what is happening right now — live stock levels, current service status, real-time pricing, open ticket counts. An MCP server that queries live data closes that gap. The AI's answer reflects the state of your systems at the moment the customer asks, not the state of the documentation when someone last updated it.
The Practical Limit: What MCP Client Requires
The MCP Client integration is the most powerful integration Voxe offers. It is also the one that requires the most from the team deploying it.
You need to build and maintain the MCP server. That is an engineering task — typically straightforward for a team with API development experience, but not a no-code configuration. The complexity of the server scales with the complexity of what you are exposing: a simple lookup function is a morning's work; a multi-system orchestration layer is a larger project.
For teams with internal engineering capacity, this is the integration that removes the ceiling on what the AI can do. For teams without it, the named integrations and the knowledge base cover the large majority of support use cases — well-maintained documentation alone typically lets the AI handle 50–70% of support ticket volume, no real-time data access required.
MCP Client is the next layer, for when that isn't enough.
FAQ
What is the Model Context Protocol (MCP)?
MCP is an open standard introduced by Anthropic in 2024 that gives AI agents a consistent interface for calling tools and querying data from external systems. An MCP server exposes a set of callable functions with defined inputs and outputs. Any AI that supports MCP can call those functions without a bespoke integration for each one. It is the standardization layer that makes AI extensible to arbitrary external systems.
Do I need to build an MCP server myself?
Yes. Voxe's MCP Client connects to a server you host — it does not provide a hosted MCP server as part of the integration. You build the server, define the tools it exposes, and host it at a reachable endpoint. Anthropic and the MCP ecosystem provide SDKs for building servers in most common languages.
What authentication methods does Voxe support for MCP connections?
Bearer token, custom header, multiple custom headers, and OAuth2 (where available). All sensitive credential values are encrypted before storage and masked in the UI after the initial save.
Which Voxe plans include the MCP Client integration?
The MCP Client integration is available on Business and Enterprise plans, where expanded integration access is included. Starter and Team plans include the named integrations (CRM, calendar, API tools).
What happens to MCP credentials if I delete the integration?
Deleting an MCP Client integration removes both the integration record and the linked credential from the underlying automation layer. No orphaned credentials remain. If you re-add the integration later, you will need to re-enter the credentials.
Can I connect multiple MCP servers?
Yes, within the integration limits of your plan. Each MCP Client integration is a separate connection to a separate server with its own name, endpoint, and credentials. Multiple servers can be active simultaneously, and each one's tools are available to your AI agent independently.
How does the AI know when to call an MCP tool?
The tools exposed by your MCP server include descriptions that tell the AI what each tool does and when it is appropriate to use it. The AI uses those descriptions during conversations to decide whether a tool call is relevant to the current query. Well-written tool descriptions — clear, specific, with defined use cases — significantly improve how reliably the AI invokes the right tool at the right moment.
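To illustrate what "well-written" means in practice, compare a vague description with a specific one. Both examples are hypothetical — the descriptions live in your MCP server code, not in Voxe:

```python
# Illustrative contrast between a vague and a specific tool
# description. Hypothetical examples only.
vague = {
    "name": "lookup",
    "description": "Looks things up.",
}
specific = {
    "name": "get_order_status",
    "description": (
        "Return the current status and shipping ETA for one order. "
        "Use when a customer asks where their order is. "
        "Requires the order ID, e.g. 'ORD-10293'."
    ),
}
# The second gives the model the three things it needs to choose
# correctly: what the tool returns, when to call it, and what
# input it expects.
```

A model reading `vague` has no way to tell when the tool applies; a model reading `specific` can match it to the right customer queries reliably.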