Model Context Protocol (MCP) is an open standard that lets AI models call external tools in a structured, permission-aware way. DeepCurrent’s chat runtime uses MCP to execute intelligence queries, growth runs, and other operations on your behalf — and you can extend that runtime with your own tools.
## Two modes

- DeepCurrent Cloud MCP
- Local MCP (OSS)
## DeepCurrent Cloud MCP

DeepCurrent’s built-in MCP server is already connected to the chat runtime. You do not need any setup to use it.

When you chat with DeepCurrent, the AI (Gemini) calls built-in MCP tools automatically based on your message. The tools validate your entitlements and credit balance before executing.

### Built-in intelligence tools

| Tool | What it does |
|---|---|
| `resolve_intelligence_intent` | Resolve an ambiguous query into a specific package and slot set |
| `preview_quote_intelligence_package` | Preview results and get a credit quote in one step |
| `quote_intelligence_package` | Get a credit quote for a package (after preview) |
| `execute_intelligence_package` | Run a quoted intelligence package and return Tier 1 results |
| `expand_intelligence_package` | Unlock Tier 2 contact data or increase result limits |
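Under the hood, each of these tools is invoked via MCP's standard JSON-RPC `tools/call` message. A minimal sketch of what such a request looks like — note that the slot names inside `arguments` are hypothetical illustrations, not DeepCurrent's documented schema:

```python
# Illustrative JSON-RPC 2.0 "tools/call" request for a built-in tool.
# The tool name and its arguments travel under "params"; the "slots"
# field names below are hypothetical, not a documented schema.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "preview_quote_intelligence_package",
        "arguments": {
            "package_id": "vc-shortlist-v1",
            "slots": {"stage": "seed", "sector": "DeFi"},  # hypothetical slots
        },
    },
}
```

In normal use the chat runtime builds these messages for you; the shape is only relevant if you are writing your own MCP client or server.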
### Built-in growth tools

| Tool | What it does |
|---|---|
| `resolve_growth_outcome` | Parse a goal description into a structured `goal_plan` |
| `quote_growth_plan` | Get a credit cost estimate for a resolved plan |
| `run_growth_plan` | Execute an approved growth plan (deducts credits) |
| `get_growth_plan_status` | Poll progress and retrieve result handles for a running plan |
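Chained together, the four growth tools form a resolve → quote → run → poll sequence. A sketch of that sequence, assuming a generic `call_tool(name, arguments)` helper supplied by the MCP client; the payload field names (`quote_id`, `run_id`) are illustrative, not guaranteed to match the real schema:

```python
# Hypothetical driver for the four growth tools called in order.
# call_tool(name, arguments) is whatever your MCP client exposes.
def run_growth_sequence(goal_text, call_tool):
    plan = call_tool("resolve_growth_outcome", {"goal": goal_text})
    quote = call_tool("quote_growth_plan", {"goal_plan": plan})
    run = call_tool("run_growth_plan", {"quote_id": quote["quote_id"]})
    # Poll once; real code would loop until the run reports completion.
    return call_tool("get_growth_plan_status", {"run_id": run["run_id"]})
```

In chat, the AI drives this sequence itself; the sketch just makes the ordering explicit.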
All built-in tools carry the official badge and DeepCurrent publisher tag in tool responses.

## Local MCP (OSS)

DeepCurrent’s open-source chat API supports connecting a local MCP-compatible server to the chat runtime. Your tools run on your own infrastructure; DeepCurrent forwards your user auth headers for entitlement checks.

### Run your MCP server locally

Start your MCP server on a local port. It must expose a Streamable HTTP transport at a `/mcp` path.

```shell
# Example: start a local MCP server on port 8001
your-mcp-server --port 8001
```
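For reference, a client speaking Streamable HTTP sends JSON-RPC over POST to that `/mcp` path, beginning with an `initialize` request. A sketch of how such a request is assembled with the standard library (the protocol version string is an example — check the current MCP specification — and the request is only constructed here, not sent):

```python
import json
import urllib.request

def build_initialize_request(base_url="http://localhost:8001/mcp"):
    # JSON-RPC "initialize" handshake; protocolVersion is an example value.
    body = json.dumps({
        "jsonrpc": "2.0",
        "id": 1,
        "method": "initialize",
        "params": {
            "protocolVersion": "2025-03-26",
            "capabilities": {},
            "clientInfo": {"name": "example-client", "version": "0.1"},
        },
    }).encode()
    return urllib.request.Request(
        base_url,
        data=body,
        headers={
            "Content-Type": "application/json",
            # Streamable HTTP clients must accept both JSON and SSE replies.
            "Accept": "application/json, text/event-stream",
        },
        method="POST",
    )
```

You will rarely hand-roll this — MCP SDKs handle the transport — but it shows what the chat runtime does when it connects to your server.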
### Connect your server in DeepCurrent

In your DeepCurrent dashboard, go to Settings → Integrations and enter your local MCP server address (e.g. `http://localhost:8001/mcp`).

The chat runtime connects to this URL at the start of every chat request and discovers your available tools automatically.
### Chat with your tools
Your tools appear alongside the built-in DeepCurrent tools in the same chat session. The AI calls them based on your messages, following the same multi-step tool-calling flow.
Local MCP tools are executed on your infrastructure. DeepCurrent forwards your JWT or API key to the MCP server so your tools can enforce their own access controls.
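A local tool can use those forwarded headers however it likes. A minimal sketch of a bearer-token check, with the actual JWT or API-key verification left as a placeholder:

```python
# Sketch: enforce access control in a local tool using the forwarded
# Authorization header. Real code would verify the token, not just parse it.
def check_auth(headers):
    value = headers.get("Authorization", "")
    if not value.startswith("Bearer "):
        raise PermissionError("missing or malformed Authorization header")
    token = value[len("Bearer "):]
    if not token:
        raise PermissionError("empty token")
    return token  # placeholder: verify the JWT / API key here
```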
When you send a message, the DeepCurrent AI model (Gemini) reads the available tools from the connected MCP server and decides which ones to call based on your intent. Tool calls happen automatically — you do not need to name tools explicitly.
The flow for a typical intelligence request looks like this:
1. You send: "find me seed-stage DeFi investors"
2. The AI calls `preview_quote_intelligence_package` with `vc-shortlist-v1` and the extracted slots
3. The tool returns a preview and a credit quote
4. The AI presents the quote card; you confirm
5. The AI calls `execute_intelligence_package` with the quote token
6. Results appear in the chat
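The steps above can be sketched as a single quote-then-execute loop, assuming a generic `call_tool` helper and a `confirm` callback standing in for the user's approval; payload field names like `quote_token` are illustrative:

```python
# Sketch of the preview -> confirm -> execute flow described above.
# call_tool and confirm are supplied by the runtime; payloads are illustrative.
def run_intelligence_request(call_tool, confirm):
    quote = call_tool("preview_quote_intelligence_package", {
        "package_id": "vc-shortlist-v1",
        "slots": {"stage": "seed", "sector": "DeFi"},  # hypothetical slots
    })
    if not confirm(quote):   # user reviews the quote card
        return None          # nothing executed, no credits deducted
    return call_tool("execute_intelligence_package", {
        "quote_token": quote["quote_token"],
    })
```

The key property is that `execute_intelligence_package` only ever runs after the quote has been shown and confirmed.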
Growth runs follow the same pattern using the four growth tools in sequence.
## Supported intelligence packages via MCP
The following package IDs are valid inputs for all intelligence MCP tools:
- `vc-shortlist-v1`
- `builder-discovery-v1`
- `company-people-discovery-v1`
- `warm-intro-paths-v1`
- `kol-discovery-v1`
- `user-prospect-v1`
- `hackathon-builder-v1`
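If you build tooling around these packages, a simple guard can reject unknown IDs before a tool call is ever made. A sketch using the list above:

```python
# The valid package IDs from the list above, as a checkable set.
VALID_PACKAGES = {
    "vc-shortlist-v1",
    "builder-discovery-v1",
    "company-people-discovery-v1",
    "warm-intro-paths-v1",
    "kol-discovery-v1",
    "user-prospect-v1",
    "hackathon-builder-v1",
}

def validate_package_id(package_id):
    # Fail fast instead of spending a round trip on an invalid tool call.
    if package_id not in VALID_PACKAGES:
        raise ValueError(f"unknown package id: {package_id!r}")
    return package_id
```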
See Intelligence Packages for full details on slots, output fields, and credit costs for each package.
## Credit enforcement
MCP tools check your credit balance and subscription entitlement before executing any paid operation. If you do not have enough credits, the tool returns an error and no action is taken. You can top up your balance in Settings → Billing or by purchasing an add-on pack.
The quote-then-execute pattern applies to all MCP tool calls, including local ones that proxy to the DeepCurrent backend. You always see the cost before credits are deducted.
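The enforcement described above can be modeled as a small guard: execution requires a quote, and the cost is checked against the balance before anything runs. A sketch with illustrative field names (`credit_cost`, `quote_token`):

```python
# Sketch of the documented behavior: if credits are insufficient, the tool
# errors and no action is taken; otherwise execute and deduct the quoted cost.
def execute_with_quote(balance, quote, do_execute):
    cost = quote["credit_cost"]
    if balance < cost:
        raise RuntimeError(f"insufficient credits: need {cost}, have {balance}")
    result = do_execute(quote["quote_token"])
    return balance - cost, result
```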