mAIndala

MCP — Frequently Asked Questions

Everything you need to know about the Model Context Protocol and how it powers AI tool integrations.

What is MCP (Model Context Protocol)?

MCP is an open standard developed by Anthropic that lets AI assistants (like Claude) securely connect to external tools, data sources, and services. Think of it as a universal plug — instead of every AI model needing its own custom integration for every tool, MCP provides one consistent interface that any compliant AI agent can use.

Why was MCP created?

Before MCP, connecting an AI to external data meant building one-off integrations for each combination of model and tool. This created an M×N problem — M models × N tools = enormous duplication. MCP collapses this to M+N: build a server once, and any MCP-compatible AI can use it. Anthropic open-sourced the spec in late 2024 to foster a shared ecosystem.

How does MCP work?

MCP follows a client–server model:
  • MCP Server — exposes capabilities (tools, resources, prompts) over a defined protocol.
  • MCP Client — the AI host application (e.g. Claude Desktop, an IDE plugin) that connects to servers.
  • Transport — communication can happen over stdio (a local subprocess), HTTP with Server-Sent Events (SSE), or the newer Streamable HTTP.

The AI discovers what tools a server offers, then calls them by name with structured arguments — much like a function call — and receives structured results it can reason over.
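Under the hood, MCP messages are JSON-RPC 2.0. Here is a minimal sketch of the discover-then-call exchange; the method names `tools/list` and `tools/call` come from the spec, while the `web_search` tool and its arguments are hypothetical:

```python
import json

# Step 1: the client asks the server which tools it offers (discovery).
list_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# Step 2: the client invokes a discovered tool by name with structured
# arguments. "web_search" and its argument shape are made-up examples.
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "web_search",
        "arguments": {"query": "model context protocol"},
    },
}

# Over the stdio transport, each message travels as one line of JSON.
wire = json.dumps(call_request)
print(wire)
```

The server's reply is likewise a JSON-RPC response whose `result` the model can reason over.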


What can MCP servers do?

MCP servers can expose three types of capabilities:
  • Tools — callable functions (e.g. search the web, run a SQL query, send an email).
  • Resources — readable data sources the AI can access (e.g. files, database rows, API responses).
  • Prompts — reusable prompt templates the AI or user can invoke.
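To make "tools" concrete: each entry a server returns from `tools/list` carries a name, a description, and a JSON Schema for its inputs. The field names (`name`, `description`, `inputSchema`) follow the spec; this particular `query_db` tool is a made-up example:

```python
# Hypothetical entry from a tools/list result; field names follow the MCP spec.
sql_tool = {
    "name": "query_db",                # the AI invokes this via tools/call
    "description": "Run a read-only SQL query against the configured database.",
    "inputSchema": {                   # JSON Schema describing the arguments
        "type": "object",
        "properties": {
            "sql": {"type": "string", "description": "The SQL statement to run"},
        },
        "required": ["sql"],
    },
}

print(sql_tool["name"])
```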

Which AI models and apps support MCP?

MCP is supported by Claude (via Claude Desktop and the API), Cursor, Zed, Sourcegraph Cody, and a growing list of third-party AI applications. Because it is an open standard, any developer can add MCP client support to their application. The project publishes official SDKs for TypeScript, Python, Java, Kotlin, and C# on modelcontextprotocol.io.

What is the difference between MCP and function calling / tool use?

Function calling (as seen in the OpenAI or Anthropic APIs) is a model-level feature — you define tools inline in your API request and the model decides when to call them. MCP is a transport and discovery layer on top of that concept. With MCP, tools live on separate servers that can be reused across many models and applications, versioned independently, and discovered dynamically. MCP servers typically surface their tools to the AI via the same underlying mechanism as function calling, but the ecosystem and reusability are far broader.
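The two layers map onto each other almost mechanically: a client can translate an MCP tool definition into the inline tool format of a model API. A hedged sketch — the MCP-side field names follow the spec, the target shape resembles the Anthropic API's `name`/`description`/`input_schema` tool format, and the tool itself is hypothetical:

```python
def mcp_tool_to_api_tool(mcp_tool: dict) -> dict:
    """Map an MCP tool definition to an inline function-calling tool spec."""
    return {
        "name": mcp_tool["name"],
        "description": mcp_tool["description"],
        "input_schema": mcp_tool["inputSchema"],  # same JSON Schema, new key
    }

# Hypothetical tool as it might arrive from an MCP server's tools/list.
mcp_tool = {
    "name": "send_email",
    "description": "Send an email on the user's behalf.",
    "inputSchema": {"type": "object", "properties": {"to": {"type": "string"}}},
}

api_tool = mcp_tool_to_api_tool(mcp_tool)
print(api_tool["name"])
```

This is why MCP clients can plug server-provided tools into the model's existing tool-use machinery without custom glue per tool.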

Is MCP secure? Can I trust MCP servers?

Security is the responsibility of both the server operator and the user:
  • MCP servers run with the permissions you grant them — a file-system server only accesses paths you configure.
  • Always review what a server claims to do before connecting your AI agent to it.
  • Prefer servers from known, reputable providers or open-source projects you can audit.
  • The MCP spec includes an authorization framework (OAuth 2.1) for servers that need user-delegated access.

On mAIndala, community ratings and reviews help surface trustworthy servers. Look for high-rated, well-reviewed listings before connecting.


How do I run an MCP server locally?

Most MCP servers are distributed as npm packages or Python packages. A typical setup looks like:
# Run directly with npx (downloads the package on demand)
npx -y @modelcontextprotocol/server-filesystem /path/to/dir

# Or add to the Claude Desktop config file (on macOS:
# ~/Library/Application Support/Claude/claude_desktop_config.json)
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/you/projects"]
    }
  }
}

After restarting Claude Desktop, the new tools appear in the conversation automatically.


What transports does MCP support?

MCP currently supports three transports:
  • stdio — the server is a local subprocess; the client communicates over stdin/stdout. Best for local tools.
  • HTTP + SSE — the server runs as an HTTP service using Server-Sent Events for streaming. Best for remote or shared servers.
  • Streamable HTTP — a newer variant that works over plain HTTP POST/GET (with optional SSE streaming); it supersedes the original HTTP + SSE transport and can run stateless.
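For the stdio transport, each JSON-RPC message travels as one newline-delimited line over the child process's stdin/stdout. A minimal sketch of that framing, where a trivial echoing child process stands in for a real MCP server:

```python
import json
import subprocess
import sys

# Stand-in "server": echoes each line back. A real server would parse the
# JSON-RPC message, dispatch on "method", and write back a result.
child = subprocess.Popen(
    [sys.executable, "-c",
     "import sys\n"
     "for line in sys.stdin:\n"
     "    sys.stdout.write(line)\n"
     "    sys.stdout.flush()\n"],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
)

request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}
child.stdin.write(json.dumps(request) + "\n")   # one message per line
child.stdin.flush()
response_line = child.stdout.readline()          # read one message back
child.stdin.close()
child.wait()

echoed = json.loads(response_line)
print(echoed["method"])
```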

How do I build my own MCP server?

The quickest way is to use an official SDK:
# TypeScript
npm install @modelcontextprotocol/sdk

# Python
pip install mcp

Define your tools with a name, description, and JSON schema for their inputs. The SDK handles the protocol framing. Official quickstart guides are on modelcontextprotocol.io. Once your server is live, you can submit it to mAIndala so the community can discover it.
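Conceptually, the SDKs boil down to registering named handlers with their schemas and dispatching incoming `tools/call` requests to them. A toy stdlib-only sketch of that pattern (the real SDKs, e.g. the Python `mcp` package, provide decorators like this and also handle the JSON-RPC framing; the `add` tool is a made-up example):

```python
# Minimal registry mimicking what an MCP SDK does under the hood.
TOOLS = {}

def tool(name, description, input_schema):
    """Register a function as a callable tool along with its JSON Schema."""
    def register(fn):
        TOOLS[name] = {"description": description,
                       "inputSchema": input_schema,
                       "handler": fn}
        return fn
    return register

@tool("add", "Add two integers.",
      {"type": "object",
       "properties": {"a": {"type": "integer"}, "b": {"type": "integer"}},
       "required": ["a", "b"]})
def add(a: int, b: int) -> int:
    return a + b

def handle_tools_call(name: str, arguments: dict):
    """Dispatch a tools/call request to the registered handler."""
    return TOOLS[name]["handler"](**arguments)

result = handle_tools_call("add", {"a": 2, "b": 3})
print(result)  # 5
```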


What is the difference between an MCP server and an AI agent?

An MCP server is a passive capability provider — it waits for the AI to call its tools and returns results. An AI agent is the active decision-maker — it connects to MCP servers, chooses which tools to call, and takes actions to achieve a goal. The agent is typically the LLM plus an orchestration layer (e.g. Claude Desktop, LangChain, or a custom harness).
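That division of labor can be sketched as a loop: the agent (model plus orchestration) decides, while the server merely executes whatever it is asked. A toy illustration with a scripted "model" standing in for the LLM (every name here is hypothetical):

```python
def server_call_tool(name, arguments):
    """The MCP server side: passive, just runs the requested tool."""
    tools = {"lookup": lambda topic: f"notes about {topic}"}
    return tools[name](**arguments)

def scripted_model(observations):
    """Stand-in for the LLM: decide the next action from what it has seen."""
    if not observations:
        return {"action": "call_tool", "name": "lookup",
                "arguments": {"topic": "MCP"}}
    return {"action": "finish", "answer": observations[-1]}

# The agent loop: orchestrate model decisions and tool calls until done.
observations = []
while True:
    decision = scripted_model(observations)
    if decision["action"] == "finish":
        answer = decision["answer"]
        break
    observations.append(
        server_call_tool(decision["name"], decision["arguments"]))

print(answer)  # notes about MCP
```

The server never initiates anything; all the goal-directed behavior lives in the agent loop.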

Are MCP servers the same as plugins or GPT actions?

They serve a similar purpose — extending an AI with external capabilities — but are architecturally different. ChatGPT plugins and GPT Actions are tied to OpenAI's ecosystem. MCP is a vendor-neutral open standard; any AI application can implement an MCP client. MCP also offers richer primitives (resources, prompts) beyond simple tool calls.

Where can I find more MCP servers and resources?

The official documentation, spec, and example servers live at modelcontextprotocol.io. For ready-to-use servers, browse our catalog of 1,300+ community-rated MCP servers.

Browse the Catalog