Building an MCP server is how you make your API or data source available to any AI agent in the world — Claude, GPT-4o, Cursor, your custom agent — without writing separate integrations for each. You write the server once. Any MCP-compatible client picks it up.
If you want the conceptual foundation first, read What Is MCP and Why It's the HTTP of the Agentic Web →. This post is the hands-on companion — we're building something real.
We build AI automation systems for clients across India, the UAE, and Singapore. MCP is now foundational infrastructure in how we connect AI agents to clients' business tools. This tutorial covers what we've learned from production — including the gotchas that aren't in the official docs.
What You'll Build
A working MCP server that exposes:
- A tool — a function the AI agent can call to perform an action
- A resource — data the AI agent can read for context
- A prompt — a reusable template that tells the agent how to use your server
You'll then connect it to Claude Desktop or the MCP Inspector to verify it works.
Prerequisites
- Node.js 18+ (TypeScript path) or Python 3.10+ (Python path)
- Basic familiarity with async/await patterns
- Understanding of what MCP is (see our MCP explainer →)
- A terminal and a code editor
We'll build the same server in TypeScript first, then show the Python equivalent. Choose whichever matches your stack.
Part 1: TypeScript MCP Server
Step 1: Project Setup
mkdir my-mcp-server && cd my-mcp-server
npm init -y
npm install @modelcontextprotocol/sdk zod
npm install -D typescript @types/node
Create tsconfig.json in the project root:
{
"compilerOptions": {
"target": "ES2022",
"module": "Node16",
"moduleResolution": "Node16",
"outDir": "./build",
"rootDir": "./src",
"strict": true,
"esModuleInterop": true,
"skipLibCheck": true
},
"include": ["src/**/*"],
"exclude": ["node_modules"]
}
⚠️ Critical: Use "module": "Node16" and "moduleResolution": "Node16". The MCP SDK requires these settings. Using CommonJS or ESNext will produce import errors that aren't immediately obvious.
Update package.json to add the build script and ESM flag:
{
"type": "module",
"scripts": {
"build": "tsc",
"start": "node build/index.js"
}
}
Step 2: Create the Server
Create src/index.ts:
#!/usr/bin/env node
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";
// Initialise the MCP server
const server = new McpServer({
name: "my-business-api",
version: "1.0.0",
});
// ---- TOOL: an action the AI agent can execute ----
server.registerTool(
"get_product_info",
{
description: "Retrieve product information by product ID from the catalogue",
inputSchema: {
product_id: z.string().describe("The unique product identifier"),
},
},
async ({ product_id }) => {
// In production: replace with your actual API call
const mockProduct = {
id: product_id,
name: "Sample Product",
price: 2499,
stock: 42,
category: "Electronics",
};
return {
content: [
{
type: "text" as const,
text: JSON.stringify(mockProduct, null, 2),
},
],
};
}
);
// ---- RESOURCE: data the AI can read for context ----
server.registerResource(
"catalogue-summary",
"catalogue://summary",
{
name: "Product Catalogue Summary",
description: "Overview of available product categories and counts",
mimeType: "application/json",
},
async (uri) => ({
contents: [
{
uri: uri.href,
mimeType: "application/json",
text: JSON.stringify({
total_products: 1247,
categories: ["Electronics", "Clothing", "Home", "Beauty"],
last_updated: new Date().toISOString(),
}),
},
],
})
);
// ---- PROMPT: a reusable template for working with this server ----
server.registerPrompt(
"product-lookup",
{
description: "Template for looking up product details and checking stock",
argsSchema: {
product_id: z.string().describe("Product ID to look up"),
},
},
({ product_id }) => ({
messages: [
{
role: "user" as const,
content: {
type: "text" as const,
text: `Look up the product with ID ${product_id}. Report its name, current price, stock level, and category. If stock is below 10, flag it as low stock.`,
},
},
],
})
);
// Start the server with STDIO transport
async function main() {
const transport = new StdioServerTransport();
await server.connect(transport);
// ⚠️ NEVER use console.log() here — it writes to stdout and corrupts JSON-RPC messages
console.error("MCP server running on stdio");
}
main().catch(console.error);
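Before you swap the mock data for real API calls, one pattern is worth adding: MCP distinguishes protocol-level errors from tool-execution errors. Returning a result with isError: true lets the model see the failure and decide what to do next, instead of the whole call dying with an exception. A minimal helper sketch (the toolError name is our own convention, not an SDK export):

```typescript
// Sketch: a tool-level error result, matching the MCP tool-call result shape.
// `toolError` is a hypothetical helper name, not something the SDK exports.
function toolError(message: string) {
  return {
    content: [{ type: "text" as const, text: `Error: ${message}` }],
    isError: true,
  };
}

// Usage inside a tool handler:
//   if (!product) return toolError(`No product with ID ${product_id}`);
```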
Step 3: Build and Verify
npm run build
You should see a build/index.js file. Now test it:
npx @modelcontextprotocol/inspector node build/index.js
The MCP Inspector launches a browser UI (it prints the URL in your terminal, typically http://localhost:5173) where you can list tools, call them manually, and inspect requests/responses. This is the most important development tool in the MCP ecosystem — use it before connecting to any AI client.
[SCREENSHOT: MCP Inspector showing tool list with get_product_info and call interface]
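If you want to see what the Inspector is doing under the hood: an STDIO server speaks newline-delimited JSON-RPC 2.0 on stdin/stdout. A sketch of the first message a client sends (the protocolVersion value here is an assumption; match the spec revision your SDK targets):

```typescript
// Sketch: the raw JSON-RPC 2.0 initialize request a client sends over stdio.
// MCP frames messages as one JSON object per line.
// "2025-03-26" is an assumed protocol version; use the one your SDK targets.
function initializeRequest(id = 1): string {
  return JSON.stringify({
    jsonrpc: "2.0",
    id,
    method: "initialize",
    params: {
      protocolVersion: "2025-03-26",
      capabilities: {},
      clientInfo: { name: "smoke-test", version: "0.0.1" },
    },
  });
}
```

Echo that one line into `node build/index.js` and the server should reply on stdout with its name and capabilities.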
Step 4: Connect to Claude Desktop
Open your Claude Desktop config file:
- macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
- Windows: %APPDATA%\Claude\claude_desktop_config.json
Add your server:
{
"mcpServers": {
"my-business-api": {
"command": "node",
"args": ["/absolute/path/to/my-mcp-server/build/index.js"]
}
}
}
Restart Claude Desktop. You'll see a plug icon in the chat interface. Click it — your tool appears. Ask Claude "What's the product info for ID 12345?" and watch it call your server.
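If the tool doesn't appear, the usual culprits are malformed JSON in the config or a relative path to your build. A throwaway checker sketch for exactly those two failure modes (checkConfig is a hypothetical helper of ours, not part of any SDK):

```typescript
// Sketch: sanity-check a claude_desktop_config.json before restarting Claude.
// Only covers the two most common failures: invalid JSON and relative paths.
import { isAbsolute } from "node:path";

function checkConfig(raw: string): string[] {
  const problems: string[] = [];
  let cfg: any;
  try {
    cfg = JSON.parse(raw);
  } catch (e) {
    return [`Invalid JSON: ${(e as Error).message}`];
  }
  for (const [name, entry] of Object.entries(cfg.mcpServers ?? {}) as [string, any][]) {
    for (const arg of entry.args ?? []) {
      if (typeof arg === "string" && arg.endsWith(".js") && !isAbsolute(arg)) {
        problems.push(`${name}: "${arg}" is not an absolute path`);
      }
    }
  }
  return problems;
}
```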
Part 2: Python MCP Server (FastMCP)
For Python developers, the FastMCP library provides a cleaner, decorator-based API. The same server in Python:
Step 1: Setup
mkdir my-mcp-python && cd my-mcp-python
python -m venv venv
source venv/bin/activate # Windows: venv\Scripts\activate
pip install mcp
Step 2: Create the Server
Create server.py:
#!/usr/bin/env python3
from mcp.server.fastmcp import FastMCP
import json
# Initialise FastMCP server
mcp = FastMCP("my-business-api")
@mcp.tool()
def get_product_info(product_id: str) -> str:
"""
Retrieve product information by product ID from the catalogue.
Returns JSON with id, name, price, stock, and category.
"""
# In production: replace with your actual API call
product = {
"id": product_id,
"name": "Sample Product",
"price": 2499,
"stock": 42,
"category": "Electronics"
}
return json.dumps(product, indent=2)
@mcp.resource("catalogue://summary")
def get_catalogue_summary() -> str:
"""Product catalogue overview with categories and counts."""
summary = {
"total_products": 1247,
"categories": ["Electronics", "Clothing", "Home", "Beauty"],
}
return json.dumps(summary)
if __name__ == "__main__":
mcp.run() # Uses STDIO transport by default
Step 3: Run and Test
# Test with MCP Inspector
npx @modelcontextprotocol/inspector python server.py
# Or run directly
python server.py
The FastMCP decorator approach involves significantly less boilerplate than the TypeScript SDK's explicit registration calls. For rapid iteration and Python stacks, FastMCP is the right default.
Choosing Your Transport: STDIO vs Streamable HTTP
This is the decision most tutorials skip over, and it matters for production deployment.
STDIO transport — the default in all tutorials:
- The MCP client spawns your server as a subprocess
- Communication happens through stdin/stdout pipes
- Fast, zero network overhead
- Best for: Local development, CLI tools, desktop AI integrations
- Not for: Remote servers, APIs hosted in the cloud, multi-client access
Streamable HTTP transport — introduced in the March 2025 spec update:
- Your server runs as an HTTP service
- Clients communicate via POST requests
- Server can stream responses using Server-Sent Events
- Best for: Production APIs, cloud deployment, multi-user scenarios, remote tools
- How to add it (TypeScript; requires npm install express):
import { StreamableHTTPServerTransport } from "@modelcontextprotocol/sdk/server/streamableHttp.js";
import express from "express";
const app = express();
app.use(express.json()); // required: handleRequest expects a parsed JSON body
// sessionIdGenerator: undefined runs the transport in stateless mode
const transport = new StreamableHTTPServerTransport({ sessionIdGenerator: undefined });
await server.connect(transport);
app.post("/mcp", async (req, res) => {
await transport.handleRequest(req, res, req.body);
});
app.listen(3000);
For client-facing production deployments — Shopify AI integrations, WhatsApp agents, internal tools for clients — we use Streamable HTTP, deployed on Vercel or AWS Lambda. STDIO stays local.
Common Errors and How to Fix Them
Error: JSON parse errors, malformed responses, server crashes on connect
Cause: You used console.log() in an STDIO server. This writes to stdout, which is the same channel MCP uses for JSON-RPC messages. Every console.log() corrupts the protocol stream.
Fix: Replace every console.log() with console.error() in STDIO servers. stderr is safe.
// ❌ Breaks STDIO transport
console.log("Tool called:", tool_name);
// ✅ Safe
console.error("Tool called:", tool_name);
Error: ERR_REQUIRE_ESM or import path resolution failures
Cause: Incorrect TypeScript module settings. The MCP SDK is ESM-only.
Fix: Ensure tsconfig.json has "module": "Node16" and "moduleResolution": "Node16". Ensure package.json has "type": "module".
Error: Tool not appearing in Claude Desktop
Cause: Claude Desktop config uses a relative path, or the JSON is malformed, or the server crashes on startup.
Fix: Always use absolute paths in claude_desktop_config.json. Test with MCP Inspector first — if it fails there, it'll fail in Claude. Check Claude Desktop logs at ~/Library/Logs/Claude/.
Error: Tool descriptions confusing the AI — wrong tool called, parameters misused
Cause: Vague tool descriptions or parameter names. The AI model chooses tools based on natural language descriptions. Ambiguous descriptions produce wrong choices.
Fix: Write descriptions as if explaining to a capable but literal colleague. Be specific about what the tool does, what parameters mean, and what it returns. This is one of the most impactful improvements you can make — better descriptions mean better tool selection accuracy.
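To make that concrete, here is the same tool described two ways. Only the wording differs, but the description is the main signal the model has when choosing a tool (both objects are illustrative, not SDK types):

```typescript
// Sketch: a vague description versus a specific one for the same tool.
// The model selects tools from natural language, so wording is accuracy.
const vague = {
  name: "get_data",
  description: "Gets data", // which data? from where? the model has to guess
};

const specific = {
  name: "get_product_info",
  description:
    "Retrieve a single product from the catalogue by its unique product ID. " +
    "Returns JSON with id, name, price, stock count, and category. " +
    "Use for questions about one known product, not for catalogue searches.",
};
```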
Deploying to Production
For a production remote MCP server, the stack we use:
- Streamable HTTP transport — handles multiple concurrent clients
- Vercel or AWS Lambda — serverless deployment keeps costs low
- Environment variables for API credentials — never hardcode secrets in MCP servers (prompt injection can expose them)
- Rate limiting on tool endpoints — MCP agents can call tools in tight loops
- Output logging — log all tool calls with timestamps, inputs, and outputs. This is your audit trail.
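The rate-limiting and logging items above can be combined into a single wrapper applied to every tool handler. A sketch (guardTool and the 30-calls-per-minute default are ours, purely illustrative):

```typescript
// Sketch: wrap a tool handler with a sliding-window call budget and an
// audit log. Numbers are illustrative, not recommendations.
type Handler = (args: Record<string, unknown>) => Promise<unknown>;

function guardTool(name: string, handler: Handler, maxCalls = 30, windowMs = 60_000): Handler {
  let calls: number[] = [];
  return async (args) => {
    const now = Date.now();
    calls = calls.filter((t) => now - t < windowMs); // drop calls outside the window
    if (calls.length >= maxCalls) {
      throw new Error(`Rate limit exceeded for tool "${name}"`);
    }
    calls.push(now);
    // Audit trail to stderr (stdout stays reserved for JSON-RPC in STDIO mode)
    process.stderr.write(JSON.stringify({ ts: now, tool: name, args }) + "\n");
    return handler(args);
  };
}
```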
For AI automation workflows we build for clients, the MCP server architecture is typically one server per domain — a product catalogue server, an inventory server, an order management server — rather than one monolithic server with every tool. This keeps tool sets small, descriptions focused, and the AI's context uncluttered.
For the security implications of deploying MCP servers, read our LLM security guide →.
What We Built and What's Next
You now have a working MCP server with all three primitives — tool, resource, and prompt — connected to Claude Desktop and testable via MCP Inspector. The pattern scales: swap the mock data for real API calls, add authentication, deploy with Streamable HTTP, and you have production infrastructure.
The next level: read A2A vs MCP — Google vs Anthropic on Agent Interoperability → to understand how MCP fits into multi-agent architectures where agents need to talk to each other, not just to tools.
If you're evaluating whether to build custom MCP server infrastructure for your product or business, we do this for clients →. We're happy to have an honest scoping conversation.
Written by Rishabh Sethia, Founder & CEO
Rishabh Sethia is the founder and CEO of Innovatrix Infotech, a Kolkata-based digital engineering agency. He leads a team that delivers web development, mobile apps, Shopify stores, and AI automation for startups and SMBs across India and beyond.