Your First MCP Server in Python: The 3 Primitives Every Tutorial Ignores
For weeks I was manually connecting APIs to my AI agents. Each integration was a project within the project: authentication, error handling, different formats, hardcoded prompts that broke with every change.
Then I actually read the MCP spec — not the Twitter summaries — and realized I was building exactly what MCP already solves.
Here’s what nobody explains properly: MCP has three primitives, not one. Most tutorials only cover tools. But if you don’t understand resources and prompts, you’re using a third of the protocol.
Let’s fix that today.
Quick context (no fluff)
MCP — Model Context Protocol — solves the N×M integration problem. Instead of every model learning to talk to every service differently, MCP defines a universal protocol. Anthropic launched it in November 2024, and by 2026 OpenAI, Google, and the Linux Foundation had all adopted it.
The architecture is simple:
- Host: the AI application (Claude Desktop, your custom agent)
- Client: the connector living inside the host
- Server: your code that exposes data and functionality
Transport: Stdio for local servers (faster, system access), HTTP with SSE for remote (OAuth, cloud hosting).
For this tutorial we’re using Stdio — the most direct way to start.
The 3 Primitives: Tools, Resources, and Prompts
Think of them like this:
- Tool: executes an action. Invoked by the AI model.
- Resource: exposes read-only data. Invoked by the host / user.
- Prompt: reusable instruction templates. Invoked by the user.
Let’s write some code.
Initial Setup
Create server.py:
Primitive 1: Tool (action)
A tool is what the model can execute. Search, create, update, delete. It has typed parameters and the model decides when to call it.
Primitive 2: Resource (read-only data)
A resource exposes data the host can read and pass as context. Key difference from a tool: resources don’t execute actions, they just return information.
Primitive 3: Prompt (reusable templates)
This is where most people miss the mark. A prompt in MCP isn’t your model’s system prompt — it’s an instruction template the user can invoke directly from the host. Think slash commands (/analyze-order) that auto-fill context.
Starting the server
Connect it to Claude Desktop via claude_desktop_config.json:
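A minimal sketch of the entry, assuming `python` is on the PATH Claude Desktop sees; the server name and the absolute path are placeholders for your own:

```json
{
  "mcpServers": {
    "orders-demo": {
      "command": "python",
      "args": ["/absolute/path/to/server.py"]
    }
  }
}
```

Restart Claude Desktop after saving the file — it launches the server process itself over stdio, so nothing needs to be running beforehand.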
Security — don’t skip this
The MCP ecosystem already has 270+ servers in production. With that growth come real risks: prompt injection and malicious or compromised tools.
Minimum rules before any MCP server goes to production:
- Minimal permissions: your server only accesses what it needs. Nothing more.
- Validate inputs: never trust tool arguments without sanitizing them.
- Sandboxing: if you execute external code, isolate it. Always.
- Never expose secrets in resources: resources are readable by the host. Treat anything in a resource as visible to the host and, potentially, to the model.
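As one concrete example of the "validate inputs" rule, a hypothetical guard that whitelists an order-ID pattern before the argument ever reaches your backend (the ID format is assumed for this example):

```python
import re

# Assumed ID shape: one uppercase letter, a dash, 1-6 digits
ORDER_ID_RE = re.compile(r"[A-Z]-\d{1,6}")

def validate_order_id(order_id: str) -> str:
    """Reject any argument that doesn't match the expected shape."""
    if not ORDER_ID_RE.fullmatch(order_id):
        raise ValueError(f"Rejected suspicious order_id: {order_id!r}")
    return order_id
```

Call this first inside every tool that takes an ID. A model-supplied argument is untrusted input, exactly like a form field on the public internet.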
What you just learned
You have a working MCP server with all three primitives:
→ Tool for the model to execute actions
→ Resource to expose context data
→ Prompt for reusable instruction templates
Most tutorials you’ll find in 2026 only teach tools. Now you know why that’s leaving two-thirds of the protocol on the table.
Next real step: connect this to your actual use case — a Supabase database, your CRM, your ticket system — and start iterating. MCP isn’t theory. It’s infrastructure already running in production at companies like Block and Apollo.
Keep building.
