MCPs Part 1: Building a Hello World MCP Server


A beginner's guide to building your first Model Context Protocol (MCP) server with examples of resources, tools, and prompts - the three core capabilities that power LLM integrations.


When ChatGPT introduced custom GPTs with OpenAPI specifications, it was a significant step forward for LLM tool integrations. However, Anthropic’s Model Context Protocol (MCP) has changed the game entirely. While OpenAPI focuses primarily on defining API endpoints and their parameters, MCP provides a richer, more comprehensive framework for LLM interactions that includes not just tools but also resources and prompts. This standardized protocol makes it easier to build consistent, powerful integrations without being tied to a specific model or platform, opening up new possibilities for AI-powered applications.

The Model Context Protocol (MCP) is becoming the standard way for Large Language Models to interact with external data and functionality. In this tutorial, I’ll show you how to build a simple MCP server that demonstrates the three core capabilities that make MCP powerful.

What is MCP?

MCP is a standardized protocol that allows LLMs like Claude to interface with the outside world in a secure, structured way. It creates a common language for LLMs to access data and execute actions across different applications and services.

MCP has three distinct capabilities:

  1. Resources: Static or dynamic content that LLMs can read (files, data, text)
  2. Tools: Functions that LLMs can execute to perform actions (calculations, API calls)
  3. Prompts: Templates with parameters that help LLMs generate specific outputs

Each capability serves a different purpose:

  • Resources provide information to the LLM
  • Tools enable the LLM to take actions
  • Prompts help the LLM generate structured outputs

Let’s build a simple server that implements all three capabilities.
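Under the hood, MCP speaks JSON-RPC 2.0, and a client discovers each capability with a "list" request. As a sketch (the method names come from the MCP specification; the id values are arbitrary), the discovery messages look like this:

```typescript
// MCP runs over JSON-RPC 2.0. A client discovers each capability with a
// "list" request; one method per capability, per the MCP specification.
const listRequests = ["resources/list", "tools/list", "prompts/list"].map(
  (method, i) => ({
    jsonrpc: "2.0" as const,
    id: i + 1, // any unique request id
    method,
    params: {},
  })
);

// e.g. the request a client sends to discover the server's tools
console.log(JSON.stringify(listRequests[1]));
```

The server answers each request with its resource, tool, or prompt definitions, which is exactly what the MCP Inspector's "List …" buttons do later in this tutorial.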

Setting Up Your Environment

Let’s start by creating a new project and installing the necessary dependencies:

mkdir hello-mcp
cd hello-mcp
npm init -y
npm install @modelcontextprotocol/sdk zod express
npm install -D typescript @types/node @types/express

Next, modify your package.json to support TypeScript:

{
  "name": "hello-mcp",
  "version": "1.0.0",
  "type": "module",
  "scripts": {
    "build": "tsc",
    "start": "node build/index.js"
  },
  "dependencies": {
    "@modelcontextprotocol/sdk": "^1.1.0",
    "zod": "^3.22.4",
    "express": "^4.18.2"
  },
  "devDependencies": {
    "@types/node": "^22.10.5",
    "@types/express": "^4.17.21",
    "typescript": "^5.7.2"
  }
}

Create a tsconfig.json file:

{
  "compilerOptions": {
    "target": "ES2022",
    "module": "Node16",
    "moduleResolution": "Node16",
    "outDir": "./build",
    "rootDir": "./src",
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true,
    "forceConsistentCasingInFileNames": true
  },
  "include": ["src/**/*"]
}

Setting Up the Basic Server

Create a src directory and add an index.ts file with the basic server setup:

import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Initialize server
const server = new McpServer({
  name: "hello-mcp",
  version: "1.0.0"
});

// We'll implement each capability below

// Start server using stdio transport
const transport = new StdioServerTransport();
await server.connect(transport);
// With stdio transport, stdout carries protocol messages, so log to stderr
console.error("hello-mcp server running on stdio");

Let’s now implement each of the three capabilities.

Capability 1: Resources - Providing Information

Resources are pieces of content that LLMs can access. They can be static text, files, or dynamic data that changes over time.

Implementing a Hello World Resource

// Define a simple hello world resource
server.resource(
  "hello-world",
  "hello://world",
  async (uri) => ({
    contents: [{
      uri: uri.href,
      text: "Hello, World! This is my first MCP resource."
    }]
  })
);

Note: The URI hello://world follows standard URI conventions where hello:// is the scheme (similar to http:// or file://) and world is the path. This naming matches our “Hello World” theme. In real applications, you would choose URIs that represent your specific resources, like weather://forecast/london or file://documents/report.pdf.
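These custom schemes parse with the standard WHATWG URL class (a global in modern Node, and the type of the uri argument the resource handler receives). A quick sketch of how hello://world breaks down:

```typescript
// Custom MCP resource URIs follow normal WHATWG URL parsing rules.
// URL is a global in modern Node; no imports needed.
const uri = new URL("hello://world");

console.log(uri.protocol); // "hello:" — the scheme
console.log(uri.host);     // "world" — the authority part
console.log(uri.href);     // "hello://world"
```

A URI like weather://forecast/london would parse the same way, with forecast as the host and /london as the pathname.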

When an LLM needs information, it can:

  1. Request a list of available resources
  2. Select a resource by URI
  3. Read the contents of that resource

Resources are perfect for:

  • Documentation
  • Reference data
  • File contents
  • Database records
  • API responses

Capability 2: Tools - Enabling Actions

Tools are functions that LLMs can execute to perform actions or compute results. Unlike resources which are read-only, tools allow LLMs to interact with the world.

Implementing a Calculator Tool

// Define a calculator tool
server.tool(
  "calculator",
  {
    operation: z.enum(["add", "subtract", "multiply", "divide"]),
    a: z.number(),
    b: z.number()
  },
  async ({ operation, a, b }) => {
    let result;
    
    switch (operation) {
      case "add":
        result = a + b;
        break;
      case "subtract":
        result = a - b;
        break;
      case "multiply":
        result = a * b;
        break;
      case "divide":
        if (b === 0) {
          throw new Error("Division by zero");
        }
        result = a / b;
        break;
      default:
        throw new Error(`Unknown operation: ${operation}`);
    }
    
    return {
      content: [{
        type: "text",
        text: `The result of ${a} ${operation} ${b} = ${result}`
      }]
    };
  }
);

When an LLM wants to perform an action, it can:

  1. Request a list of available tools
  2. Select a tool by name
  3. Provide the required parameters
  4. Receive the result of the tool execution
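On the wire, these steps become a tools/call request naming the tool and its arguments, answered with a content array. A sketch of the exchange for the calculator tool above (the id value is arbitrary):

```typescript
// What a client sends to invoke the calculator tool (MCP tools/call):
const request = {
  jsonrpc: "2.0" as const,
  id: 7,
  method: "tools/call",
  params: {
    name: "calculator",
    arguments: { operation: "add", a: 2, b: 3 },
  },
};

// What the server's handler returns, wrapped in a JSON-RPC result:
const response = {
  jsonrpc: "2.0" as const,
  id: 7,
  result: {
    content: [{ type: "text", text: "The result of 2 add 3 = 5" }],
  },
};
```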

Tools are ideal for:

  • Calculations
  • API calls
  • Database queries
  • File operations
  • System commands
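The zod schema in the calculator tool rejects bad input before your handler ever runs. As a dependency-free sketch (hand-rolled here purely to make the rules explicit; in the real server, zod does all of this for you):

```typescript
// Hand-rolled equivalent of the calculator tool's zod schema,
// shown only to make the validation rules explicit.
const OPERATIONS = ["add", "subtract", "multiply", "divide"] as const;
type Operation = (typeof OPERATIONS)[number];

interface CalculatorArgs { operation: Operation; a: number; b: number; }

function validateCalculatorArgs(input: unknown): CalculatorArgs {
  const obj = (input ?? {}) as Record<string, unknown>;
  const { operation, a, b } = obj;
  if (!OPERATIONS.includes(operation as Operation)) {
    throw new Error(`Unknown operation: ${operation}`);
  }
  if (typeof a !== "number" || typeof b !== "number") {
    throw new Error("Operands a and b must be numbers");
  }
  return { operation: operation as Operation, a, b };
}
```

Declaring the schema up front also lets the server advertise parameter types to clients, so the LLM knows what shape of arguments to send before calling the tool.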

Implementation Note: Different AI applications implement MCP capabilities to varying degrees. For example, Cursor (the AI-powered code editor) currently only implements the Tools capability, while Claude Desktop utilizes all three capabilities (Resources, Tools, and Prompts) for a more comprehensive integration.

Capability 3: Prompts - Generating Structured Output

Prompts are templates with parameters that help LLMs generate specific types of output in a consistent format. They’re like function calls but for text generation.

Implementing a Greeting Prompt

// Define a greeting prompt
server.prompt(
  "greeting",
  {
    name: z.string(),
    time_of_day: z.enum(["morning", "afternoon", "evening", "night"])
  },
  ({ name, time_of_day }) => ({
    messages: [{
      role: "user",
      content: {
        type: "text",
        text: `Hello ${name}! Good ${time_of_day}. How are you today?`
      }
    }]
  })
);

When an LLM needs to generate structured text, it can:

  1. Request a list of available prompts
  2. Select a prompt by ID
  3. Provide the required parameters
  4. Receive the generated text based on the template

Prompts are excellent for:

  • Email templates
  • Code generation patterns
  • Form responses
  • Consistent messaging
  • Multi-step generation
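At its core, a prompt template is parameter interpolation plus a fixed message shape. A plain-TypeScript sketch of what the greeting prompt above produces for a given set of arguments (the message structure mirrors the handler):

```typescript
// A plain-TypeScript rendering of the greeting prompt's template,
// to show the message shape a prompt handler returns.
type TimeOfDay = "morning" | "afternoon" | "evening" | "night";

function renderGreeting(name: string, timeOfDay: TimeOfDay) {
  return {
    messages: [{
      role: "user" as const,
      content: {
        type: "text" as const,
        text: `Hello ${name}! Good ${timeOfDay}. How are you today?`,
      },
    }],
  };
}

console.log(renderGreeting("Ada", "morning").messages[0].content.text);
// Hello Ada! Good morning. How are you today?
```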

Testing Different Capabilities with MCP Inspector

You can test your MCP server using the MCP Inspector tool:

npx @modelcontextprotocol/inspector node build/index.js

When you open the Inspector, you can test each capability:

For Resources:

  1. Go to the “Resources” tab
  2. Click “List Resources” to see your resources
  3. Click on a resource to view its content

For Tools:

  1. Go to the “Tools” tab
  2. Click “List Tools” to see your tools
  3. Click on a tool, fill in the parameters
  4. Click “Execute” to see the result

For Prompts:

  1. Go to the “Prompts” tab
  2. Click “List Prompts” to see your prompts
  3. Click on a prompt, fill in the parameters
  4. Click “Generate” to see the output

Alternative Transport: Using SSE

While the stdio transport is perfect for local development and desktop applications, you might want to expose your MCP server over HTTP using Server-Sent Events (SSE) transport:

import express from "express";
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { SSEServerTransport } from "@modelcontextprotocol/sdk/server/sse.js";
import { z } from "zod";

// Initialize server
const server = new McpServer({
  name: "hello-mcp",
  version: "1.0.0"
});

// Set up resources, tools, and prompts as shown in the previous examples
server.resource(
  "hello-world",
  "hello://world",
  async (uri) => ({
    contents: [{
      uri: uri.href,
      text: "Hello, World! This is my first MCP resource."
    }]
  })
);

// Setup Express app
const app = express();

// Track the active transport (single connection; see the note below
// about routing for multiple clients)
let transport: SSEServerTransport;

// Create an SSE endpoint that clients can connect to
app.get("/sse", async (req, res) => {
  transport = new SSEServerTransport("/messages", res);
  await server.connect(transport);
});

// Receive messages from clients and hand them to the transport.
// Note: no express.json() here — the transport reads the raw request body.
app.post("/messages", async (req, res) => {
  await transport.handlePostMessage(req, res);
});

// Start HTTP server
const port = 3000;
app.listen(port, () => {
  console.log(`MCP server running on http://localhost:${port}/sse`);
});

This example uses the Express dependencies installed during setup; if you skipped them earlier, add:

npm install express
npm install -D @types/express

When testing with the MCP Inspector, use:

npx @modelcontextprotocol/inspector http://localhost:3000/sse

The key differences when using SSE transport:

  1. Two separate endpoints are needed:
    • /sse for the Server-Sent Events stream that sends data to the client
    • /messages for receiving messages from the client
  2. The transport is created for each connection (req/res pair) rather than once for the whole app

🚧 For production apps, you’ll need to implement routing logic to ensure messages from clients are sent to the correct transport instance.

This approach allows your MCP server to be accessible over HTTP, making it usable by clients anywhere on the network, not just on the local machine.
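That routing logic can be as simple as a map from session id to transport. A dependency-free sketch (SessionTransport is a hypothetical stand-in for the SDK's transport type, used here so the example runs on its own; the real SSEServerTransport exposes a session identifier you would key on):

```typescript
// Sketch of per-connection transport routing for SSE. "SessionTransport"
// is a hypothetical stand-in for the SDK's transport type.
interface SessionTransport {
  sessionId: string;
  handleMessage(body: unknown): void;
}

const transports = new Map<string, SessionTransport>();

// Called when a client opens GET /sse
function register(t: SessionTransport): void {
  transports.set(t.sessionId, t);
}

// Called when a client POSTs to /messages?sessionId=...
function route(sessionId: string, body: unknown): void {
  const t = transports.get(sessionId);
  if (!t) throw new Error(`Unknown session: ${sessionId}`);
  t.handleMessage(body);
}
```

The POST handler would extract the session id from the request (typically a query parameter the transport put in its message URL) and look up the matching transport before forwarding the body.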

Comparing MCP Capabilities

Here’s a comparison of the three capabilities:

| Capability | Purpose             | Direction          | Example Use Cases                            |
|------------|---------------------|--------------------|----------------------------------------------|
| Resources  | Provide information | LLM reads data     | Documentation, file contents, reference data |
| Tools      | Execute actions     | LLM calls functions| Calculations, API calls, file operations     |
| Prompts    | Structure generation| LLM uses templates | Email templates, code patterns, responses    |

The power of MCP comes from combining these capabilities:

  • Use prompts to generate consistent, structured outputs
  • Use resources to provide context
  • Use tools to take action based on that context

Complete Server Example

Here’s a single, complete server example that implements all three MCP capabilities together using the modern TypeScript SDK API, with support for both stdio and SSE transports:

import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { SSEServerTransport } from "@modelcontextprotocol/sdk/server/sse.js";
import express from "express";
import { z } from "zod";

// Parse command line arguments to determine transport type
const useSSE = process.argv.includes("--sse");

// Initialize server
const server = new McpServer({
  name: "hello-mcp",
  version: "1.0.0"
});

// === RESOURCES ===

// Define a simple hello world resource
server.resource(
  "hello-world",
  "hello://world",
  async (uri) => ({
    contents: [{
      uri: uri.href,
      text: "Hello, World! This is my first MCP resource."
    }]
  })
);

// === TOOLS ===

// Define a calculator tool
server.tool(
  "calculator",
  "The calculator for basic arithmetic operations",
  {
    operation: z.enum(["add", "subtract", "multiply", "divide"], {
      description: "The operation to perform"
    }),
    a: z.number({
      description: "The first operand"
    }),
    b: z.number({
      description: "The second operand"
    })
  },
  async ({ operation, a, b }) => {
    let result;
    
    switch (operation) {
      case "add":
        result = a + b;
        break;
      case "subtract":
        result = a - b;
        break;
      case "multiply":
        result = a * b;
        break;
      case "divide":
        if (b === 0) {
          throw new Error("Division by zero");
        }
        result = a / b;
        break;
      default:
        throw new Error(`Unknown operation: ${operation}`);
    }
    
    return {
      content: [{
        type: "text",
        text: `The result of ${a} ${operation} ${b} = ${result}`
      }]
    };
  }
);

// === PROMPTS ===

// Define a greeting prompt
server.prompt(
  "greeting",
  {
    name: z.string(),
    time_of_day: z.enum(["morning", "afternoon", "evening", "night"])
  },
  ({ name, time_of_day }) => ({
    messages: [{
      role: "user",
      content: {
        type: "text",
        text: `Hello ${name}! Good ${time_of_day}. How are you today?`
      }
    }]
  })
);

// === START SERVER ===

// Choose transport based on command line arguments
if (useSSE) {
  const app = express();
  let transport: SSEServerTransport;
  
  app.get("/sse", async (req, res) => {
    transport = new SSEServerTransport("/messages", res);
    await server.connect(transport);
  });
  
  app.post("/messages", async (req, res) => {
    // Note: to support multiple simultaneous connections, these messages will
    // need to be routed to a specific matching transport. (This logic isn't
    // implemented here, for simplicity.)
    await transport.handlePostMessage(req, res);
  });
  
  app.listen(3000);
} else {
  // Use stdio transport for local development/desktop apps
  const transport = new StdioServerTransport();
  await server.connect(transport);
  // stdout is the protocol channel for stdio transport, so log to stderr
  console.error("Server running with stdio transport");
}

This example demonstrates all three MCP capabilities and supports two different transport types:

  1. stdio Transport: Used by default for local tools and CLI interfaces
  2. SSE Transport: Used when started with --sse flag for web-based access

To use this example, you’ll need these dependencies:

npm install @modelcontextprotocol/sdk zod express
npm install -D typescript @types/node @types/express

Testing with Different Transports

For stdio transport:

# Build the code
npm run build

# Run with stdio (default)
node build/index.js

# Test with MCP Inspector
npx @modelcontextprotocol/inspector node build/index.js

For SSE transport:

# Build the code
npm run build

# Run with SSE transport
node build/index.js --sse

# Test with MCP Inspector
npx @modelcontextprotocol/inspector http://localhost:3000/sse

The SSE implementation is simplified to support a single connection. For production applications with multiple clients, you would need a more sophisticated approach with session tracking.

Conclusion

You’ve now built an MCP server that implements all three core capabilities! This demonstrates how the Model Context Protocol provides a standardized way for LLMs to:

  1. Access information through resources
  2. Take actions through tools
  3. Generate structured outputs through prompts

MCP is still evolving, but it’s becoming an essential standard for building AI-powered applications. As LLMs become more capable, having a consistent protocol for them to interact with the world becomes increasingly important.

By understanding the distinct purposes of resources, tools, and prompts, you can design more effective integrations between LLMs and your applications.

This article was proofread and edited with AI assistance and relies on information from the Model Context Protocol documentation.
