As I’ve discussed in my previous articles, the Model Context Protocol (MCP) goes far beyond simple tool invocation. Its advanced capabilities enable sophisticated interactions between servers and LLM-driven clients, creating experiences that feel natural and responsive.
Note: The code examples in this article are simplified for clarity and meant to illustrate concepts. Production implementations would need additional error handling and security considerations.
MCP offers a rich set of features that make AI assistants work better with users. Instead of just running simple commands, these features let users and AI assistants collaborate naturally, much like two people in a real conversation.
Imagine this conversation:
User: Analyze my entire codebase and fix any security issues
AI: I'll scan your codebase for security vulnerabilities. This is a large task...
[Progress: 5%] Scanning 2,341 files...
[Progress: 15%] Found 3 potential SQL injection risks...
[Progress: 30%] Checking authentication endpoints...
User: Actually, just focus on the API folder for now
AI: Understood! Cancelling the full scan...
Switching to API folder only (127 files)
[Progress: 50%] Analyzing authentication logic...
[Progress: 100%] Analysis complete!
Found 2 issues that need attention:
1. Missing rate limiting on /api/login
2. API keys stored in plain text
Would you like me to fix these automatically?
This natural flow is possible because MCP provides progress tracking, cancellation, and much more. In this guide, we’ll explore these advanced features with simple, practical examples you can implement today.
What We’ll Cover in This Article
- Progress Tracking for Long-Running Operations
- Cancellation Support
- Connection Health with Ping
- Intelligent Autocompletions
- Structured Logging
- Pagination for Large Results
- Roots for File System Security
- Sampling for AI-Generated Content
- Resource Subscriptions
Progress Tracking for Long-Running Operations
Progress tracking keeps users informed during time-consuming tasks. Behind the scenes, your MCP server sends periodic progress notifications to the client.
This continuous feedback helps users understand what’s happening and how long they might need to wait. Here’s how it looks in practice:
Real-Life Example: Analyzing a Large Codebase
User: Analyze all TypeScript files in my project for code quality issues
AI: I'll analyze all TypeScript files in your project. This might take a few minutes...
[Progress: 10%] Found 127 TypeScript files to analyze
[Progress: 25%] Analyzing components directory (32 files)...
[Progress: 40%] Analyzing services directory (18 files)...
[Progress: 55%] Analyzing utils directory (24 files)...
[Progress: 70%] Running ESLint checks...
[Progress: 85%] Running type checking...
[Progress: 100%] Analysis complete!
AI: I've completed the analysis. Here's what I found:
- Total files analyzed: 127
- Critical issues: 3
- Warnings: 14
- Code suggestions: 42
Would you like me to show the details?
MCP API Reference
- Send progress updates: `notifications/progress`
- Required fields: `progressToken`, `progress`; optional: `total`, `message`
- Client provides: `progressToken` in the request’s `_meta` field
Simple Implementation
Here’s how to implement progress tracking in your MCP server:
// CLIENT SIDE - Request with progress tracking
const response = await client.request("tools/analyze", {
path: "/my/project",
_meta: { progressToken: "abc123" }
});
// Listen for progress updates
client.on("notifications/progress", (update) => {
console.log(`${update.progress}% - ${update.message}`);
// Update progress bar here
});
// SERVER SIDE - Send progress updates
async function analyzeFiles(path, progressToken) {
const files = await getFiles(path);
const results = [];
for (let i = 0; i < files.length; i++) {
results.push(await analyzeFile(files[i]));
// Send progress update
if (progressToken) {
await server.notify("notifications/progress", {
progressToken,
progress: i + 1,
total: files.length,
message: `Analyzing ${files[i]}`
});
}
}
return results;
}
When to Use Progress Tracking
Progress tracking is ideal for:
- File processing operations
- Data analysis tasks
- Build and compilation processes
- API data synchronization
- Large file uploads/downloads
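One practical caveat: sending a notification for every item can flood the client with messages. A simple way to throttle updates is to report only when progress crosses a percentage step. The helper below is a sketch of my own, not part of the MCP SDK:

```javascript
// Create a reporter that returns a percentage only when progress has
// advanced by at least `stepPercent` (or on the final item); otherwise
// it returns null and the caller skips the notification.
function makeProgressReporter(total, stepPercent = 5) {
  let lastReported = -stepPercent;
  return function (done) {
    const percent = Math.floor((done / total) * 100);
    if (percent >= lastReported + stepPercent || done === total) {
      lastReported = percent;
      return percent;
    }
    return null;
  };
}
```

In the file-analysis loop above, you would call the reporter with `i + 1` and only send `notifications/progress` when it returns a value: roughly 20 notifications for 2,341 files instead of one per file.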
Cancellation Support
Users need a way to stop long-running operations mid-flight. Here’s how cancellation works:
Real-Life Example: Stopping a Database Migration
User: Migrate all customer data from the old database format
AI: Starting database migration. This will process approximately 50,000 records...
[Progress: 5%] Migrated 2,500 records...
[Progress: 12%] Migrated 6,000 records...
User: Actually, cancel that - I need to check something first
AI: Understood. Cancelling the migration...
Migration cancelled at 6,000 records. The database is in a safe state.
Already migrated records have been rolled back.
MCP API Reference
- Receive cancellation: `notifications/cancelled`
- Fields: `requestId`, `reason` (optional)
- Server should: stop processing, clean up resources, optionally roll back changes
Simple Implementation
// CLIENT SIDE - Send cancellation
async function cancelOperation(requestId) {
await client.notify("notifications/cancelled", {
requestId,
reason: "User cancelled"
});
}
// Start operation and store request ID
const requestId = "migration-123";
const response = client.request("tools/migrate", {
data: myData,
_meta: { requestId }
});
// Cancel if needed
await cancelOperation(requestId);
// SERVER SIDE - Handle cancellation
const operations = new Map();
async function migrateData(data, requestId) {
operations.set(requestId, { cancelled: false });
for (const item of data) {
// Check if cancelled, cleaning up the tracking entry
if (operations.get(requestId)?.cancelled) {
operations.delete(requestId);
return { status: "cancelled" };
}
await processItem(item);
}
operations.delete(requestId);
return { status: "completed" };
}
// Listen for cancellations
server.on("notifications/cancelled", ({ requestId }) => {
const op = operations.get(requestId);
if (op) op.cancelled = true;
});
Connection Health with Ping
Ping lets the client verify that the MCP connection is still alive and responsive. This typically shows up as a simple status indicator:
Real-Life Example: MCP Connection Status
[Your AI Assistant Interface]
🟢 MCP Server Connected [Tools: 12 available]
User: Analyze my project structure
MCP API Reference
- Handle ping requests: `ping`
- Response: empty object `{}`
- Purpose: verify the server is alive and responsive
Simple Implementation
// SERVER SIDE - Handle ping requests
server.onRequest("ping", () => {
return {}; // Just return empty object
});
// CLIENT SIDE - Check connection health
async function checkConnection() {
try {
await client.request("ping");
updateStatus("🟢 Connected");
} catch (error) {
updateStatus("🔴 Disconnected");
}
}
function updateStatus(message) {
document.getElementById('status').textContent = message;
}
// Check every 30 seconds
setInterval(checkConnection, 30000);
Intelligent Autocompletions
Completions help users by suggesting valid options:
Real-Life Example: File Path Suggestions
User: Open the configuration file at /src/con
AI: Here are the matching files:
- /src/config.json
- /src/config.ts
- /src/constants.ts
- /src/controllers/
User: Open the configuration file at /src/config.json
AI: Opening /src/config.json...
MCP API Reference
- Handle completion requests: `completion/complete`
- Receive: `ref` (reference object), `argument` (name and value)
- Return: `completion` with a `values` array, `hasMore` boolean, `total` (optional)
Simple Implementation
// CLIENT SIDE - Request completions
async function getCompletions(partial) {
const result = await client.request("completion/complete", {
ref: { type: "ref/prompt", name: "open_file" },
argument: { name: "filePath", value: partial }
});
return result.completion.values;
}
// Show suggestions as user types
input.oninput = async (e) => {
const suggestions = await getCompletions(e.target.value);
showDropdown(suggestions);
};
// SERVER SIDE - Provide completions
server.onRequest("completion/complete", ({ argument }) => {
if (argument.name === "filePath") {
const files = getMatchingFiles(argument.value);
return { completion: { values: files } };
}
return { completion: { values: [] } };
});
function getMatchingFiles(partial) {
return allFiles
.filter(file => file.startsWith(partial))
.slice(0, 10);
}
Real-Life Example: Command Suggestions
User: Run the build
AI: Which build command would you like to run?
[opens a dropdown]
- build:dev - Build for development
- build:prod - Build for production
- build:test - Build and run tests
- build:docs - Build documentation
User: [selects build:prod]
AI: Running production build...
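Server-side, this command picker is just another completion handler. Here’s a sketch; the script names and descriptions are assumptions standing in for whatever your project’s package.json defines:

```javascript
// Hypothetical build scripts, as might be read from package.json
const buildScripts = {
  "build:dev": "Build for development",
  "build:prod": "Build for production",
  "build:test": "Build and run tests",
  "build:docs": "Build documentation"
};

// Return matching script names for the partial command the user typed
function completeCommand(partial) {
  const values = Object.keys(buildScripts)
    .filter(name => name.startsWith(partial));
  return { completion: { values, hasMore: false, total: values.length } };
}
```

A `completion/complete` handler would call this when `argument.name` is your command parameter, the same way the handler above matches on `filePath`.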
Structured Logging
Logging helps with debugging and monitoring:
Real-Life Example: Debugging API Integration
User: Debug why the payment API is failing
AI: I'll enable detailed logging and test the payment API...
[INFO] PaymentAPI: Starting test transaction
[DEBUG] PaymentAPI: Request headers: { "Content-Type": "application/json" }
[DEBUG] PaymentAPI: Request body: { amount: 99.99, currency: "USD" }
[ERROR] PaymentAPI: Request failed - Status 401
[ERROR] PaymentAPI: Response: { error: "Invalid API key" }
[INFO] PaymentAPI: Checking API key configuration...
[WARNING] PaymentAPI: API key format appears incorrect (missing prefix)
AI: I found the issue! The API key is missing the required "pk_live_" prefix.
The payment API is expecting keys in the format "pk_live_xxxxx" but yours starts with just "xxxxx".
MCP API Reference
- Send log messages: `notifications/message`
- Required fields: `level`, `logger`, `message`
- Optional fields: `data` (structured data)
- Handle log level changes: `logging/setLevel`
Simple Implementation
// CLIENT SIDE - Set log level and listen
await client.request("logging/setLevel", { level: "debug" });
client.on("notifications/message", ({ level, logger, message }) => {
console.log(`[${level}] ${logger}: ${message}`);
// Show in UI log console
});
// SERVER SIDE - Send log messages
function log(level, component, message, data) {
server.notify("notifications/message", {
level,
logger: component,
message,
data: sanitize(data)
});
}
function sanitize(data) {
if (!data) return;
const clean = { ...data };
['password', 'token', 'key'].forEach(key => {
if (clean[key]) clean[key] = "[REDACTED]";
});
return clean;
}
// Use in your code
async function processPayment(data) {
log("info", "Payment", "Processing payment");
try {
const result = await callAPI(data);
log("info", "Payment", "Payment successful");
return result;
} catch (error) {
log("error", "Payment", "Payment failed", { error });
throw error;
}
}
Pagination for Large Results
Pagination handles large datasets efficiently:
Real-Life Example: Browsing Search Results
User: Search for all TODO comments in the codebase
AI: Found 247 TODO comments across your codebase. Showing first 20:
1. src/api/auth.js:45 - TODO: Add rate limiting
2. src/api/users.js:89 - TODO: Implement pagination
3. src/components/Header.jsx:12 - TODO: Add dark mode toggle
... (17 more)
Click 'next' button to see more results...
User: [clicks next button]
AI: Showing results 21-40 of 247:
21. src/utils/validation.js:34 - TODO: Add email validation
22. src/utils/crypto.js:78 - TODO: Switch to better hashing
... (18 more)
MCP API Reference
- No specific protocol required - implement pagination in your tool responses
- Common pattern: use a `cursor` parameter for the pagination position
- Return: `nextCursor` for the next page, `hasMore` boolean, `total` count (optional)
Simple Implementation
// CLIENT SIDE - Request paginated results
let currentCursor = null;
async function searchTodos(query, cursor = null) {
const response = await client.request("tools/searchTodos", {
query,
cursor
});
currentCursor = response.nextCursor;
return response;
}
// Start new search
async function startSearch(query) {
currentCursor = null;
const results = await searchTodos(query);
showResults(results.items);
return results;
}
// Load more results
async function loadMore(query) {
if (!currentCursor) return;
const results = await searchTodos(query, currentCursor);
appendResults(results.items);
return results;
}
function showResults(items) {
console.log("Search results:");
items.forEach(item => {
console.log(`${item.number}. ${item.file}:${item.line} - ${item.text}`);
});
}
// SERVER SIDE - How to handle pagination
async function searchTodos(query, cursor) {
const pageSize = 20;
const allResults = await findAllTodos(query); // Get all TODOs matching the query
// Parse cursor to get current position
const start = cursor ? parseInt(cursor) : 0;
const end = start + pageSize;
// Get page of results
const pageResults = allResults.slice(start, end);
const hasMore = end < allResults.length;
return {
results: pageResults.map((todo, i) => ({
number: start + i + 1,
file: todo.file,
line: todo.line,
text: todo.text
})),
nextCursor: hasMore ? end.toString() : null,
total: allResults.length,
showing: `${start + 1}-${Math.min(end, allResults.length)}`
};
}
// Handle paginated requests
server.on("tools/searchTodos", async ({ query, cursor }) => {
const results = await searchTodos(query, cursor);
return {
items: results.results,
nextCursor: results.nextCursor,
summary: `Showing ${results.showing} of ${results.total} results`
};
});
Roots for File System Security
Roots let the client declare which directories your server is allowed to access:
Real-Life Example: Safe File Operations
User: Delete all temporary files in my system
AI: I can only access directories you've explicitly allowed. Currently, I have access to:
- /Users/you/project
- /Users/you/downloads
I cannot access system directories for safety reasons. Would you like me to clean temporary files in your project directory instead?
User: Yes, clean temporary files in the project directory
AI: Cleaning temporary files in /Users/you/project...
Deleted 15 files:
- .tmp/cache_2024.tmp
- build/.temp.js
- node_modules/.cache (directory)
MCP API Reference
- Query allowed directories: `roots/list`
- Returns: array of `roots`, each with a `uri` property
- Client capability: check `roots` in client capabilities
- Use for: validating file access before operations
Simple Implementation
// CLIENT SIDE - Provide roots to server
// (the MCP spec uses file:// URIs for roots; plain paths are used
// here for simplicity)
let allowedRoots = [
"/Users/you/project",
"/Users/you/documents/work"
];
// Handle server requests for allowed roots
client.onRequest("roots/list", () => {
return { roots: allowedRoots.map(dir => ({ uri: dir })) };
});
// Add new directory
function addAllowedDirectory(directory) {
if (!allowedRoots.includes(directory)) {
allowedRoots.push(directory);
console.log(`Added access to: ${directory}`);
}
}
// SERVER SIDE - Check file access safely
let allowedDirectories = [];
async function loadAllowedRoots(client) {
if (client.capabilities?.roots) {
const { roots } = await client.request("roots/list");
allowedDirectories = roots.map(r => r.uri);
console.log("Allowed directories:", allowedDirectories);
}
}
function isFileAllowed(filePath) {
return allowedDirectories.some(root =>
filePath.startsWith(root)
);
}
async function safeReadFile(filePath) {
if (!isFileAllowed(filePath)) {
throw new Error(`Access denied: ${filePath}`);
}
return fs.readFile(filePath, 'utf-8');
}
async function safeDeleteFile(filePath) {
if (!isFileAllowed(filePath)) {
throw new Error(`Access denied: ${filePath}`);
}
return fs.unlink(filePath);
}
// Usage
server.on("tools/cleanTemp", async () => {
// Resolve to an absolute path so the roots check can match
const tempDir = path.resolve('./tmp');
const tempFiles = await fs.readdir(tempDir);
let deleted = 0;
for (const file of tempFiles) {
if (file.endsWith('.tmp')) {
await safeDeleteFile(path.join(tempDir, file));
deleted++;
}
}
return { message: `Deleted ${deleted} temporary files` };
});
Sampling for AI-Generated Content
Sampling lets your server ask the AI for help:
Real-Life Example: Generating Commit Messages
User: Commit my changes
AI (Server): Let me check what files have changed...
Found changes in 3 files:
- Added user authentication to api/auth.js
- Fixed null pointer bug in utils/validation.js
- Updated styles in components/Button.css
AI (Server): I'll generate a commit message for these changes...
AI (Generated): feat: Add user authentication with validation fixes
- Implement JWT-based authentication in api/auth.js
- Fix null pointer exception in validation utils
- Update Button component styles for better consistency
User: Looks good, go ahead and commit
AI (Server): Changes committed successfully!
MCP API Reference
- Request AI generation: `sampling/createMessage`
- Required: `messages` array with role/content
- Optional: `maxTokens`, `temperature`, `stopSequences`
- Client capability: check `sampling` in client capabilities
Simple Implementation
// Ask AI to generate content
async function generateCommitMessage(changes) {
// Check if client supports sampling
if (!client.capabilities?.sampling) {
return "Updated files"; // Fallback
}
// Ask AI to generate commit message
const prompt = `Generate a commit message for these changes:
${changes.map(c => `- ${c}`).join('\n')}
Use conventional commit format (feat/fix/docs/etc).`;
const response = await client.request("sampling/createMessage", {
messages: [{
role: "user",
content: { type: "text", text: prompt }
}],
maxTokens: 100
});
return response.content.text;
}
// Usage example
server.on("tools/commit", async () => {
// Get changed files
const changes = await getGitChanges();
// Generate commit message
const message = await generateCommitMessage(changes);
// Show to user for approval
return {
message,
changes,
action: "Please review the commit message"
};
});
Real-Life Example: Generating Documentation
User: Generate documentation for the new API endpoint
AI (Server): I'll analyze the endpoint code and generate documentation...
AI (Generated): ## POST /api/users/invite
Send an invitation email to a new user.
// some documentation code parts
AI (Server): Documentation generated! Should I add this to your API docs?
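The request behind this flow mirrors the commit-message example: build a prompt from the endpoint’s source and send it via `sampling/createMessage`. A minimal sketch (the endpoint object’s shape is an assumption):

```javascript
// Build a sampling/createMessage payload asking the client's LLM to
// document an endpoint. The endpoint fields here are hypothetical.
function buildDocRequest(endpoint) {
  const prompt =
    "Write Markdown documentation for this API endpoint:\n" +
    `${endpoint.method} ${endpoint.path}\n\n` +
    endpoint.source;
  return {
    messages: [{
      role: "user",
      content: { type: "text", text: prompt }
    }],
    maxTokens: 400
  };
}
```

The server would then send it with `client.request("sampling/createMessage", buildDocRequest(endpoint))` and show the generated text to the user for approval, just like the commit flow.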
Resource Subscriptions
Subscriptions notify clients when resources change:
Real-Life Example: Live Config Updates
User: Watch the config file for changes
AI: I'm now watching config.json for changes. I'll notify you when it updates.
[Later, when file changes...]
AI: config.json has been updated:
- Changed: API_URL from "staging.api.com" to "production.api.com"
- Added: NEW_FEATURE_FLAG = true
- Removed: DEBUG_MODE setting
Would you like me to restart the server to apply these changes?
MCP API Reference
- Handle subscriptions: `resources/subscribe` and `resources/unsubscribe`
- Send updates: `notifications/resources/updated`
- Update format: `updates` array with `uri`, `type`, `timestamp`
- Server capability: declare `resources: { subscribe: true }`
Simple Implementation
// CLIENT SIDE - Subscribe to resource changes
async function watchResource(path) {
await client.request("resources/subscribe", { uri: path });
console.log(`Watching ${path}`);
}
// Listen for resource updates
client.on("notifications/resources/updated", ({ updates }) => {
updates.forEach(update => {
console.log(`${update.uri} changed`);
refreshContent(update.uri);
});
});
// Unsubscribe when done
async function stopWatching(path) {
await client.request("resources/unsubscribe", { uri: path });
}
// SERVER SIDE - Handle subscriptions
const watchers = new Map();
server.onRequest("resources/subscribe", ({ uri }) => {
const watcher = watchFile(uri, () => {
// Send update notification
server.notify("notifications/resources/updated", {
updates: [{
uri,
type: "changed",
timestamp: new Date().toISOString()
}]
});
});
watchers.set(uri, watcher);
return { success: true };
});
server.onRequest("resources/unsubscribe", ({ uri }) => {
const watcher = watchers.get(uri);
if (watcher) {
watcher.close();
watchers.delete(uri);
}
return { success: true };
});
Bringing It All Together
Here’s a complete example that combines multiple features:
Real-Life Scenario: Smart Build System
User: Build and deploy my application to production
AI: I'll build and deploy your application. Let me check a few things first...
✓ Git status: clean (no uncommitted changes)
✓ Current branch: main
✓ Last deployment: 3 days ago (v2.0.9)
This deployment will include:
- 12 new commits
- 3 bug fixes
- 2 new features
- Database migration required
Shall I proceed with the deployment to production?
User: Yes, go ahead
AI: Starting deployment process...
[Progress: 5%] Creating backup of current production...
[Progress: 10%] Running unit tests...
[Progress: 15%] ✓ Component tests: 89/89 passed
[Progress: 20%] ✓ Integration tests: 38/38 passed
[Progress: 25%] All tests passed! (127/127)
[Progress: 30%] Building production bundle...
[Progress: 35%] Minifying JavaScript (2.3MB → 487KB)
[Progress: 40%] Optimizing images...
[Progress: 45%] Compressing assets with gzip...
[Progress: 50%] Build complete! Total size: 1.2MB
[Progress: 55%] Uploading to production server...
[Progress: 60%] Uploaded 1.2MB of 1.2MB...
User: Wait, cancel that - I forgot to update the version number in package.json!
AI: 🛑 Stopping deployment immediately...
[Progress: 60%] Upload paused...
✓ Deployment cancelled successfully
✓ Rolled back uploaded files from production server
✓ Production is still running v2.0.9 (unchanged)
✓ Build artifacts saved locally at: ./dist/build-20240115-143022/
No changes were made to production. Your app is safe!
User: Okay, I've updated the version to 2.1.0. Can you resume?
AI: I'll resume the deployment with the updated version. Good news:
- Tests already passed ✓ (no code changes)
- Build is still fresh ✓ (created 2 minutes ago)
- I just need to update the version and continue uploading
Resuming deployment...
[Progress: 30%] Updated version number in build...
[Progress: 60%] Uploading to production server...
[Progress: 65%] Uploaded 1.2MB...
[Progress: 70%] Verifying file integrity...
[Progress: 75%] Running database migrations...
[Progress: 80%] ✓ Migration completed: added 'last_login' column
[Progress: 85%] Switching traffic to new version...
[Progress: 90%] Running health checks...
[Progress: 95%] ✓ All endpoints responding
[Progress: 100%] Deployment complete!
🎉 Success! Your application v2.1.0 is now live
📊 Deployment Summary:
- URL: https://app.example.com
- Version: 2.0.9 → 2.1.0
- Duration: 3 minutes 27 seconds
- Downtime: 0 seconds (zero-downtime deployment)
- First paint time: 1.2s (improved from 1.8s)
The new features are now available to all users!
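Under the hood, a flow like this is the progress and cancellation patterns from earlier combined in a single loop. A minimal sketch; `notifyProgress` and `isCancelled` are injected callbacks standing in for the MCP wiring, not SDK functions:

```javascript
// Run a list of async steps, reporting progress after each one and
// honoring cancellation between steps.
async function runSteps(steps, { notifyProgress, isCancelled }) {
  for (let i = 0; i < steps.length; i++) {
    if (isCancelled()) {
      // Stop before the next step; the caller handles rollback
      return { status: "cancelled", completed: i };
    }
    await steps[i]();
    notifyProgress({
      progress: i + 1,
      total: steps.length,
      message: `Completed step ${i + 1} of ${steps.length}`
    });
  }
  return { status: "completed", completed: steps.length };
}
```

Wiring `isCancelled` to a `notifications/cancelled` listener and `notifyProgress` to `server.notify("notifications/progress", ...)` gives you the cancellable, progress-reporting deployment shown above.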
Conclusion
The Model Context Protocol’s advanced features transform simple tool servers into sophisticated, interactive systems. By implementing these capabilities, you create experiences that feel natural and responsive:
- Progress notifications keep users informed during long operations
- Cancellation support lets users stop operations gracefully
- Ping/health checks ensure reliable connections
- Completions provide helpful suggestions
- Structured logging aids debugging
- Pagination handles large datasets efficiently
- Roots ensure secure file system access
- Sampling leverages AI for content generation
- Subscriptions enable real-time updates
Start with the features most relevant to your use case, then gradually add others. The examples in this guide are intentionally simple to help you understand the concepts; you can build upon them to create more sophisticated implementations for your specific needs.
For more examples and reference implementations, check out the Model Context Protocol SDK and join the community to share your experiences building advanced MCP servers.
Try It Yourself
Want to explore these advanced MCP features hands-on? The server-everything MCP provides a comprehensive example implementation that demonstrates many of these capabilities in action. Use it with the MCP Inspector to experiment with progress tracking, cancellation, completions, and other advanced features covered in this guide.
This article was proofread and edited with AI assistance.