Introduction
goose’s true power lies not just in its AI capabilities, but in its extensibility through the Model Context Protocol (MCP). MCP is an open standard that enables developers to build secure, two-way connections between AI systems and external data sources, transforming goose from a capable AI assistant into a customizable automation platform that can integrate with virtually any tool, service, or data source in your development workflow.
This comprehensive technical guide explores the architecture, protocols, and best practices for building robust MCP extensions for goose. Whether you’re integrating with existing services, exposing proprietary data sources, or creating entirely new AI-powered workflows, understanding MCP’s architecture is essential for advanced goose usage.
Understanding the Model Context Protocol (MCP)
What is MCP?
The Model Context Protocol (MCP) is a stateful session protocol built on JSON-RPC that follows a client-host-server architecture: each host can run multiple client instances, and the protocol focuses on context exchange and sampling coordination between clients and servers.
Think of MCP as the “USB-C” of AI integrations; a universal protocol that standardizes how AI systems connect to external tools and data sources. Before MCP, developers had to build custom connectors for each integration, leading to fragmented, unmaintainable code. MCP provides a universal, open standard for connecting AI systems with data sources, replacing fragmented integrations with a single protocol.
Why MCP Matters for goose
goose implements MCP as its extension system, which means:
- Interoperability: Extensions built for Goose work with other MCP-compatible clients (Claude Desktop, Cursor, Cline)
- Standardization: Clear, documented protocols for building integrations
- Security: Well-defined boundaries and capability negotiation
- Ecosystem: Access to a growing library of MCP servers
- Future-proof: As MCP evolves, Goose benefits from protocol improvements
MCP Architecture Overview
The architecture consists of three core components: hosts (the container and coordinator), clients (connecting to servers), and servers (exposing resources, tools, and prompts via MCP primitives).
┌─────────────────────────────────────┐
│         Goose Host Process          │
│ ┌─────────────────────────────────┐ │
│ │    Goose Application (Host)     │ │
│ │                                 │ │
│ │  ┌──────────┐   ┌──────────┐    │ │
│ │  │ Client 1 │   │ Client 2 │    │ │
│ │  └────┬─────┘   └────┬─────┘    │ │
│ └───────┼──────────────┼──────────┘ │
└─────────┼──────────────┼───────────┘
          │              │
          ▼              ▼
    ┌────────────┐  ┌──────────┐
    │  Server 1  │  │ Server 2 │
    │(Filesystem)│  │(Database)│
    └────────────┘  └──────────┘
Host (goose Application)
- Creates and manages multiple client instances
- Coordinates LLM interactions
- Presents unified interface to users
- Handles authentication and security boundaries
Clients
- Maintain 1:1 connection with specific servers
- Handle protocol communication
- Manage capability negotiation
- Route messages between host and server
Servers (Extensions)
- Expose tools, resources, and prompts
- Implement specific functionality (file access, API calls, database queries)
- Operate independently with focused responsibilities
- Handle authentication with external services
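To make these roles concrete, here is a toy sketch in plain Python (not the MCP SDK; all names are illustrative) of a host routing tool calls through per-extension clients, each bound 1:1 to a server:

```python
class ToyServer:
    """Stands in for an MCP server exposing a few tools."""
    def __init__(self, name, tools):
        self.name = name
        self.tools = tools  # tool name -> callable

    def call(self, tool, args):
        return self.tools[tool](**args)


class ToyClient:
    """Maintains a 1:1 connection with one specific server."""
    def __init__(self, server):
        self.server = server

    def call_tool(self, tool, args):
        return self.server.call(tool, args)


class ToyHost:
    """Creates and manages multiple client instances, one per extension."""
    def __init__(self):
        self.clients = {}

    def add_extension(self, name, server):
        self.clients[name] = ToyClient(server)

    def call(self, extension, tool, args):
        return self.clients[extension].call_tool(tool, args)


host = ToyHost()
host.add_extension("fs", ToyServer("filesystem", {"read": lambda path: f"contents of {path}"}))
host.add_extension("db", ToyServer("database", {"query": lambda sql: [("row",)]}))

print(host.call("fs", "read", {"path": "/tmp/notes.txt"}))  # contents of /tmp/notes.txt
```

The real protocol adds JSON-RPC framing, capability negotiation, and process isolation, but the routing topology is the same.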
MCP Core Concepts
1. Primitives
The Building Blocks
MCP defines three fundamental primitives that servers expose to clients:
Prompts
Prompts allow servers to provide structured messages and instructions for interacting with language models. Clients can discover available prompts, retrieve their contents, and provide arguments to customize them.
Use Cases:
- Pre-defined task templates
- Workflow orchestration
- Guided interactions
- Domain-specific instructions
Example
{
  "name": "code_review",
  "description": "Comprehensive code review prompt",
  "arguments": [
    {
      "name": "language",
      "description": "Programming language",
      "required": true
    },
    {
      "name": "focus_area",
      "description": "Specific area to review (security, performance, style)",
      "required": false
    }
  ]
}
Resources
Resources represent structured data or content that provides context to the AI model.
Characteristics
- URI-addressable (e.g., file:///path/to/file, postgres://db/table)
- Can be text, binary, or structured data
- Support subscription for real-time updates
- Provide metadata and MIME types
Use Cases
- File system access
- Database records
- API responses
- Configuration data
- Documentation
Example
{
  "uri": "file:///project/src/auth.ts",
  "name": "Authentication Module",
  "description": "User authentication logic",
  "mimeType": "text/x-typescript"
}
Tools
Tools are executable functions that allow the AI model to perform actions or retrieve information.
Characteristics
- Defined input/output schemas (JSON Schema)
- Can be marked as readOnly or destructive
- Return structured data or error messages
- Support streaming responses
Use Cases
- Execute commands
- Make API calls
- Query databases
- Modify files
- Run tests
Example
{
  "name": "execute_sql_query",
  "description": "Execute SQL query against database",
  "inputSchema": {
    "type": "object",
    "properties": {
      "query": {
        "type": "string",
        "description": "SQL query to execute"
      },
      "database": {
        "type": "string",
        "description": "Database name"
      }
    },
    "required": ["query"]
  }
}
2. Message Types
MCP uses JSON-RPC 2.0 for all client-server communication, defining three fundamental message types: requests (bidirectional messages expecting responses), responses (successful results or error messages), and notifications (one-way messages requiring no response).
Request/Response Pattern
// Request
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "read_file",
    "arguments": {
      "path": "/path/to/file.txt"
    }
  }
}

// Response
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "content": [
      {
        "type": "text",
        "text": "File contents here..."
      }
    ]
  }
}
Notification Pattern
{
  "jsonrpc": "2.0",
  "method": "notifications/resources/updated",
  "params": {
    "uri": "file:///project/package.json"
  }
}
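Since responses carry the id of the request they answer and notifications carry none, a client keeps a small correlation table. An illustrative sketch (not the SDK's implementation):

```python
import json

class MessageRouter:
    """Match JSON-RPC responses to their requests by id; notifications have no id."""
    def __init__(self):
        self._next_id = 0
        self._pending = {}  # id -> method that was called

    def make_request(self, method: str, params: dict) -> str:
        self._next_id += 1
        self._pending[self._next_id] = method
        return json.dumps({"jsonrpc": "2.0", "id": self._next_id,
                           "method": method, "params": params})

    def dispatch(self, raw: str):
        msg = json.loads(raw)
        if "id" not in msg:
            return ("notification", msg["method"])   # one-way message
        method = self._pending.pop(msg["id"])        # correlate by id
        if "error" in msg:
            return ("error", method)
        return ("response", method)

router = MessageRouter()
req = router.make_request("tools/call", {"name": "read_file"})
kind, method = router.dispatch('{"jsonrpc": "2.0", "id": 1, "result": {}}')
```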
3. Capability Negotiation
MCP uses a capability-based negotiation system in which clients and servers explicitly declare their supported features during initialization. Servers declare capabilities such as resource subscriptions, tool support, and prompt templates, while clients declare capabilities such as sampling support and notification handling.
Server Capability Declaration
{
  "capabilities": {
    "resources": {
      "subscribe": true,
      "listChanged": true
    },
    "tools": {
      "listChanged": true
    },
    "prompts": {
      "listChanged": true
    },
    "logging": {}
  }
}
Client Capability Declaration
{
  "capabilities": {
    "roots": {
      "listChanged": true
    },
    "sampling": {}
  }
}
Why Capability Negotiation Matters
- Enables graceful feature detection
- Allows protocol evolution without breaking changes
- Provides clear contract between client and server
- Enables clients to adapt behavior based on server capabilities
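Graceful feature detection in practice means checking the declared capability tree before relying on a feature. A sketch with an illustrative helper:

```python
def supports(capabilities: dict, *path: str) -> bool:
    """Return True if the declared capability tree contains the given path."""
    node = capabilities
    for key in path:
        if not isinstance(node, dict) or key not in node:
            return False
        node = node[key]
    # An empty dict (e.g. "logging": {}) still means the capability is declared
    return bool(node) or node == {}

server_caps = {
    "resources": {"subscribe": True, "listChanged": True},
    "tools": {"listChanged": True},
    "logging": {},
}

if supports(server_caps, "resources", "subscribe"):
    pass  # safe to send resources/subscribe requests
```

The key property: absence of a key means "unsupported", so a client written this way keeps working against older or minimal servers.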
4. Connection Lifecycle
Every MCP session follows a defined lifecycle:
┌─────────────┐
│ Initialize  │ ◄── Capability negotiation
└──────┬──────┘
       │
       ▼
┌─────────────┐
│   Active    │ ◄── Normal operation (requests/responses/notifications)
└──────┬──────┘
       │
       ▼
┌─────────────┐
│  Shutdown   │ ◄── Graceful termination
└─────────────┘
1. Initialization Phase
// Client sends initialize request
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "initialize",
  "params": {
    "protocolVersion": "2024-11-05",
    "capabilities": { /* client capabilities */ },
    "clientInfo": {
      "name": "goose",
      "version": "1.0.0"
    }
  }
}

// Server responds with capabilities
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "protocolVersion": "2024-11-05",
    "capabilities": { /* server capabilities */ },
    "serverInfo": {
      "name": "filesystem-server",
      "version": "1.0.0"
    }
  }
}

// Client sends initialized notification
{
  "jsonrpc": "2.0",
  "method": "notifications/initialized"
}
2. Active Phase
- Clients make tool calls, retrieve resources, fetch prompts
- Servers send notifications (resource updates, log messages)
- Both parties respect declared capabilities
3. Shutdown Phase
// Client sends shutdown request (optional)
{
  "jsonrpc": "2.0",
  "id": 999,
  "method": "shutdown"
}
// Connection closes
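The lifecycle can be modeled as a small state machine that rejects out-of-order messages; a toy sketch (state names and rules are illustrative, not taken from the SDK):

```python
from enum import Enum

class State(Enum):
    UNINITIALIZED = "uninitialized"
    ACTIVE = "active"
    SHUTDOWN = "shutdown"

class Lifecycle:
    """Enforce initialize -> active -> shutdown ordering for a session."""
    def __init__(self):
        self.state = State.UNINITIALIZED

    def handle(self, method: str) -> bool:
        if self.state is State.UNINITIALIZED:
            if method != "initialize":
                return False            # must negotiate capabilities first
            self.state = State.ACTIVE
            return True
        if self.state is State.ACTIVE:
            if method == "shutdown":
                self.state = State.SHUTDOWN
            return True                 # normal operation
        return False                    # no traffic after shutdown

session = Lifecycle()
assert not session.handle("tools/call")   # rejected before initialize
assert session.handle("initialize")
assert session.handle("tools/call")
```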
5. Transport Mechanisms
MCP supports two primary transport mechanisms:
STDIO (Standard Input/Output)
Used for local servers running as separate processes.
Characteristics
- Process-based communication
- Stdin for requests, stdout for responses
- Stderr for logging
- Simple, efficient for local integrations
When to Use
- Local development tools
- File system access
- Database connections
- Command-line utilities
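Over STDIO, each JSON-RPC message is framed as one newline-delimited JSON line, while stderr stays free for logging so it never corrupts the protocol stream. A minimal framing sketch (helper names are illustrative):

```python
import io
import json
from typing import IO

def write_message(stream: IO[str], msg: dict) -> None:
    """Serialize one JSON-RPC message as a single newline-terminated line."""
    stream.write(json.dumps(msg) + "\n")
    stream.flush()

def read_message(stream: IO[str]) -> dict:
    """Read exactly one framed message from the peer."""
    return json.loads(stream.readline())

# A real server would use sys.stdin / sys.stdout; a buffer works for illustration
buf = io.StringIO()
write_message(buf, {"jsonrpc": "2.0", "id": 1, "method": "ping"})
buf.seek(0)
msg = read_message(buf)
```

This is why a STDIO server must never print debug output to stdout: a stray print breaks message framing for the client.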
goose Configuration
extensions:
  filesystem:
    type: stdio
    command: "uv"
    args: ["run", "mcp-filesystem"]
    timeout: 300
SSE (Server-Sent Events over HTTP)
Used for remote servers accessed over HTTP/HTTPS.
Characteristics
- HTTP-based communication
- Long-lived connections
- Supports webhooks for notifications
- Suitable for hosted services
When to Use
- Remote APIs
- Cloud services
- Multi-user scenarios
- Centralized deployments
goose Configuration
extensions:
  remote_api:
    type: sse
    url: "https://api.example.com/mcp"
    headers:
      Authorization: "Bearer ${API_TOKEN}"
    timeout: 300
goose’s Extension Architecture
How goose Implements MCP
goose acts as an MCP host that manages multiple MCP clients, with each extension running as a separate MCP server. All of goose’s built-in extensions are MCP servers in their own right, and if you’d like to use the MCP servers included with goose with any other agent, you are free to do so.
Extension Categories in goose
1. Built-in Extensions: Shipped with goose
- Developer tools
- File system operations
- Screen capture
- Computer control
2. Remote Extensions: From MCP server directory
- GitHub integration
- Slack connector
- Database adapters
- API wrappers
3. Custom Extensions: Your own MCP servers
- Proprietary tools
- Internal APIs
- Domain-specific integrations
Extension Configuration
Extensions are configured in ~/.config/goose/config.yaml:
providers:
  openai:
    model: gpt-4o
    api_key_env: OPENAI_API_KEY

extensions:
  # Built-in extension
  developer:
    enabled: true

  # Command-line extension (STDIO)
  github:
    type: stdio
    command: "npx"
    args: ["-y", "@modelcontextprotocol/server-github"]
    env:
      GITHUB_TOKEN: "${GITHUB_TOKEN}"
    timeout: 300

  # Remote extension (SSE)
  custom_api:
    type: sse
    url: "https://mcp.company.com/api"
    headers:
      Authorization: "Bearer ${API_KEY}"
    timeout: 300

  # Local Python extension
  database_tools:
    type: stdio
    command: "uv"
    args: ["run", "/path/to/db-tools/.venv/bin/db-mcp"]
    env:
      DATABASE_URL: "${DATABASE_URL}"
    timeout: 600
Managing Extensions
Via CLI
Extensions can be managed through the CLI using the goose configure command.
# Open configuration wizard
goose configure
# Select "Add Extension"
# Choose extension type:
# - Command-line Extension (STDIO)
# - Remote Extension (SSE)
# - From Directory (pre-built servers)
# Example: Adding Wikipedia reader
goose configure
> Add Extension
> Command-line Extension
> Name: Wikipedia Reader
> Command: uvx mcp-wiki
> Timeout: 300
> Environment variables? No
Via Desktop UI
In the desktop interface, click the button in the top-left to open the sidebar, click the Extensions button, and under Extensions, click Add custom extension.
Steps:
- Open sidebar → Extensions
- Click “Add custom extension”
- Fill in details:
- Name
- Description
- Type (STDIO/SSE)
- Command/URL
- Environment variables
- Timeout
- Toggle extension on/off
- View available tools
Deep Link Protocol
Extensions can be installed using goose’s deep link protocol, with URL format varying based on the extension type:
goose://extension?cmd=npx&arg=-y&arg=@modelcontextprotocol/server-github&timeout=300&id=github&name=GitHub&description=GitHub%20integration
This enables one-click installation from web pages, documentation, or sharing links.
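The format repeats the arg key once per command argument; standard URL parsing collects repeated keys in order. A quick illustration:

```python
from urllib.parse import urlparse, parse_qs

link = ("goose://extension?cmd=npx&arg=-y"
        "&arg=%40modelcontextprotocol%2Fserver-github"
        "&timeout=300&id=github&name=GitHub")

parsed = urlparse(link)
params = parse_qs(parsed.query)

# Repeated keys become ordered lists; single keys are one-element lists
command = params["cmd"][0]
args = params["arg"]
```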
Building Custom MCP Extensions
Development Workflow
The process for building custom extensions involves initializing a project with uv, implementing the MCP server code, defining project configuration in pyproject.toml, testing with MCP Inspector, and integrating with goose.
Let’s build a practical example: a PostgreSQL database explorer extension.
Example: PostgreSQL MCP Server
Step 1: Project Setup
# Initialize project
uv init --lib mcp-postgres
cd mcp-postgres
# Create directory structure
mkdir -p src/mcp_postgres
touch src/mcp_postgres/server.py
touch src/mcp_postgres/__main__.py
Project Structure
mcp-postgres/
├── README.md
├── pyproject.toml
├── src/
│   └── mcp_postgres/
│       ├── __init__.py      # CLI entry point
│       ├── __main__.py      # Module execution
│       ├── server.py        # MCP server implementation
│       └── py.typed         # Type hints marker
└── uv.lock
Step 2: Implement Server Logic
src/mcp_postgres/server.py
import os
import psycopg2
from typing import List, Dict, Any
from mcp.server.fastmcp import FastMCP
from mcp.shared.exceptions import McpError
from mcp.types import ErrorData, INTERNAL_ERROR, INVALID_PARAMS

# Initialize MCP server
mcp = FastMCP("postgres-explorer")

def get_connection():
    """Get database connection from environment"""
    db_url = os.getenv("DATABASE_URL")
    if not db_url:
        raise McpError(
            ErrorData(
                INVALID_PARAMS,
                "DATABASE_URL environment variable not set"
            )
        )
    try:
        return psycopg2.connect(db_url)
    except Exception as e:
        raise McpError(
            ErrorData(INTERNAL_ERROR, f"Database connection failed: {str(e)}")
        ) from e
@mcp.tool()
def list_tables(schema: str = "public") -> List[Dict[str, str]]:
    """
    List all tables in the specified schema.

    Args:
        schema: Database schema name (default: public)

    Returns:
        List of tables with names and descriptions
    """
    conn = get_connection()
    try:
        with conn.cursor() as cur:
            cur.execute("""
                SELECT
                    table_name,
                    obj_description((table_schema||'.'||table_name)::regclass) as description
                FROM information_schema.tables
                WHERE table_schema = %s
                ORDER BY table_name
            """, (schema,))
            results = cur.fetchall()
            return [
                {
                    "table_name": row[0],
                    "description": row[1] or "No description"
                }
                for row in results
            ]
    finally:
        conn.close()
@mcp.tool()
def describe_table(table_name: str, schema: str = "public") -> Dict[str, Any]:
    """
    Get detailed schema information for a table.

    Args:
        table_name: Name of the table
        schema: Database schema (default: public)

    Returns:
        Table schema including columns, types, and constraints
    """
    conn = get_connection()
    try:
        with conn.cursor() as cur:
            # Get column information
            cur.execute("""
                SELECT
                    column_name,
                    data_type,
                    character_maximum_length,
                    is_nullable,
                    column_default
                FROM information_schema.columns
                WHERE table_schema = %s AND table_name = %s
                ORDER BY ordinal_position
            """, (schema, table_name))
            columns = [
                {
                    "name": row[0],
                    "type": row[1],
                    "max_length": row[2],
                    "nullable": row[3] == "YES",
                    "default": row[4]
                }
                for row in cur.fetchall()
            ]

            # Get primary key (%% escapes the literal % wildcard
            # because psycopg2 parameters are in use)
            cur.execute("""
                SELECT column_name
                FROM information_schema.key_column_usage
                WHERE table_schema = %s
                  AND table_name = %s
                  AND constraint_name LIKE '%%_pkey'
            """, (schema, table_name))
            primary_keys = [row[0] for row in cur.fetchall()]

            # Get indexes
            cur.execute("""
                SELECT indexname, indexdef
                FROM pg_indexes
                WHERE schemaname = %s AND tablename = %s
            """, (schema, table_name))
            indexes = [
                {"name": row[0], "definition": row[1]}
                for row in cur.fetchall()
            ]

            return {
                "table_name": table_name,
                "schema": schema,
                "columns": columns,
                "primary_keys": primary_keys,
                "indexes": indexes
            }
    finally:
        conn.close()
@mcp.tool(annotations={"readOnlyHint": True, "destructiveHint": False})
def execute_query(query: str, limit: int = 100) -> Dict[str, Any]:
    """
    Execute a SELECT query against the database.

    Args:
        query: SQL SELECT statement
        limit: Maximum number of rows to return (default: 100)

    Returns:
        Query results with column names and data
    """
    # Validate query is SELECT only
    if not query.strip().upper().startswith("SELECT"):
        raise McpError(
            ErrorData(
                INVALID_PARAMS,
                "Only SELECT queries are allowed"
            )
        )
    conn = get_connection()
    try:
        with conn.cursor() as cur:
            # Add LIMIT if not present
            if "LIMIT" not in query.upper():
                query = f"{query} LIMIT {limit}"
            cur.execute(query)
            # Get column names
            columns = [desc[0] for desc in cur.description]
            # Fetch results
            rows = cur.fetchall()
            return {
                "columns": columns,
                "rows": [
                    dict(zip(columns, row))
                    for row in rows
                ],
                "row_count": len(rows)
            }
    finally:
        conn.close()
@mcp.resource("postgres://schemas/{schema}/tables")
def get_schema_tables(schema: str) -> str:
    """Resource providing list of tables in a schema"""
    tables = list_tables(schema)
    return "\n".join([
        f"- {t['table_name']}: {t['description']}"
        for t in tables
    ])

@mcp.prompt()
def query_optimizer_prompt(query: str) -> str:
    """Prompt for optimizing SQL queries"""
    return f"""Analyze and optimize this SQL query:

{query}

Provide:
1. Performance analysis
2. Index recommendations
3. Query plan explanation
4. Optimized version

Consider:
- Join efficiency
- Index usage
- Subquery optimization
- N+1 query problems
"""
src/mcp_postgres/__init__.py
import argparse
import os
from .server import mcp

def main():
    """MCP PostgreSQL Explorer"""
    parser = argparse.ArgumentParser(
        description="PostgreSQL database explorer for MCP"
    )
    parser.add_argument(
        "--database-url",
        help="PostgreSQL connection URL (or set DATABASE_URL env var)"
    )
    args = parser.parse_args()
    if args.database_url:
        os.environ["DATABASE_URL"] = args.database_url
    mcp.run()

if __name__ == "__main__":
    main()
src/mcp_postgres/__main__.py
from mcp_postgres import main
main()
Step 3: Configure Project
pyproject.toml
[project]
name = "mcp-postgres"
version = "0.1.0"
description = "PostgreSQL explorer MCP server"
readme = "README.md"
requires-python = ">=3.10"
dependencies = [
"mcp[cli]>=1.2.0",
"psycopg2-binary>=2.9.9",
]
[project.scripts]
mcp-postgres = "mcp_postgres:main"
[project.optional-dependencies]
dev = [
"pytest>=7.4.0",
"pytest-asyncio>=0.21.0",
]
[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"
Step 4: Test with MCP Inspector
# Install dependencies
uv sync
# Activate virtual environment
source .venv/bin/activate
# Run in development mode
mcp dev src/mcp_postgres/server.py
Open http://localhost:5173 in your browser:
- Click “Connect” to initialize server
- Navigate to “Tools” tab
- Click “List Tools” to see available tools
- Test each tool with sample inputs
- Check “Resources” tab for available resources
- Test “Prompts” tab for defined prompts
Testing CLI
# Install locally
uv pip install .
# Verify installation
ls .venv/bin/ # Should see mcp-postgres
# Test help
mcp-postgres --help
# Test with database URL
DATABASE_URL="postgresql://user:pass@localhost/dbname" mcp-postgres
Step 5: Integrate with goose
Via CLI
goose configure
# Select "Add Extension"
# Choose "Command-line Extension"
# Enter details:
Name: PostgreSQL Explorer
Command: uv run /full/path/to/mcp-postgres/.venv/bin/mcp-postgres
Timeout: 300
Environment variables:
- DATABASE_URL: postgresql://user:pass@localhost/mydb
Via config.yaml
extensions:
  postgres:
    type: stdio
    command: "uv"
    args: ["run", "/Users/dev/mcp-postgres/.venv/bin/mcp-postgres"]
    env:
      DATABASE_URL: "${DATABASE_URL}"
    timeout: 300
    enabled: true
Via Desktop UI
- Open goose Desktop
- Click sidebar → Extensions
- Click “Add custom extension”
- Fill in:
  - Name: PostgreSQL Explorer
  - Type: STDIO
  - Command: uv run /full/path/to/mcp-postgres/.venv/bin/mcp-postgres
  - Environment: DATABASE_URL=postgresql://...
  - Timeout: 300
- Toggle ON
Step 6: Use the Extension
You: goose, what tools do you have?
Goose: I have access to these PostgreSQL tools:
- list_tables: List all tables in a schema
- describe_table: Get detailed table schema
- execute_query: Run SELECT queries
You: List all tables in our database
Goose: [Calls list_tables()]
I found these tables:
- users: User accounts and profiles
- posts: Blog posts and articles
- comments: User comments on posts
- tags: Content tags
You: Describe the users table
Goose: [Calls describe_table("users")]
The users table has the following structure:
Columns:
- id (uuid, primary key)
- email (varchar(255), not null, unique)
- username (varchar(50), not null)
- created_at (timestamp, default now())
- last_login (timestamp, nullable)
Indexes:
- users_pkey: PRIMARY KEY (id)
- users_email_idx: UNIQUE (email)
- users_username_idx: INDEX (username)
You: Show me the 5 most recent users
Goose: [Calls execute_query("SELECT * FROM users ORDER BY created_at DESC LIMIT 5")]
Here are the 5 most recent users:
1. john_doe (john@example.com) - Joined 2 hours ago
2. jane_smith (jane@example.com) - Joined 5 hours ago
...
Advanced Extension Patterns
1. Stateful Extensions
Some extensions need to maintain state across requests:
import uuid
from datetime import datetime
from typing import Any, Dict

from mcp.shared.exceptions import McpError
from mcp.types import ErrorData, INVALID_PARAMS

# Keep state at module level: FastMCP registers plain functions, so
# decorating methods of a class instance with @mcp.tool() would not bind self
session_data: Dict[str, Dict] = {}

@mcp.tool()
def start_session(user_id: str) -> Dict:
    """Initialize user session"""
    session_id = str(uuid.uuid4())
    session_data[session_id] = {
        "user_id": user_id,
        "started_at": datetime.now(),
        "context": {}
    }
    return {"session_id": session_id}

@mcp.tool()
def update_context(session_id: str, key: str, value: Any) -> Dict:
    """Update session context"""
    if session_id not in session_data:
        raise McpError(ErrorData(INVALID_PARAMS, "Invalid session"))
    session_data[session_id]["context"][key] = value
    return {"status": "updated"}
2. Streaming Responses
For long-running operations, stream results:
import glob
from typing import AsyncIterator

@mcp.tool()
async def analyze_codebase(directory: str) -> AsyncIterator[str]:
    """Analyze codebase with streaming progress updates"""
    files = glob.glob(f"{directory}/**/*.py", recursive=True)
    yield f"Found {len(files)} Python files\n"
    for i, file_path in enumerate(files):
        # Analyze file
        result = await analyze_file(file_path)
        yield f"[{i+1}/{len(files)}] {file_path}: {result}\n"
    yield "Analysis complete\n"
3. Resource Subscriptions
Enable real-time updates for resources:
@mcp.resource("file:///{path}")
async def watch_file(path: str) -> str:
    """Resource that supports subscriptions"""
    with open(path) as f:
        return f.read()

# When file changes, send notification
async def on_file_change(path: str):
    await mcp.notify_resource_updated(f"file:///{path}")
4. Error Handling Best Practices
from mcp.shared.exceptions import McpError
from mcp.types import ErrorData, INTERNAL_ERROR, INVALID_PARAMS

@mcp.tool()
def risky_operation(input_data: str) -> Dict:
    """Tool with comprehensive error handling"""
    try:
        # Validate input
        if not input_data:
            raise McpError(
                ErrorData(INVALID_PARAMS, "input_data cannot be empty")
            )
        # Perform operation
        result = perform_operation(input_data)
        return {"status": "success", "result": result}
    except McpError:
        # Re-raise MCP errors untouched
        raise
    except ValueError as e:
        # Input validation errors
        raise McpError(
            ErrorData(INVALID_PARAMS, f"Invalid input: {str(e)}")
        ) from e
    except ConnectionError as e:
        # Network/connection errors
        raise McpError(
            ErrorData(INTERNAL_ERROR, f"Connection failed: {str(e)}")
        ) from e
    except Exception as e:
        # Unexpected errors
        raise McpError(
            ErrorData(INTERNAL_ERROR, f"Unexpected error: {str(e)}")
        ) from e
5. Authentication and Security
import os
from functools import wraps

def require_auth(func):
    """Decorator to require authentication"""
    @wraps(func)
    def wrapper(*args, **kwargs):
        api_key = os.getenv("API_KEY")
        if not api_key:
            raise McpError(
                ErrorData(INVALID_PARAMS, "API_KEY not configured")
            )
        return func(*args, **kwargs)
    return wrapper

@mcp.tool()
@require_auth
def protected_operation(data: str) -> Dict:
    """Tool requiring authentication"""
    # Verify API key with external service
    if not verify_api_key(os.getenv("API_KEY")):
        raise McpError(
            ErrorData(INVALID_PARAMS, "Invalid API key")
        )
    return perform_protected_operation(data)
6. Rate Limiting
from collections import defaultdict
from datetime import datetime, timedelta

class RateLimiter:
    def __init__(self, max_calls: int, window_seconds: int):
        self.max_calls = max_calls
        self.window = timedelta(seconds=window_seconds)
        self.calls = defaultdict(list)

    def check_limit(self, identifier: str) -> bool:
        now = datetime.now()
        # Remove old calls outside window
        self.calls[identifier] = [
            call_time for call_time in self.calls[identifier]
            if now - call_time < self.window
        ]
        if len(self.calls[identifier]) >= self.max_calls:
            return False
        self.calls[identifier].append(now)
        return True

limiter = RateLimiter(max_calls=10, window_seconds=60)

@mcp.tool()
def rate_limited_api_call(endpoint: str) -> Dict:
    """Tool with rate limiting"""
    if not limiter.check_limit("api_calls"):
        raise McpError(
            ErrorData(
                INTERNAL_ERROR,
                "Rate limit exceeded. Try again in 60 seconds."
            )
        )
    return call_external_api(endpoint)
Production Best Practices
1. Logging and Observability
import logging
from mcp.server.fastmcp import FastMCP

# Configure logging (StreamHandler defaults to stderr, keeping stdout
# free for the protocol stream)
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
    handlers=[
        logging.FileHandler('/var/log/mcp-server.log'),
        logging.StreamHandler()
    ]
)
logger = logging.getLogger(__name__)

mcp = FastMCP("my-server")

@mcp.tool()
def monitored_operation(data: str) -> Dict:
    """Tool with comprehensive logging"""
    logger.info(f"Operation started with data: {data[:50]}...")
    try:
        result = perform_operation(data)
        logger.info("Operation completed successfully")
        return result
    except Exception as e:
        logger.error(f"Operation failed: {str(e)}", exc_info=True)
        raise
2. Configuration Management
# Pydantic v1 style shown; in Pydantic v2, import BaseSettings from pydantic_settings
from pydantic import BaseSettings, Field
from typing import Optional

class ServerConfig(BaseSettings):
    """Configuration with validation"""
    database_url: str = Field(..., env="DATABASE_URL")
    api_key: Optional[str] = Field(None, env="API_KEY")
    timeout: int = Field(300, env="TIMEOUT")
    max_connections: int = Field(10, env="MAX_CONNECTIONS")
    log_level: str = Field("INFO", env="LOG_LEVEL")

    class Config:
        env_file = ".env"
        env_file_encoding = "utf-8"

# Load configuration
config = ServerConfig()

# Use in tools
@mcp.tool()
def query_with_timeout(query: str) -> Dict:
    """Query with configured timeout"""
    conn = get_connection(
        config.database_url,
        timeout=config.timeout
    )
    # ... perform query
3. Testing Extensions
# tests/test_server.py
import pytest
from unittest.mock import patch, MagicMock
from mcp.shared.exceptions import McpError
from mcp_postgres.server import mcp

@pytest.mark.asyncio
async def test_list_tables():
    """Test list_tables tool"""
    # Mock database connection
    with patch('mcp_postgres.server.get_connection') as mock_conn:
        mock_cursor = MagicMock()
        mock_cursor.fetchall.return_value = [
            ('users', 'User accounts'),
            ('posts', 'Blog posts')
        ]
        mock_conn.return_value.cursor.return_value.__enter__.return_value = mock_cursor
        # Call tool
        result = await mcp.call_tool('list_tables', {'schema': 'public'})
        # Assert results
        assert len(result) == 2
        assert result[0]['table_name'] == 'users'

@pytest.mark.asyncio
async def test_execute_query_validation():
    """Test query validation"""
    with pytest.raises(McpError) as exc_info:
        await mcp.call_tool('execute_query', {'query': 'DELETE FROM users'})
    assert "Only SELECT queries are allowed" in str(exc_info.value)

@pytest.mark.asyncio
async def test_resource_access():
    """Test resource retrieval"""
    with patch('mcp_postgres.server.list_tables') as mock_list:
        mock_list.return_value = [
            {'table_name': 'users', 'description': 'User accounts'}
        ]
        result = await mcp.get_resource('postgres://schemas/public/tables')
        assert 'users' in result
4. Performance Optimization
from functools import lru_cache
import asyncio

# Cache expensive operations
@lru_cache(maxsize=128)
def get_table_schema(table_name: str) -> Dict:
    """Cached schema retrieval"""
    return fetch_schema_from_db(table_name)

# Batch operations
async def batch_process(items: List[str]) -> List[Dict]:
    """Process items concurrently"""
    tasks = [process_item(item) for item in items]
    return await asyncio.gather(*tasks)

# Connection pooling
from psycopg2.pool import SimpleConnectionPool

pool = SimpleConnectionPool(
    minconn=1,
    maxconn=10,
    dsn=config.database_url
)

def get_pooled_connection():
    """Get connection from pool"""
    return pool.getconn()

def return_connection(conn):
    """Return connection to pool"""
    pool.putconn(conn)
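A forgotten putconn leaks a pooled connection; wrapping the borrow/return pair in a context manager makes the return automatic even on error. A sketch that works with any pool exposing getconn/putconn:

```python
from contextlib import contextmanager

@contextmanager
def pooled_connection(pool):
    """Borrow a connection and guarantee its return, even on error."""
    conn = pool.getconn()
    try:
        yield conn
    finally:
        pool.putconn(conn)

# Usage:
# with pooled_connection(pool) as conn:
#     with conn.cursor() as cur:
#         cur.execute("SELECT 1")
```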
5. Security Hardening
import os
import re
from typing import List

# SQL injection prevention
def sanitize_identifier(identifier: str) -> str:
    """Sanitize SQL identifiers"""
    if not re.match(r'^[a-zA-Z_][a-zA-Z0-9_]*$', identifier):
        raise ValueError(f"Invalid identifier: {identifier}")
    return identifier

# Input validation
from pydantic import BaseModel, validator

class QueryInput(BaseModel):
    query: str
    limit: int = 100

    @validator('query')
    def validate_query(cls, v):
        if not v.strip().upper().startswith('SELECT'):
            raise ValueError('Only SELECT queries allowed')
        # Block dangerous patterns
        dangerous = ['DROP', 'DELETE', 'UPDATE', 'INSERT', 'ALTER', 'CREATE']
        if any(keyword in v.upper() for keyword in dangerous):
            raise ValueError('Query contains forbidden keywords')
        return v

    @validator('limit')
    def validate_limit(cls, v):
        if v < 1 or v > 1000:
            raise ValueError('Limit must be between 1 and 1000')
        return v

# Environment variable validation
def validate_environment():
    """Validate required environment variables"""
    required = ['DATABASE_URL', 'API_KEY']
    missing = [var for var in required if not os.getenv(var)]
    if missing:
        raise RuntimeError(f"Missing required environment variables: {missing}")
Publishing and Distribution
1. Publishing to PyPI
# Build package
uv build
# Upload to PyPI
uv publish --token $PYPI_TOKEN
# Install from PyPI
uvx mcp-postgres
pyproject.toml for PyPI
[project]
name = "mcp-postgres"
version = "0.1.0"
description = "PostgreSQL explorer MCP server"
readme = "README.md"
requires-python = ">=3.10"
license = {text = "MIT"}
authors = [
{name = "Your Name", email = "you@example.com"}
]
keywords = ["mcp", "postgresql", "database", "goose"]
classifiers = [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
]
[project.urls]
Homepage = "https://github.com/username/mcp-postgres"
Documentation = "https://github.com/username/mcp-postgres#readme"
Repository = "https://github.com/username/mcp-postgres"
Issues = "https://github.com/username/mcp-postgres/issues"
2. Creating Installation Links
Generate deep links for easy installation
from urllib.parse import urlencode

def generate_goose_link(extension_config: Dict) -> str:
    """Generate goose installation link"""
    # Use a list of pairs: the deep link format repeats the `arg` key,
    # which a plain dict would silently overwrite
    params = [
        ('cmd', extension_config['command']),
        ('timeout', extension_config.get('timeout', 300)),
        ('id', extension_config['id']),
        ('name', extension_config['name']),
        ('description', extension_config['description']),
    ]
    # Add args
    for arg in extension_config.get('args', []):
        params.append(('arg', arg))
    # Add env vars
    for key, value in extension_config.get('env', {}).items():
        params.append((f'env.{key}', value))
    base_url = "goose://extension"
    return f"{base_url}?{urlencode(params)}"
# Example usage
config = {
    'command': 'uvx',
    'args': ['mcp-postgres'],
    'timeout': 300,
    'id': 'postgres',
    'name': 'PostgreSQL Explorer',
    'description': 'Explore PostgreSQL databases',
    'env': {
        'DATABASE_URL': '${DATABASE_URL}'
    }
}

link = generate_goose_link(config)
print(f"Install link: {link}")
3. Documentation
Create comprehensive documentation in README.md:
# MCP PostgreSQL Explorer
PostgreSQL database explorer for Goose via Model Context Protocol.
## Features
- 📋 List all tables in a schema
- 🔍 Detailed table schema inspection
- 🔎 Execute SELECT queries safely
- 📊 Resource endpoints for real-time data
- 🎯 Optimized query prompts
## Installation
### Via uvx (Recommended)
uvx mcp-postgres
### Via pip
pip install mcp-postgres
### Via Goose
Click this link to install in Goose:
[Install PostgreSQL Explorer](goose://extension?cmd=uvx&arg=mcp-postgres&...)
Or manually:
goose configure
## Configuration
Set the `DATABASE_URL` environment variable:
export DATABASE_URL="postgresql://user:password@localhost:5432/mydb"
Or in Goose config:
extensions:
  postgres:
    type: stdio
    command: "uvx"
    args: ["mcp-postgres"]
    env:
      DATABASE_URL: "${DATABASE_URL}"
    timeout: 300
## Usage
Once installed, you can ask Goose:
- "List all tables in the database"
- "Describe the users table"
- "Show me the 10 most recent orders"
- "What's the schema of the products table?"
## Tools
### list_tables
List all tables in a schema.
**Parameters:**
- `schema` (optional): Schema name (default: "public")
**Returns:** List of tables with descriptions
### describe_table
Get detailed schema information.
**Parameters:**
- `table_name`: Name of the table
- `schema` (optional): Schema name (default: "public")
**Returns:** Complete table schema
### execute_query
Execute SELECT queries safely.
**Parameters:**
- `query`: SQL SELECT statement
- `limit` (optional): Max rows (default: 100)
**Returns:** Query results
## Resources
- `postgres://schemas/{schema}/tables` - List of tables in schema
## Security
- Only SELECT queries allowed
- SQL injection prevention
- Connection pooling
- Rate limiting
## Development
```bash
git clone https://github.com/username/mcp-postgres
cd mcp-postgres
uv sync
uv run pytest
mcp dev src/mcp_postgres/server.py
```
## License
MIT License - see LICENSE file
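The Security section above claims "only SELECT queries allowed" without showing the guard itself. A minimal sketch of such a check might look like the following; string filtering is a first line of defense only, and a real server should also connect with a read-only database role:

```python
import re

# Statement keywords that mutate state and must never appear
_FORBIDDEN = re.compile(
    r"(?i)\b(insert|update|delete|drop|alter|create|grant|truncate)\b"
)

def is_safe_select(query: str) -> bool:
    """Allow a single SELECT statement; reject everything else."""
    stripped = query.strip().rstrip(";")
    if ";" in stripped:
        # More than one statement stacked together
        return False
    if not re.match(r"(?i)^\s*select\b", stripped):
        # Must start with SELECT
        return False
    # Reject mutating keywords hidden inside a SELECT-looking string
    return not _FORBIDDEN.search(stripped)
```

The `execute_query` tool described above would call this before running anything against the database.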
Advanced Topics
1. Multi-Protocol Support
Supporting both STDIO and SSE
```python
import sys

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("multi-protocol-server")

# ... define tools, resources, prompts ...

if __name__ == "__main__":
    # FastMCP selects the transport by name: run an SSE server when
    # started with --sse, otherwise speak STDIO (Goose's default)
    mcp.run(transport="sse" if "--sse" in sys.argv else "stdio")
```
2. Dynamic Tool Generation
Generate tools dynamically based on configuration
```python
from typing import Dict

import yaml

def generate_tools_from_config(config_path: str):
    """Generate MCP tools from a configuration file"""
    with open(config_path) as f:
        config = yaml.safe_load(f)

    for endpoint in config['endpoints']:
        def create_tool(endpoint_config):
            def dynamic_tool(**kwargs) -> Dict:
                return call_api(
                    endpoint_config['url'],
                    method=endpoint_config['method'],
                    params=kwargs
                )
            # Set the name and docstring before registration so the MCP
            # tool picks them up (a bare f-string is not a docstring)
            dynamic_tool.__name__ = endpoint_config['name']
            dynamic_tool.__doc__ = endpoint_config['description']
            return mcp.tool()(dynamic_tool)

        create_tool(endpoint)

# Load tools from config
generate_tools_from_config('tools_config.yaml')
```
tools_config.yaml
```yaml
endpoints:
  - name: get_user
    description: "Retrieve user information"
    url: "https://api.example.com/users/{user_id}"
    method: GET
    parameters:
      - name: user_id
        type: string
        required: true
  - name: create_user
    description: "Create new user"
    url: "https://api.example.com/users"
    method: POST
    parameters:
      - name: email
        type: string
        required: true
      - name: name
        type: string
        required: true
```
3. Composite Extensions
Combine multiple data sources in one extension
```python
from typing import Dict

# FastMCP registers plain functions, so shared clients live in one
# object rather than on a tool method's `self`
class CompositeClients:
    def __init__(self):
        self.postgres = PostgresClient()
        self.redis = RedisClient()
        self.elasticsearch = ESClient()

clients = CompositeClients()

@mcp.tool()
def full_text_search(query: str) -> Dict:
    """Search across multiple data sources"""
    # Search Elasticsearch first
    es_results = clients.elasticsearch.search(query)

    # Enrich each hit with database and cache data
    enriched = []
    for result in es_results:
        db_data = clients.postgres.get_record(result['id'])
        cache_data = clients.redis.get(f"meta:{result['id']}")
        enriched.append({
            **result,
            'details': db_data,
            'cached_meta': cache_data
        })

    return {'results': enriched, 'total': len(enriched)}
```
4. Webhook Integration
Support real-time updates via webhooks
```python
from threading import Thread

from fastapi import FastAPI, Request
from mcp.server.fastmcp import FastMCP

app = FastAPI()
mcp = FastMCP("webhook-server")

@app.post("/webhook/{resource_id}")
async def handle_webhook(resource_id: str, request: Request):
    """Receive webhook and notify clients"""
    data = await request.json()

    # Update internal state
    update_resource(resource_id, data)

    # Notify MCP clients of the resource update
    await mcp.notify_resource_updated(f"webhook://{resource_id}")

    return {"status": "received"}

def run_webhook_server():
    """Run webhook server in background"""
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8000)

# Start webhook server in a separate thread
thread = Thread(target=run_webhook_server, daemon=True)
thread.start()

# Run MCP server
mcp.run()
```
Troubleshooting
Common Issues and Solutions
1. Connection Timeout
Problem: Extension times out during initialization
Solutions:
- Increase the timeout in config (e.g. `timeout: 600`)
- Optimize initialization code
- Move expensive operations to lazy loading
```python
class LazyConnection:
    def __init__(self):
        self._conn = None

    def get_connection(self):
        # Connect on first use instead of at extension startup
        if self._conn is None:
            self._conn = create_connection()
        return self._conn
```
2. Environment Variables Not Loading
Problem: Extension can’t access environment variables
Solutions:
- Verify environment variables are set in the goose config
- Use `${VAR_NAME}` syntax in config.yaml
- Check variable expansion: `echo $DATABASE_URL`

```yaml
extensions:
  myext:
    env:
      API_KEY: "${MY_API_KEY}"   # Correct
      # API_KEY: "$MY_API_KEY"   # Wrong
```
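Rather than debugging a half-initialized extension, it can help to fail fast at startup when a required variable is absent. A small sketch (the `REQUIRED_VARS` list is hypothetical, specific to your extension):

```python
import os

# Hypothetical list of variables this extension needs
REQUIRED_VARS = ["DATABASE_URL"]

def missing_env(names):
    """Return the names that are unset or empty in the environment."""
    return [name for name in names if not os.environ.get(name)]

def ensure_env():
    """Raise a clear error instead of failing later with a vague one."""
    missing = missing_env(REQUIRED_VARS)
    if missing:
        raise RuntimeError(
            f"Missing environment variables: {', '.join(missing)}"
        )
```

Calling `ensure_env()` before `mcp.run()` turns a silent misconfiguration into an explicit error in the goose logs.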
3. Tool Not Appearing in Goose
Problem: Defined tools don’t show up
Solutions:
- Verify extension is enabled
- Check MCP Inspector shows tools
- Restart Goose after changes
- Check logs for errors
```bash
# Test with MCP Inspector
mcp dev src/my_extension/server.py

# Check Goose logs
tail -f ~/.config/goose/logs/goose.log
```
4. Permission Denied Errors
Problem: Extension can’t access resources
Solutions:
- Check file permissions
- Verify user has necessary access
- Use absolute paths
- Check SELinux/AppArmor policies
```bash
# Fix permissions
chmod +x .venv/bin/mcp-myext

# Check access
ls -la /path/to/resource
```
5. Import Errors
Problem: Python modules not found
Solutions:
- Install all dependencies: `uv sync`
- Verify the virtual environment is activated
- Check `pyproject.toml` dependencies
- Use absolute imports
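A quick way to confirm the dependency story before starting the server is to probe for each module with `importlib.util.find_spec`, which checks availability without importing anything (the module names below are illustrative):

```python
import importlib.util

def missing_modules(names):
    """Return the modules from names that cannot be found on sys.path."""
    return [name for name in names if importlib.util.find_spec(name) is None]

# Example: verify the extension's dependencies at startup
print(missing_modules(["json", "definitely_not_installed_xyz"]))
# ['definitely_not_installed_xyz']
```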
Performance Optimization
Benchmarking Extensions
```python
import functools
import logging
import time

logger = logging.getLogger(__name__)

def benchmark(func):
    """Decorator to measure tool execution time"""
    @functools.wraps(func)
    async def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            result = await func(*args, **kwargs)
            duration = time.perf_counter() - start
            logger.info(f"{func.__name__} completed in {duration:.3f}s")
            return result
        except Exception as e:
            duration = time.perf_counter() - start
            logger.error(f"{func.__name__} failed after {duration:.3f}s: {e}")
            raise
    return wrapper

@mcp.tool()
@benchmark
async def slow_operation(data: str) -> Dict:
    """Benchmarked tool"""
    return await perform_operation(data)
```
Profiling
```python
import cProfile
import pstats

def profile_tool():
    """Profile tool performance"""
    profiler = cProfile.Profile()
    profiler.enable()

    # Run operation
    result = expensive_operation()

    profiler.disable()
    stats = pstats.Stats(profiler)
    stats.sort_stats('cumulative')
    stats.print_stats(20)  # Top 20 functions

    return result
```
Real-World Extension Examples
1. GitHub Integration
```python
import os
from typing import Dict

from github import Github  # PyGithub

@mcp.tool()
def create_pull_request(
    repo: str,
    title: str,
    body: str,
    base: str = "main",
    head: str = "feature"
) -> Dict:
    """Create GitHub pull request"""
    github = Github(os.getenv("GITHUB_TOKEN"))
    repository = github.get_repo(repo)

    pr = repository.create_pull(
        title=title,
        body=body,
        base=base,
        head=head
    )

    return {
        "number": pr.number,
        "url": pr.html_url,
        "state": pr.state
    }

@mcp.resource("github://{owner}/{repo}/issues")
def get_issues(owner: str, repo: str) -> str:
    """Get repository issues as resource"""
    github = Github(os.getenv("GITHUB_TOKEN"))
    repository = github.get_repo(f"{owner}/{repo}")
    issues = repository.get_issues(state="open")

    return "\n".join([
        f"#{issue.number}: {issue.title} ({issue.state})"
        for issue in issues[:50]
    ])
```
2. Slack Integration
```python
import os
from typing import Dict, List

from slack_sdk import WebClient

@mcp.tool()
def send_slack_message(channel: str, message: str) -> Dict:
    """Send message to Slack channel"""
    client = WebClient(token=os.getenv("SLACK_TOKEN"))
    response = client.chat_postMessage(
        channel=channel,
        text=message
    )
    return {
        "ok": response["ok"],
        "ts": response["ts"],
        "channel": response["channel"]
    }

@mcp.tool()
def search_slack_messages(query: str, count: int = 20) -> List[Dict]:
    """Search Slack messages"""
    client = WebClient(token=os.getenv("SLACK_TOKEN"))
    response = client.search_messages(
        query=query,
        count=count
    )
    return [
        {
            "text": msg["text"],
            "user": msg["username"],
            "channel": msg["channel"]["name"],
            "timestamp": msg["ts"]
        }
        for msg in response["messages"]["matches"]
    ]
```
3. Jira Integration
```python
import os
from typing import Dict

from jira import JIRA

@mcp.tool()
def create_jira_ticket(
    project: str,
    summary: str,
    description: str,
    issue_type: str = "Task"
) -> Dict:
    """Create Jira ticket"""
    jira = JIRA(
        server=os.getenv("JIRA_URL"),
        basic_auth=(
            os.getenv("JIRA_EMAIL"),
            os.getenv("JIRA_API_TOKEN")
        )
    )

    issue = jira.create_issue(
        project=project,
        summary=summary,
        description=description,
        issuetype={'name': issue_type}
    )

    return {
        "key": issue.key,
        "url": f"{os.getenv('JIRA_URL')}/browse/{issue.key}",
        "status": str(issue.fields.status)
    }
```
Conclusion
The Model Context Protocol represents a paradigm shift in how AI systems integrate with external tools and data sources. By providing a standardized, secure, and extensible framework, MCP enables developers to build powerful integrations that extend Goose’s capabilities far beyond its core functionality.
Key Takeaways
- MCP is Universal: Extensions built for Goose work with other MCP-compatible clients
- Three Primitives: Master prompts, resources, and tools to build comprehensive integrations
- Security First: Always validate inputs, handle errors gracefully, and protect sensitive data
- Test Thoroughly: Use MCP Inspector and unit tests to ensure reliability
- Document Well: Clear documentation ensures adoption and maintainability
- Think Composable: Build focused extensions that do one thing well
Moving Forward
The MCP ecosystem is rapidly growing. As you build extensions:
- Share your work: Publish to PyPI and the MCP servers directory
- Contribute back: Help improve MCP SDKs and documentation
- Join the community: Connect with other developers building MCP servers
- Stay updated: MCP is evolving—follow the specification for new features
Whether you’re building internal tools, integrating with third-party APIs, or creating entirely new workflows, MCP provides the foundation for robust, maintainable AI integrations.
Now go build something amazing!
Additional Resources
Happy building! If you create an interesting MCP extension, share it with the community; we’d love to see what you build! 🦢