Build F1 MCP Server in VS Code with Python & Copilot

Wrap fastf1 Python package functions into an MCP server using fastmcp; load F1 sessions, compare drivers, analyze tire strategy via Copilot Chat in VS Code.

Environment Setup and F1 Data Loading

Create a project directory (mkdir f1-race-engineer-mcp), open it in VS Code Insiders, and set up a Python virtual environment: python3 -m venv .venv, then activate with source .venv/bin/activate. Upgrade pip (pip install --upgrade pip) and install dependencies: pip install fastf1 pandas matplotlib pytest. Validate the imports with python -c "import fastf1; import pandas; print(fastf1.__version__)".

Use fastf1 to load immutable historical F1 session data (e.g., 2023 Monaco Qualifying). Enable the cache once with fastf1.Cache.enable_cache("cache"), then define the loader in app/data_loader.py:

def load_session(year, gp, session_type):
    session = fastf1.get_session(year, gp, session_type)
    session.load()
    return session

Run it via python -c "from app.data_loader import load_session; print(load_session(2023, 'Monaco', 'Q'))". The cache creates a SQLite database in ./cache/ with data for all 20 drivers, including laps, sector times, and driver info (name, team, etc.). For interactive testing, open a REPL (python), paste the function, and inspect structures such as session.laps (columns: Time, DriverNumber, LapTime, Sector1Time, etc.).

Build additional functions: get_tire_strategy(session, driver) analyzes tire usage; compare_drivers(session, driver1, driver2) returns fastest laps, sector deltas, throttle data.
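A minimal sketch of what these two helpers might look like, assuming a session whose .laps table is shaped like fastf1's (Driver, LapTime, Sector1Time–Sector3Time, Stint, and Compound are real fastf1 lap columns); the function bodies here are illustrative, not the repo's actual code:

```python
import pandas as pd

def compare_drivers(session, driver1: str, driver2: str) -> str:
    """Fastest-lap and sector-delta comparison between two drivers (sketch)."""
    laps = session.laps
    best1 = laps[laps["Driver"] == driver1].sort_values("LapTime").iloc[0]
    best2 = laps[laps["Driver"] == driver2].sort_values("LapTime").iloc[0]
    lines = [f"Fastest lap: {driver1} {best1['LapTime']} | {driver2} {best2['LapTime']}"]
    for sector in ("Sector1Time", "Sector2Time", "Sector3Time"):
        delta = (best1[sector] - best2[sector]).total_seconds()
        lines.append(f"{sector}: {delta:+.3f}s ({driver1} minus {driver2})")
    return "\n".join(lines)

def get_tire_strategy(session, driver: str) -> str:
    """Stint-by-stint tire usage for one driver (sketch)."""
    laps = session.laps[session.laps["Driver"] == driver]
    stints = laps.groupby(["Stint", "Compound"]).size()
    return "\n".join(
        f"Stint {int(stint)}: {compound}, {count} laps"
        for (stint, compound), count in stints.items()
    )
```

Both return plain strings, which keeps them easy to surface later as MCP tool output.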

Automated Testing with Custom Copilot Agent

Skip manual TDD; instead, configure a custom agent in VS Code (.github/agents/python-test-agent.json): name it "Python test agent" with a description covering pytest case generation and debugging. Grant it tools: VS Code APIs (execute, read, edit, search) and the Microsoft Docs MCP. Instructions: work in ./tests/, prefix files test_*.py, use standalone classes with plain assert statements, follow the AAA pattern (Arrange/Act/Assert), put fixtures in conftest.py, mock external dependencies (e.g., fastf1), add no dependencies beyond pytest/pytest-mock, and write table-driven tests.

Prompt the agent in Copilot Chat: "Write a comprehensive pytest suite for app/data_loader.py, comparisons.py, strategy.py." The agent scans the codebase, creates a to-do list (fixtures first), then generates conftest.py (mocking fastf1), test_data_loader.py (covering load_session edge cases such as an invalid GP name), and so on. If it tries to set up the venv, tell it the virtual environment is already active. It then runs pytest: 21 passed, 1 warning. Review and keep the changes for a verifiable suite covering data loading, comparisons, and strategy.
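As a sketch of the style these instructions produce (AAA structure, mocked fastf1, table-driven cases), a test the agent might emit could look like the following; load_session is inlined here so the example is self-contained, and all names are illustrative:

```python
# tests/test_data_loader.py (illustrative sketch)
from unittest.mock import MagicMock

# Stand-in for app/data_loader.py, inlined for a self-contained example.
fastf1 = MagicMock()

def load_session(year, gp, session_type):
    session = fastf1.get_session(year, gp, session_type)
    session.load()
    return session

# Table-driven cases: (year, grand prix, session type)
CASES = [
    (2023, "Monaco", "Q"),
    (2023, "Monza", "R"),
]

def test_load_session_returns_loaded_session():
    for year, gp, session_type in CASES:
        # Arrange: fresh mock so no network or cache is ever touched
        fake_session = MagicMock()
        fastf1.get_session = MagicMock(return_value=fake_session)
        # Act
        result = load_session(year, gp, session_type)
        # Assert
        fastf1.get_session.assert_called_once_with(year, gp, session_type)
        fake_session.load.assert_called_once()
        assert result is fake_session
```

In the real suite, the mock would live in conftest.py as a fixture and the case table would feed pytest.mark.parametrize.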

MCP Server Wrapper and VS Code Integration

Install fastmcp (pip install fastmcp). In app/mcp_server.py, import the app functions and expose them as tools with @mcp.tool():

from fastmcp import FastMCP

from app.comparisons import compare_drivers
from app.data_loader import load_session
from app.strategy import get_tire_strategy

mcp = FastMCP("F1 Engineer")

@mcp.tool()
def load_session_tool(year: int, gp: str, session_type: str) -> str:
    """Load a historical F1 session and return a short summary."""
    session = load_session(year, gp, session_type)
    return str(session)  # or any formatted summary of the session

@mcp.tool()
def compare_drivers_tool(year: int, gp: str, session_type: str,
                         driver1: str, driver2: str) -> str:
    """Compare two drivers' fastest laps and sector deltas."""
    # MCP tool arguments must be JSON-serializable, so the session is
    # loaded here (from cache after the first call) rather than passed in.
    session = load_session(year, gp, session_type)
    return compare_drivers(session, driver1, driver2)  # formatted delta table

@mcp.tool()
def get_tire_strategy_tool(year: int, gp: str, session_type: str, driver: str) -> str:
    """Analyze one driver's tire stints for the session."""
    session = load_session(year, gp, session_type)
    return get_tire_strategy(session, driver)  # tire stint analysis

if __name__ == "__main__":
    mcp.run(transport="stdio")

Add the server to VS Code: Cmd+Shift+P > "MCP: Add Server" > STDIO, command .venv/bin/python app/mcp_server.py, name "F1 Engineer MCP", workspace scope. The server advertises its three tools.
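The "MCP: Add Server" flow writes this configuration to .vscode/mcp.json; a sketch of the resulting entry (field names follow VS Code's MCP config format, so verify against your VS Code Insiders build):

```json
{
  "servers": {
    "F1 Engineer MCP": {
      "type": "stdio",
      "command": ".venv/bin/python",
      "args": ["app/mcp_server.py"]
    }
  }
}
```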

Query in Copilot Chat: "Compare Leclerc and Verstappen in 2024 Monaco qualifying." Copilot auto-selects the tools: it loads the session (after user approval), invokes the driver comparison tool, and outputs a side-by-side view of lap times and sector deltas (Leclerc vs. Verstappen). The result is natural-language F1 analysis over locally cached data.

Video description
In this video Liam will show you how to create and install a Formula 1 inspired MCP Server in Python using the FastMCP library. He explains and shows you the client/server model, the transport used with STDIO, tool discovery, tool invocation and the schema discipline. 🔗 Repo: https://github.com/liamchampton/f1-race-engineer-mcp 🤝 Connect with Liam: https://www.linkedin.com/in/liam-conroy-hampton/ #vscode #mcpserver

Summarized by x-ai/grok-4.1-fast via openrouter


© 2026 Edge