Thesis-Driven Research Over Ticker Summaries

Most stock assistants summarize companies from tickers; this copilot instead tests specific claims such as "AAPL downside controlled, business quality high over 180 days." It outputs structured memos with six parts: (1) restated thesis, (2) supporting evidence (e.g., -13.82% max drawdown, 35.37% operating margin, 152.02% ROE, 15.70% revenue growth), (3) weakening evidence (e.g., -3 net EPS revisions), (4) missing evidence, (5) verdict (e.g., partially_supported), (6) bottom line. The workflow parses the thesis into {tickers: "AAPL", lookback_days: 180, thesis: "...", mode: "single"}, fetches data, computes signals, maps them to supporting or contradicting evidence, and assigns a verdict. Hard limits prevent abuse: at most 365 lookback days, 5 tickers, and 10 tool calls per run.
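The guardrail step can be sketched as a small validation function. This is a minimal sketch under assumptions: the function name `validate_plan` and the raised errors are illustrative, not the copilot's actual API; only the limits (365 days, 5 tickers) and the plan shape come from the text.

```python
# Hypothetical guardrail layer; names are illustrative, limits are from the spec.
MAX_LOOKBACK_DAYS = 365
MAX_TICKERS = 5

def validate_plan(plan: dict) -> dict:
    """Clamp a parsed thesis plan to the abuse limits before any tool calls."""
    tickers = [t.strip().upper() for t in plan["tickers"].split(",") if t.strip()]
    if not tickers:
        raise ValueError("no tickers parsed from thesis")
    if len(tickers) > MAX_TICKERS:
        raise ValueError(f"too many tickers: {len(tickers)} > {MAX_TICKERS}")
    # Clamp rather than reject the lookback window.
    lookback = min(int(plan.get("lookback_days", 180)), MAX_LOOKBACK_DAYS)
    return {
        "tickers": ",".join(tickers),
        "lookback_days": lookback,
        "thesis": plan["thesis"],
        "mode": "single" if len(tickers) == 1 else "compare",
    }
```

The 10-tool-call ceiling is enforced separately at call time, since the plan alone does not know how many fetches each ticker will need.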

MCP Client for Reliable Financial Data Access

client.py wraps the EODHD MCP server: the EODHDMCP class initializes with an API key and base_url="https://mcp.eodhd.dev/mcp". list_tools() caches tool names; call_tool(name, args, trace_id, timeout_s=25, retries=2) manages sessions and asyncio waits, then returns the output plus metadata (trace_id, tool, args, latency_s). Every call is traced for inspectability. Prices come from "get_historical_stock_prices" ({ticker, start_date, end_date, period: "d", fmt: "json"}), yielding a DataFrame of date/close; fundamentals come from "get_fundamentals_data" ({ticker, include_financials: False, fmt: "json"}) as a dict. Helpers such as to_text(out) normalize outputs, and bump_tool_call(state, meta) tracks usage against the tool-call budget.
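The retry-and-trace pattern around call_tool can be sketched as below. This is a simplified stand-in, not client.py's actual code: the `transport` callable substitutes for the real MCP session and asyncio machinery, and the `attempt` field is an assumption added for illustration; only call_tool's signature and the metadata keys (trace_id, tool, args, latency_s) come from the text.

```python
import time
import uuid

def call_tool(transport, name, args, trace_id=None, timeout_s=25, retries=2):
    """Call a tool via `transport`, retrying on failure and attaching trace metadata.

    `transport` is a hypothetical stand-in for the real MCP session:
    a callable (name, args, timeout_s) -> output.
    """
    trace_id = trace_id or str(uuid.uuid4())
    last_err = None
    for attempt in range(retries + 1):
        t0 = time.monotonic()
        try:
            output = transport(name, args, timeout_s)
            meta = {
                "trace_id": trace_id,
                "tool": name,
                "args": args,
                "latency_s": round(time.monotonic() - t0, 3),
                "attempt": attempt,  # illustrative extra field
            }
            return output, meta
        except Exception as e:
            last_err = e
    raise RuntimeError(f"{name} failed after {retries + 1} attempts") from last_err
```

Returning metadata alongside every output is what makes the memo's evidence inspectable: each number in the final verdict can be traced back to the tool call that produced it.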

Signal Computation for Evidence Layers

From the prices DataFrame, compute_price_signals() yields a dict: n_points, start/end_price, ret_total (end/start - 1), vol_daily and vol_annualized (daily std * sqrt(252)), ret_to_vol ratio, max_drawdown (min of close/cummax - 1), and trend_slope (linear fit of log(close)). For example, contained downside maps to a low max_drawdown; quality returns map to a high ret_to_vol. Fundamentals use _to_float(x) to clean raw values, extracting margins (operating/profit), returns (ROA/ROE), growth (quarterly revenue/earnings YoY, forward estimates), and revisions (net EPS over 30 days). Python computes explicit signals first, avoiding LLM hallucination on raw data, and feeds stable numeric inputs to the later reasoning step that maps evidence to support or contradiction.
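The price-signal computation can be sketched directly from the definitions above. A sketch under assumptions: the DataFrame is taken to have a float-convertible "close" column sorted by date, and the exact field names mirror the text; the real compute_price_signals() may differ in details.

```python
import numpy as np
import pandas as pd

def compute_price_signals(prices: pd.DataFrame) -> dict:
    """Compute the explicit price signals described above from a date-sorted
    DataFrame with a 'close' column."""
    close = prices["close"].astype(float)
    rets = close.pct_change().dropna()
    vol_daily = float(rets.std())
    vol_ann = vol_daily * float(np.sqrt(252))       # annualize daily std
    ret_total = float(close.iloc[-1] / close.iloc[0] - 1.0)
    drawdown = close / close.cummax() - 1.0          # distance from running peak
    # Slope of a linear fit to log(close): approximate daily log-growth rate.
    slope = float(np.polyfit(np.arange(len(close)), np.log(close), 1)[0])
    return {
        "n_points": len(close),
        "start_price": float(close.iloc[0]),
        "end_price": float(close.iloc[-1]),
        "ret_total": ret_total,
        "vol_daily": vol_daily,
        "vol_annualized": vol_ann,
        "ret_to_vol": ret_total / vol_ann if vol_ann else float("nan"),
        "max_drawdown": float(drawdown.min()),
        "trend_slope": slope,
    }
```

On a monotonically rising series, max_drawdown is 0 and trend_slope is positive, which is exactly the "contained downside, quality returns" pattern the thesis check looks for.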