TECHNOLOGY · GUIDE

AI Stack for the Modern FO


An honest account of the AI tools I use in a working family office, what they’re good for, and where the limits are.

Research & Synthesis

Claude (Anthropic) — Primary assistant for document analysis, investment memo drafts, and complex reasoning tasks. Best-in-class for long documents and nuanced instruction-following.

Perplexity Pro — For rapid factual synthesis with citations. Better than Google for financial research queries. Use it to find primary sources, then read those.

NotebookLM (Google) — For building a private knowledge base from uploaded PDFs (annual reports, manager letters, research). The podcast-style summaries are underrated.

Data & Analytics

Python + Pandas + Claude — I’m not a developer. Claude writes most of the code. I describe what I need; it builds it. Portfolio analytics, return calculations, correlation matrices.
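To give a flavour of what that looks like in practice, here is a minimal sketch of the kind of script I mean. The prices are synthetic and the sleeve names are placeholders, illustration only:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Synthetic daily prices for three hypothetical portfolio sleeves (placeholder data)
dates = pd.bdate_range("2024-01-02", periods=250)
prices = pd.DataFrame(
    100 * np.exp(np.cumsum(rng.normal(0.0003, 0.01, (250, 3)), axis=0)),
    index=dates,
    columns=["Equities", "Credit", "Macro"],
)

returns = prices.pct_change().dropna()   # daily simple returns
total_return = (1 + returns).prod() - 1  # period total return per sleeve
corr = returns.corr()                    # correlation matrix across sleeves

print(total_return.round(4))
print(corr.round(2))
```

The point is not the code itself but that describing this in plain English ("daily returns, total return per sleeve, correlation matrix") is enough for an assistant to produce it.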

Koyfin — Bloomberg alternative at a fraction of the cost. Adequate for most equity and macro data needs.

FRED (St. Louis Fed) — Free, reliable macro data with a simple API; plugs directly into Python workflows.
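For reference, a sketch of turning a FRED response into a pandas Series. The live route is FRED's `series/observations` endpoint (needs a free API key); the JSON payload below is a hand-made example, not real data:

```python
import pandas as pd

# Live request would look like (requires a free FRED API key):
# requests.get("https://api.stlouisfed.org/fred/series/observations",
#              params={"series_id": "CPIAUCSL", "api_key": KEY, "file_type": "json"})

# Hypothetical slice of a FRED JSON response; "." marks a missing value
sample = {
    "observations": [
        {"date": "2024-01-01", "value": "308.417"},
        {"date": "2024-02-01", "value": "310.326"},
        {"date": "2024-03-01", "value": "."},
    ]
}

def fred_to_series(payload: dict) -> pd.Series:
    """Convert a FRED observations payload into a float Series indexed by date."""
    obs = payload["observations"]
    s = pd.Series(
        [o["value"] for o in obs],
        index=pd.to_datetime([o["date"] for o in obs]),
    )
    return pd.to_numeric(s, errors="coerce")  # "." placeholders become NaN

series = fred_to_series(sample)
```

Once it is a Series, it drops straight into the same pandas workflows as the portfolio data.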

Productivity & Knowledge Management

Notion — Investment thesis database, manager notes, deal tracking. The AI features are useful for summarising long entries.

Obsidian — Private local knowledge base for sensitive notes. No cloud sync; notes live as plain Markdown files on disk, so pair it with full-disk encryption.

Linear — Task and project management. Overkill for a solo operation but worth it for clarity.

Communications

Superhuman — Email triage. The AI summary and triage features genuinely save an hour a day.

What I’m Watching

Local LLM infrastructure (Ollama + Llama / Mistral) — For sensitive data that shouldn’t go to cloud APIs. Still early but improving fast.
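A minimal sketch of what the local setup looks like against Ollama's REST API, stdlib only. It assumes `ollama serve` is running on the default port and the model has already been pulled:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(prompt: str, model: str = "llama3") -> dict:
    """Payload for a single non-streaming completion."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask_local(prompt: str, model: str = "llama3") -> str:
    """Send a prompt to a local Ollama server and return its reply.

    Nothing leaves the machine, which is the whole point for sensitive data.
    """
    data = json.dumps(build_request(prompt, model)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires a running Ollama server):
# print(ask_local("Summarise this manager letter in three bullets: ..."))
```

Output quality still trails the frontier cloud models, but for triaging confidential documents the privacy trade-off can be worth it.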

Structured data extraction tools — Turning PDFs (fund fact sheets, manager reports) into structured data automatically. Several startups working on this specifically for FOs.
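Until those tools mature, a crude version is doable today: extract the text layer from the PDF (pdfplumber or similar), then regex out the labelled fields. The fact-sheet format below is invented for illustration:

```python
import re

# Hypothetical text extracted from a fund fact sheet PDF (invented example)
page_text = """
Fund: Example Partners Offshore Ltd
NAV per share: 1,284.53 USD
MTD return: +1.2%
YTD return: -0.8%
"""

PATTERNS = {
    "nav": r"NAV per share:\s*([\d,]+\.\d+)",
    "mtd": r"MTD return:\s*([+-]?\d+(?:\.\d+)?)%",
    "ytd": r"YTD return:\s*([+-]?\d+(?:\.\d+)?)%",
}

def extract_fields(text: str) -> dict:
    """Pull labelled numeric fields out of fact-sheet text with regexes."""
    out = {}
    for key, pattern in PATTERNS.items():
        match = re.search(pattern, text)
        if match:
            out[key] = float(match.group(1).replace(",", ""))
    return out

fields = extract_fields(page_text)  # {'nav': 1284.53, 'mtd': 1.2, 'ytd': -0.8}
```

Brittle, because every manager formats their fact sheet differently, which is exactly why this is a real product opportunity rather than a weekend script.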


This stack changes. The principle that doesn’t: use AI to do more of the work, not to validate whatever you already think.