AI as Infrastructure: Running a Lean Office at Institutional Quality
Practical notes on how AI tools actually get used in a working family office — and where they still fall short.
I’ve been using AI tools seriously in investment operations for about two years. Here is what I’ve actually found useful, what I’ve stopped using, and where the limitations are still real.
What Works
Research synthesis is where AI earns its place. Digesting earnings transcripts, regulatory filings, academic papers, and macro commentary at speed is genuinely transformative. Not because the AI’s conclusions are trustworthy — they often aren’t — but because it collapses the time required to form a first-pass view.
Draft generation helps with investment memos, board summaries, and manager communications. The drafts are never finished products. But a good draft is faster to edit than a blank page is to fill.
Data structuring — turning unstructured text (manager letters, central bank minutes, news flow) into structured formats for analysis. This is underutilised and genuinely powerful.
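A minimal sketch of what that structuring can look like in practice. The letter excerpt and the pattern are illustrative assumptions, not taken from any real manager letter; the point is converting free text into rows you can aggregate:

```python
import re

# Hypothetical manager-letter excerpt. Real inputs would be PDFs or emails.
letter = """
Top contributors: NVDA +4.2%, MSFT +1.1%.
Top detractors: PFE -0.8%.
"""

def extract_attributions(text):
    """Pull (ticker, contribution %) pairs out of free-form text."""
    pattern = re.compile(r"\b([A-Z]{2,5})\s([+-]\d+(?:\.\d+)?)%")
    return [(ticker, float(pct)) for ticker, pct in pattern.findall(text)]

rows = extract_attributions(letter)
# → [('NVDA', 4.2), ('MSFT', 1.1), ('PFE', -0.8)]
```

Once the text is in this shape, feeding it into a spreadsheet, a database, or a pandas DataFrame is trivial; the AI's role is generating and adapting extractors like this one as letter formats change.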
Coding — for data pipelines, portfolio analytics, and custom reporting. I am not a developer. I now write functional Python regularly.
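A representative example of the kind of analytics code this produces: a maximum-drawdown calculation with pandas. The return series is made up for illustration:

```python
import pandas as pd

def max_drawdown(returns: pd.Series) -> float:
    """Largest peak-to-trough decline along the cumulative return path."""
    wealth = (1 + returns).cumprod()   # growth of $1 invested
    peak = wealth.cummax()             # running high-water mark
    return float((wealth / peak - 1).min())

# Five hypothetical daily returns
rets = pd.Series([0.02, -0.05, 0.01, 0.03, -0.02])
dd = max_drawdown(rets)  # ≈ -0.05 (the -5% day off the prior peak)
```

Nothing here is sophisticated, and that is the point: short, verifiable utilities like this are exactly what a non-developer can reliably prompt for, read, and check.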
What Doesn’t Work
Specific market calls — AI has no edge here and the confident-sounding wrongness is dangerous.
Manager assessment — judgment about people, track records, and organisational dynamics requires human pattern recognition that current models don’t replicate.
Real-time data — the models lag. Build your information architecture around primary sources.
The Stack I Use
This changes, but currently:
- Claude — primary research assistant and drafting
- Perplexity — rapid factual synthesis with citations
- Python + Pandas — portfolio analytics (Claude writes most of the code)
- Notion — structured knowledge base for investment theses
- Linear — task and project management
None of this is exotic. The edge is in how you use it, not which tools you have.
The Honest Caveat
AI augments judgment; it does not replace it. An analyst who uses AI badly is dangerous — confident and wrong at scale. The prerequisite is a robust framework that the tools serve, not one they substitute for.
If you don’t know what you’re looking for, AI will help you find more of nothing faster.