Key Concepts
This page explains the mental model behind FundAdmin AI so the rest of the documentation makes sense quickly.
FundAdmin AI is a Claude Code extension
FundAdmin AI is not a separate desktop app or server. It installs into Claude Code as:
- one top-level `/fund` orchestrator skill
- 58 task-oriented sub-skills across 14 categories
- 5 specialist markdown agent prompts used by the flagship LPA review flow
- bundled templates and helper scripts for report generation
- an MCP vault server for Obsidian integration (15 tools)
That means the normal usage pattern is:
- open a directory in your terminal
- start Claude Code with `claude`
- run `/fund ...` commands inside Claude Code
- review the generated markdown output in your working directory
The /fund namespace
All user-facing workflows live under a single slash-command prefix:
`/fund <command>`

Examples:
/fund review-lpa ./fund-iv-lpa.pdf
/fund benchmark ./fund-iv-lpa.pdf
/fund plain-english ./fund-iv-lpa.pdf
/fund report-pdf

Why this matters:
- command discovery stays centralized
- the docs can describe one consistent entry point
- new users only have to remember `/fund help`
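The single-prefix design can be pictured as one dispatch table keyed by subcommand name. This is an illustrative sketch, not FundAdmin AI's actual implementation; the handler names and return values are invented for the example.

```python
# Hypothetical sketch: every user-facing workflow routes through one /fund
# prefix, so discovery and help can live in a single dispatch table.
HANDLERS = {
    "review-lpa": lambda args: f"reviewing {args[0]}",
    "benchmark": lambda args: f"benchmarking {args[0]}",
    "help": lambda args: "available commands: " + ", ".join(sorted(HANDLERS)),
}

def dispatch(command_line: str) -> str:
    """Split '/fund <command> [args...]' and route to the matching handler."""
    parts = command_line.split()
    if not parts or parts[0] != "/fund":
        raise ValueError("commands live under the /fund prefix")
    name, args = parts[1], parts[2:]
    return HANDLERS[name](args)

print(dispatch("/fund review-lpa ./fund-iv-lpa.pdf"))  # reviewing ./fund-iv-lpa.pdf
```

Because everything hangs off one table, `help` can enumerate the full command surface for free.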
Skills vs agents
These two ideas are related but not the same.
Skills
A skill is the unit of workflow behavior. Each skill tells Claude Code how to perform a specific task such as:
- reviewing an LPA
- drafting a capital call notice
- checking compliance
- generating a PDF report
FundAdmin AI ships 58 of these task-specific skills across 14 categories.
Agents
An agent is a specialist perspective used inside a workflow.
The clearest example is /fund review-lpa, which fans the review out across five focused agents:
| Agent | Focus |
|---|---|
| Terms & Provisions | Key clauses, definitions, and economic mechanics |
| Risk Assessment | Severity, exposure, and GP-/LP-favorability issues |
| Regulatory Compliance | FATCA/CRS/AML/ERISA/AIFMD/SFDR-style concerns |
| Obligations & Timeline | Deadlines, notice periods, reporting duties, trigger events |
| Recommendations | Negotiation guidance and suggested next actions |
A helpful way to think about it:
- skills define the workflow
- agents provide depth inside the workflow
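The skill/agent split is a fan-out pattern: one workflow hands the same document to several focused perspectives and merges their findings. The sketch below is a hypothetical illustration of that shape only; the prompts are paraphrased from the table above and `run_agent` stands in for a real model call.

```python
# Hypothetical fan-out sketch: one skill (the workflow) delegates the same
# document to five agent perspectives, then collects findings per agent.
AGENT_PROMPTS = {
    "terms": "Key clauses, definitions, and economic mechanics.",
    "risk": "Severity, exposure, and GP-/LP-favorability issues.",
    "compliance": "FATCA/CRS/AML/ERISA/AIFMD/SFDR-style concerns.",
    "obligations": "Deadlines, notice periods, reporting duties, trigger events.",
    "recommendations": "Negotiation guidance and suggested next actions.",
}

def run_agent(name: str, prompt: str, document: str) -> str:
    # Stand-in for an actual model call made with the agent's prompt.
    return f"{name}: reviewed {len(document)} characters"

def review(document: str) -> dict:
    """Run every agent perspective over the same document and collect results."""
    return {
        name: run_agent(name, prompt, document)
        for name, prompt in AGENT_PROMPTS.items()
    }
```

The key property is that the workflow (skill) owns sequencing and merging, while each agent only owns its slice of the analysis.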
The 59-command surface
The command set is organized into 14 categories.
| Category | Command count | Examples |
|---|---|---|
| Document Review | 12 | review-lpa, review-ppm, review-capital-call, review-k1 |
| Document Generation | 4 | draft-capital-call, draft-distribution |
| Compliance & Regulatory | 3 | check-compliance, kyc-review |
| Analysis & Reporting | 3 | audit-support, waterfall-review, report-pdf |
| Negotiation & Strategy | 4 | benchmark, plain-english, lpa-negotiate |
| Tracking & Monitoring | 4 | mfn-tracker, obligation-tracker |
| Analytics & Modeling | 4 | performance-calc, scenario-model |
| Investor & Operations | 4 | investor-onboard, ddq-respond |
| Portfolio Intelligence | 3 | portfolio-dashboard, reconcile, review-deed |
| Investor Compliance & Onboarding | 6 | investor-classify, fatca-crs-classify, aml-screen |
| Document Intake | 2 | ingest, ingest-email |
| Knowledge Management | 2 | vault-init, vault-sync |
| Advanced | 4 | multi-doc-analyze, amendment-draft |
| ESG, Tax & Specialized | 4 | esg-report, tax-review, wire-verify |
Use the Command Reference when you want the full command-by-command lookup.
The flagship workflow: /fund review-lpa
If you only remember one command, remember this one:
`/fund review-lpa <file>`

It is the strongest entry point because it shows the whole product loop:
- ingest a fund document
- split the review across specialist perspectives
- generate a structured markdown report
- score the document
- tee up the next action, such as benchmarking, negotiation, or reporting
Typical outputs from a successful review include:
- executive summary
- LPA Safety Score
- prioritized findings
- provision-level notes
- next-step recommendations
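One way to picture the "score the document" step is a roll-up from prioritized findings. The docs do not state the real LPA Safety Score formula, so the penalty weights and the 100-point scale below are assumptions made purely for illustration.

```python
# Hypothetical scoring roll-up: start from 100 and subtract a penalty per
# finding by severity. The weights are invented, not FundAdmin AI's formula.
SEVERITY_PENALTY = {"high": 15, "medium": 5, "low": 1}

def safety_score(findings: list) -> int:
    """Aggregate prioritized findings into a single 0-100 score."""
    penalty = sum(SEVERITY_PENALTY[f["severity"]] for f in findings)
    return max(0, 100 - penalty)

findings = [
    {"clause": "GP removal", "severity": "high"},
    {"clause": "Key person", "severity": "medium"},
]
print(safety_score(findings))  # 80
```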
Input model
FundAdmin AI is document-oriented. Most workflows begin with one of these inputs:
- a local file path
- pasted document text
- structured user-provided data for drafting or calculations
For early testing, a plain text file is enough. For higher-value runs, use the real LPA, side letter, subscription package, or other fund document you actually care about.
Output model
The default output surface is structured markdown written into your current working directory.
That matters because markdown is:
- easy to inspect
- easy to diff or version
- easy to reuse as input for follow-on steps
- easy to turn into a PDF report later
Many of the product promises in the docs should be understood through that lens: markdown first, polished export second.
PDF reports
/fund report-pdf is the packaging step, not the starting point.
Use it after you already have structured analysis you want to hand to someone else.
The PDF path depends on:
- Python 3.8+
- the bundled report-generation scripts
- the `reportlab` dependency
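The two environment requirements above can be checked up front. This is a minimal preflight sketch assuming only what the docs state (Python 3.8+ and the `reportlab` package); it is not part of the bundled scripts.

```python
# Preflight check for the PDF path: Python 3.8+ and reportlab installed.
# find_spec() looks the package up without importing it.
import importlib.util
import sys

def pdf_prerequisites_ok() -> bool:
    """True when both documented requirements for report-pdf are met."""
    python_ok = sys.version_info >= (3, 8)
    reportlab_ok = importlib.util.find_spec("reportlab") is not None
    return python_ok and reportlab_ok
```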
So the common journey is:
review → analyze / benchmark / summarize → report-pdf

Trust model and limits
FundAdmin AI is designed to accelerate fund administration work, but it is still an AI-assisted system.
Use it to:
- surface issues faster
- structure first drafts
- standardize analysis and packaging
- make next actions more obvious
Do not treat it as a substitute for:
- legal advice
- tax advice
- investment advice
- jurisdiction-specific regulatory sign-off
Portfolio Intelligence
/fund portfolio-dashboard, /fund reconcile, and /fund review-deed provide real-time visibility across your portfolio. The dashboard aggregates NAV, commitments, and IRR by fund. Reconciliation flags discrepancies between GP statements and your records. Deed review extracts key terms from property transfer documents.
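The reconciliation idea reduces to comparing two sets of figures and flagging mismatches. The sketch below is illustrative only; the field names, tolerance, and data shapes are assumptions, not the product's actual logic.

```python
# Hypothetical reconciliation sketch: flag funds whose NAV differs between
# GP statements and internal records by more than a tolerance.
def reconcile(gp_statements: dict, our_records: dict, tolerance: float = 0.01) -> list:
    """Return (fund, gp_nav, our_nav) tuples for every discrepancy."""
    discrepancies = []
    for fund, gp_nav in gp_statements.items():
        our_nav = our_records.get(fund)
        if our_nav is None or abs(gp_nav - our_nav) > tolerance:
            discrepancies.append((fund, gp_nav, our_nav))
    return discrepancies

gp = {"Fund IV": 102_500_000.0, "Fund V": 88_000_000.0}
ours = {"Fund IV": 102_500_000.0, "Fund V": 87_250_000.0}
print(reconcile(gp, ours))  # [('Fund V', 88000000.0, 87250000.0)]
```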
Document Intake pipeline
/fund ingest <file> and /fund ingest-email are the entry points for automated document processing.
The intake flow:
- classify the document type (LPA, capital call, K-1, etc.)
- extract structured metadata
- write a review stub to the Obsidian vault
- add a Kanban card in the `new` column
Drop a PDF into your inbox directory and the pipeline classifies, routes, and tracks it without manual intervention.
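The first two intake steps, classification and metadata extraction, can be sketched as simple text rules. The keyword table and date pattern below are toy assumptions for illustration; the real pipeline's classifier is not documented here.

```python
# Toy sketch of intake steps 1-2: classify by keyword, then pull a due date
# as a minimal metadata example. Rules here are invented, not the product's.
import re

KEYWORDS = {
    "capital call": "capital-call",
    "limited partnership agreement": "lpa",
    "schedule k-1": "k1",
}

def classify(text: str) -> str:
    lowered = text.lower()
    for phrase, doc_type in KEYWORDS.items():
        if phrase in lowered:
            return doc_type
    return "unknown"

def extract_metadata(text: str) -> dict:
    """Classify the document and pull a due date if one is present."""
    match = re.search(r"due (?:on|by) (\d{4}-\d{2}-\d{2})", text, re.IGNORECASE)
    return {"doc_type": classify(text), "due_date": match.group(1) if match else None}

notice = "Capital Call Notice: $2,000,000 due by 2025-07-15."
print(extract_metadata(notice))  # {'doc_type': 'capital-call', 'due_date': '2025-07-15'}
```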
Obsidian Knowledge Base
/fund vault-init creates a local Obsidian vault at ~/FundAdmin-AI-Vault/ pre-configured with:
- 7 Dataview-powered dashboards (funds, investors, reviews, compliance, pipeline)
- 9 Kanban boards for review lifecycle management
- 7 calendar templates (daily, weekly, monthly, quarterly)
- 3 Metadata Menu FileClasses (Fund, Investor, Review)
- 12 review subfolders
/fund vault-sync pushes review outputs from your working directory into the vault with YAML frontmatter, tags, and backlinks automatically applied.
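Mechanically, "YAML frontmatter, tags, and backlinks automatically applied" means wrapping each review file before it lands in the vault. This sketch shows the shape of that transformation; the exact fields FundAdmin AI writes are assumptions.

```python
# Hypothetical sketch of vault-sync's wrapping step: prepend YAML frontmatter
# and an Obsidian-style [[backlink]] to a review body. Field names assumed.
def with_frontmatter(body: str, fund: str, tags: list) -> str:
    """Wrap a markdown review body for the vault."""
    lines = [
        "---",
        f"fund: {fund}",
        "tags: [" + ", ".join(tags) + "]",
        "---",
        "",
        f"Related: [[{fund}]]",  # backlink to the fund note
        "",
        body,
    ]
    return "\n".join(lines)

note = with_frontmatter("## Executive summary\n...", "Fund IV", ["review", "lpa"])
print(note.splitlines()[1])  # fund: Fund IV
```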
MCP Server
The FundAdmin AI MCP vault server (mcp-vault-server/) exposes 15 tools for real-time Obsidian vault integration. Install with:
claude mcp add fundadmin-vault -s user node mcp-vault-server/index.js

Key MCP tools include vault_write_review, vault_add_kanban_card, vault_search, vault_read_fund, vault_create_investor, vault_read_investor, vault_list_funds, and vault_query_tracker.
Review Lifecycle
Every review in the vault moves through a defined lifecycle:
new → under-review → action-required → completed → archived

The Kanban boards surface each stage. MCP tools like vault_move_kanban_card and vault_archive_reviews manage transitions programmatically.
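The lifecycle above is a linear state machine, which can be sketched in a few lines. The stage names come from the docs; the enforcement logic (strictly forward, archived terminal) is an assumption for illustration.

```python
# Sketch of the review lifecycle as a linear state machine. Stage names are
# from the docs; the strictly-forward rule is assumed for this example.
STAGES = ["new", "under-review", "action-required", "completed", "archived"]

def advance(stage: str) -> str:
    """Move a review to the next lifecycle stage; archived is terminal."""
    index = STAGES.index(stage)
    if index == len(STAGES) - 1:
        raise ValueError("archived reviews cannot advance")
    return STAGES[index + 1]

print(advance("new"))  # under-review
```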
Suggested mental model
If you are new, keep this compact frame in mind:
- Claude Code is the runtime
- `/fund` is the command surface
- skills define workflows
- agents deepen the flagship review
- markdown is the default artifact
- Obsidian vault is the knowledge base
- MCP server connects vault to Claude in real time
- PDF is the presentation layer
What to read next
- Getting Started — install, verify, update, uninstall
- Quick Start Tutorial — a runnable first-use walkthrough
- Command Reference — the full command catalog
- PDF Reports — more on the reporting path