Key Concepts

This page explains the mental model behind FundAdmin AI so the rest of the documentation makes sense quickly.

FundAdmin AI is a Claude Code extension

FundAdmin AI is not a separate desktop app or server. It installs into Claude Code as:

  • one top-level /fund orchestrator skill
  • 58 task-oriented sub-skills across 14 categories
  • 5 specialist markdown agent prompts used by the flagship LPA review flow
  • bundled templates and helper scripts for report generation
  • an MCP vault server for Obsidian integration (15 tools)

That means the normal usage pattern is:

  1. open a directory in your terminal
  2. start Claude Code with claude
  3. run /fund ... commands inside Claude Code
  4. review the generated markdown output in your working directory

The /fund namespace

All user-facing workflows live under a single slash-command prefix:

```text
/fund <command>
```

Examples:

```text
/fund review-lpa ./fund-iv-lpa.pdf
/fund benchmark ./fund-iv-lpa.pdf
/fund plain-english ./fund-iv-lpa.pdf
/fund report-pdf
```

Why this matters:

  • command discovery stays centralized
  • the docs can describe one consistent entry point
  • new users only have to remember /fund help
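The single-prefix design can be pictured as a small dispatcher. The sketch below is purely illustrative — the real routing lives inside the /fund orchestrator skill, and the handler bodies here are hypothetical:

```python
# Illustrative only: a minimal dispatcher mirroring how one /fund prefix
# routes to many sub-commands. Handler bodies are hypothetical stand-ins.
HANDLERS = {
    "review-lpa": lambda args: f"reviewing {args[0]}",
    "benchmark": lambda args: f"benchmarking {args[0]}",
    "help": lambda args: "available commands: " + ", ".join(sorted(HANDLERS)),
}

def fund(command_line: str) -> str:
    """Route '/fund <command> [args...]' to its handler."""
    parts = command_line.split()
    if not parts or parts[0] != "/fund":
        raise ValueError("commands live under the /fund prefix")
    name, args = parts[1], parts[2:]
    if name not in HANDLERS:
        return "unknown command; try /fund help"
    return HANDLERS[name](args)
```

Because every command funnels through one entry point, discovery (`/fund help`) and documentation stay centralized by construction.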

Skills vs agents

These two ideas are related but not the same.

Skills

A skill is the unit of workflow behavior. Each skill tells Claude Code how to perform a specific task such as:

  • reviewing an LPA
  • drafting a capital call notice
  • checking compliance
  • generating a PDF report

FundAdmin AI ships 58 of these task-specific skills across 14 categories.

Agents

An agent is a specialist perspective used inside a workflow.

The clearest example is /fund review-lpa, which fans the review out across five focused agents:

| Agent | Focus |
| --- | --- |
| Terms & Provisions | Key clauses, definitions, and economic mechanics |
| Risk Assessment | Severity, exposure, and GP-/LP-favorability issues |
| Regulatory Compliance | FATCA/CRS/AML/ERISA/AIFMD/SFDR-style concerns |
| Obligations & Timeline | Deadlines, notice periods, reporting duties, trigger events |
| Recommendations | Negotiation guidance and suggested next actions |

A helpful way to think about it:

  • skills define the workflow
  • agents provide depth inside the workflow

The 59-command surface

The command set is organized into 14 categories.

| Category | Command count | Examples |
| --- | --- | --- |
| Document Review | 12 | review-lpa, review-ppm, review-capital-call, review-k1 |
| Document Generation | 4 | draft-capital-call, draft-distribution |
| Compliance & Regulatory | 3 | check-compliance, kyc-review |
| Analysis & Reporting | 3 | audit-support, waterfall-review, report-pdf |
| Negotiation & Strategy | 4 | benchmark, plain-english, lpa-negotiate |
| Tracking & Monitoring | 4 | mfn-tracker, obligation-tracker |
| Analytics & Modeling | 4 | performance-calc, scenario-model |
| Investor & Operations | 4 | investor-onboard, ddq-respond |
| Portfolio Intelligence | 3 | portfolio-dashboard, reconcile, review-deed |
| Investor Compliance & Onboarding | 6 | investor-classify, fatca-crs-classify, aml-screen |
| Document Intake | 2 | ingest, ingest-email |
| Knowledge Management | 2 | vault-init, vault-sync |
| Advanced | 4 | multi-doc-analyze, amendment-draft |
| ESG, Tax & Specialized | 4 | esg-report, tax-review, wire-verify |

Use the Command Reference when you want the full command-by-command lookup.

The flagship workflow: /fund review-lpa

If you only remember one command, remember this one:

```text
/fund review-lpa <file>
```

It is the strongest entry point because it shows the whole product loop:

  1. ingest a fund document
  2. split the review across specialist perspectives
  3. generate a structured markdown report
  4. score the document
  5. tee up the next action, such as benchmarking, negotiation, or reporting

Typical outputs from a successful review include:

  • executive summary
  • LPA Safety Score
  • prioritized findings
  • provision-level notes
  • next-step recommendations
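The fan-out-then-aggregate shape of the review can be sketched as follows. The agent names come from the table above, but the section contents and stitching logic are invented for illustration — the real flow sends the document to Claude with five specialist prompts:

```python
# Sketch of the review-lpa fan-out: each specialist agent contributes a
# section, and the results are stitched into one markdown report.
# The section bodies here are illustrative stubs, not product logic.
AGENTS = [
    "Terms & Provisions",
    "Risk Assessment",
    "Regulatory Compliance",
    "Obligations & Timeline",
    "Recommendations",
]

def run_agent(agent: str, document: str) -> str:
    # Placeholder: in the real workflow this is a Claude call with a
    # specialist prompt; here we just return a stub section.
    return f"## {agent}\n\n(findings for {document})\n"

def review_lpa(document: str) -> str:
    sections = [run_agent(agent, document) for agent in AGENTS]
    return "# LPA Review\n\n" + "\n".join(sections)
```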

Input model

FundAdmin AI is document-oriented. Most workflows begin with one of these inputs:

  • a local file path
  • pasted document text
  • structured user-provided data for drafting or calculations

For early testing, a plain text file is enough. For higher-value runs, use the real LPA, side letter, subscription package, or other fund document you actually care about.

Output model

The default output surface is structured markdown written into your current working directory.

That matters because markdown is:

  • easy to inspect
  • easy to diff or version
  • easy to reuse as input for follow-on steps
  • easy to turn into a PDF report later

Many of the product promises in the docs should be understood through that lens: markdown first, polished export second.

PDF reports

/fund report-pdf is the packaging step, not the starting point.

Use it after you already have structured analysis you want to hand to someone else.

The PDF path depends on:

  • Python 3.8+
  • the bundled report-generation scripts
  • the reportlab dependency

So the common journey is:

```text
review → analyze / benchmark / summarize → report-pdf
```
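Since the PDF path has hard prerequisites, a quick preflight check can save a failed run. This check is a sketch of my own, not one of the bundled scripts:

```python
import importlib.util
import sys

def pdf_preflight() -> list:
    """Return the report-pdf prerequisites missing from this environment.

    Illustrative check, not a bundled FundAdmin AI script.
    """
    missing = []
    if sys.version_info < (3, 8):
        missing.append("Python 3.8+")
    if importlib.util.find_spec("reportlab") is None:
        missing.append("reportlab (pip install reportlab)")
    return missing
```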

Trust model and limits

FundAdmin AI is designed to accelerate fund administration work, but it is still an AI-assisted system.

Use it to:

  • surface issues faster
  • structure first drafts
  • standardize analysis and packaging
  • make next actions more obvious

Do not treat it as a substitute for:

  • legal advice
  • tax advice
  • investment advice
  • jurisdiction-specific regulatory sign-off

Portfolio Intelligence

/fund portfolio-dashboard, /fund reconcile, and /fund review-deed provide real-time visibility across your portfolio. The dashboard aggregates NAV, commitments, and IRR by fund. Reconciliation flags discrepancies between GP statements and your records. Deed review extracts key terms from property transfer documents.
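The reconciliation idea — flag where GP statements and your records disagree — reduces to a field-by-field comparison. A simplified sketch, where the field names and tolerance are hypothetical:

```python
def reconcile(gp_statement: dict, internal_records: dict,
              tolerance: float = 0.01) -> dict:
    """Flag fields where the GP statement and internal records disagree.

    Simplified illustration; field names (nav, commitments, ...) are
    hypothetical, and real reconciliation covers far more detail.
    """
    discrepancies = {}
    for field in gp_statement.keys() | internal_records.keys():
        gp_value = gp_statement.get(field)
        our_value = internal_records.get(field)
        if gp_value is None or our_value is None:
            discrepancies[field] = ("missing", gp_value, our_value)
        elif abs(gp_value - our_value) > tolerance:
            discrepancies[field] = ("mismatch", gp_value, our_value)
    return discrepancies
```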

Document Intake pipeline

/fund ingest <file> and /fund ingest-email are the entry points for automated document processing.

The intake flow:

  1. classify the document type (LPA, capital call, K-1, etc.)
  2. extract structured metadata
  3. write a review stub to the Obsidian vault
  4. add a Kanban card in the new column

Drop a PDF into your inbox directory and the pipeline classifies, routes, and tracks it without manual intervention.
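The four intake steps can be sketched end-to-end. The keyword rules and stub format below are illustrative stand-ins for the pipeline's real classifier and vault layout:

```python
import pathlib

# Naive keyword classifier standing in for the real document classifier.
KEYWORDS = {
    "limited partnership agreement": "LPA",
    "capital call": "capital-call",
    "schedule k-1": "K-1",
}

def classify(text: str) -> str:
    lowered = text.lower()
    for needle, doc_type in KEYWORDS.items():
        if needle in lowered:
            return doc_type
    return "unknown"

def ingest(path: pathlib.Path, vault_dir: pathlib.Path) -> pathlib.Path:
    """Classify a document and write a review stub with YAML frontmatter."""
    doc_type = classify(path.read_text())
    stub = vault_dir / f"{path.stem}-review.md"
    stub.write_text(
        f"---\ntype: {doc_type}\nsource: {path.name}\nstatus: new\n---\n\n"
        "# Review stub\n"
    )
    return stub
```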

Obsidian Knowledge Base

/fund vault-init creates a local Obsidian vault at ~/FundAdmin-AI-Vault/ pre-configured with:

  • 7 Dataview-powered dashboards (funds, investors, reviews, compliance, pipeline)
  • 9 Kanban boards for review lifecycle management
  • 7 calendar templates (daily, weekly, monthly, quarterly)
  • 3 Metadata Menu FileClasses (Fund, Investor, Review)
  • 12 review subfolders

/fund vault-sync pushes review outputs from your working directory into the vault with YAML frontmatter, tags, and backlinks automatically applied.
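The sync step amounts to wrapping each output file in vault metadata. A sketch of applying frontmatter, tags, and a backlink — the exact keys the real vault-sync writes may differ:

```python
def add_frontmatter(markdown: str, fund: str, tags: list) -> str:
    """Prefix a review file with YAML frontmatter and an Obsidian backlink.

    Illustrative: the real vault-sync's frontmatter keys may differ.
    """
    if markdown.startswith("---\n"):
        return markdown  # already carries frontmatter; skip
    frontmatter = (
        "---\n"
        f"fund: {fund}\n"
        f"tags: [{', '.join(tags)}]\n"
        "---\n\n"
        f"Related: [[{fund}]]\n\n"
    )
    return frontmatter + markdown
```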

MCP Server

The FundAdmin AI MCP vault server (mcp-vault-server/) exposes 15 tools for real-time Obsidian vault integration. Install with:

```bash
claude mcp add fundadmin-vault -s user node mcp-vault-server/index.js
```

Key MCP tools include vault_write_review, vault_add_kanban_card, vault_search, vault_read_fund, vault_create_investor, vault_read_investor, vault_list_funds, and vault_query_tracker.
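Conceptually, a tool like vault_search is a scoped text search over the vault's markdown notes. This sketch shows the idea only — it is not the MCP server's actual implementation, which is JavaScript and exposed over MCP rather than called directly:

```python
import pathlib

def vault_search(vault_dir: pathlib.Path, query: str) -> list:
    """Return (filename, line) pairs matching the query across vault notes.

    Conceptual stand-in for the MCP vault_search tool, not its real code.
    """
    hits = []
    for note in sorted(vault_dir.rglob("*.md")):
        for line in note.read_text().splitlines():
            if query.lower() in line.lower():
                hits.append((note.name, line.strip()))
    return hits
```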

Review Lifecycle

Every review in the vault moves through a defined lifecycle:

new → under-review → action-required → completed → archived

The Kanban boards surface each stage. MCP tools like vault_move_kanban_card and vault_archive_reviews manage transitions programmatically.
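The lifecycle above is a strict linear progression, so transitions can be modeled as a simple state machine. The guard below is illustrative — the real tools manage Kanban cards, not Python state:

```python
# Lifecycle stages from the docs, in order; "archived" is terminal.
STAGES = ["new", "under-review", "action-required", "completed", "archived"]

def advance(current: str) -> str:
    """Move a review to the next lifecycle stage, refusing invalid jumps."""
    index = STAGES.index(current)  # raises ValueError on unknown stages
    if index == len(STAGES) - 1:
        raise ValueError("archived is terminal")
    return STAGES[index + 1]
```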

Suggested mental model

If you are new, keep this compact frame in mind:

  • Claude Code is the runtime
  • /fund is the command surface
  • skills define workflows
  • agents deepen the flagship review
  • markdown is the default artifact
  • Obsidian vault is the knowledge base
  • MCP server connects vault to Claude in real time
  • PDF is the presentation layer

This tool does not provide financial, legal, or tax advice.