# Contributing Guide
FundAdmin AI is designed to be extended. This guide covers how to add new skills, agents, and templates, and how to submit contributions through the pull request process.
## Adding a New Skill
Skills are the primary extension point. Each skill handles one `/fund` command.
### Step-by-Step

#### 1. Create the skill file

Create a new directory and SKILL.md file:

```text
skills/fund-<name>/SKILL.md
```

For example, to add a `/fund valuations` command:
```text
skills/fund-valuations/SKILL.md
```

#### 2. Follow the skill anatomy

Your SKILL.md must include all standard sections:

```markdown
---
name: fund-valuations
description: Portfolio valuation review with ASC 820 fair value hierarchy analysis
command: /fund valuations
---

# Fund Valuations Review

## Role

You are a senior fund administrator specializing in portfolio valuations...

## When This Skill Is Invoked

This skill is activated when the user runs `/fund valuations <file>`...

## Processing Workflow

### Phase 1: Ingestion

- Read the input document
- Classify document type and fund type
- Extract metadata

### Phase 2: Analysis

- [Your skill-specific analysis steps]
- Assign risk ratings to findings

### Final Phase: Output

- Format using output specification below
- Apply file naming convention
- Include disclaimer

## Input Handling

[Accepted input types and how each is processed]

## Output Format

[Section headers, tables, required fields]

## File Naming

`VALUATIONS-[fund]-[YYYY-MM-DD].md`

## Error Handling

| Scenario | Handling |
|----------|---------|
| No input provided | Prompt the user for the required input |
| [Additional scenarios] | [Handling instructions] |

## Disclaimer

This analysis is generated by AI and is intended for informational purposes only...
```

#### 3. Add the command to the orchestrator
Edit `fund/SKILL.md` and add your new command to the routing table:

```markdown
| valuations | fund-valuations | Portfolio valuation review with ASC 820 analysis |
```

#### 4. Add the skill name to install.sh
Open `install.sh` and add the skill to the `SKILLS` array (currently 59 entries):

```bash
SKILLS=(
  ...
  "fund-valuations"
)
```

#### 5. Add the skill name to uninstall.sh
Open `uninstall.sh` and add the same entry to its `SKILLS` array:

```bash
SKILLS=(
  ...
  "fund-valuations"
)
```
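Because the two `SKILLS` arrays must stay identical, it can help to compare them mechanically before opening a PR. A minimal sketch (the `parse_array` helper is illustrative, not part of the repo; it assumes the simple quoted-entry array format shown above):

```python
import re

def parse_array(script_text, array_name):
    """Extract quoted entries from a shell array like SKILLS=( "a" "b" )."""
    match = re.search(array_name + r"\s*=\s*\((.*?)\)", script_text, re.DOTALL)
    return re.findall(r'"([^"]+)"', match.group(1)) if match else []

# Hypothetical usage from the repo root:
# install = parse_array(open("install.sh").read(), "SKILLS")
# uninstall = parse_array(open("uninstall.sh").read(), "SKILLS")
# assert install == uninstall, "install.sh and uninstall.sh SKILLS arrays differ"
```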
#### 6. Update README.md

Add the new command to the README command table with its category, description, and usage example.
#### 7. Test with sample documents
Run your new command against a real or sample document:

```text
/fund valuations sample-portfolio-report.pdf
```

Verify that:
- The command is recognized by the orchestrator
- The skill loads and processes the document
- Output follows the specified format
- Risk ratings are applied consistently
- The disclaimer is present
- The output file is named correctly
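Two of these checks, the naming convention and the disclaimer, are easy to automate. A rough sketch (the `check_output` helper and the exact regex are illustrative assumptions, not project code):

```python
import re

# TYPE-[identifier]-[YYYY-MM-DD].md, per the File Naming convention
NAME_PATTERN = re.compile(r"^[A-Z][A-Z-]*-[a-z0-9-]+-\d{4}-\d{2}-\d{2}\.md$")
DISCLAIMER_SNIPPET = "generated by AI"

def check_output(filename, body):
    """Return a list of problems found in a generated output file."""
    problems = []
    if not NAME_PATTERN.match(filename):
        problems.append("filename does not follow TYPE-[identifier]-[YYYY-MM-DD].md")
    if DISCLAIMER_SNIPPET not in body:
        problems.append("standard disclaimer is missing")
    return problems
```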
## Adding a New Agent
Agents are used exclusively by the flagship `/fund review-lpa` command. Add a new agent only if LPA review needs an additional analytical perspective.
### Step-by-Step

#### 1. Create the agent file

```text
agents/fund-<name>.md
```

For example:

```text
agents/fund-esg-factors.md
```

#### 2. Define the agent structure
Your agent file must include:
```markdown
# Fund ESG Factors Agent

**Weight in LPA Safety Score**: [X]%
**Agent ID**: fund-esg-factors

## Mission

[What this agent analyzes and why]

## Taxonomy

[Categories and subcategories the agent evaluates]

## Analysis Process

1. Extract all relevant provisions from the document
2. Classify each provision against the taxonomy
3. Compare against applicable standards
4. Assign risk ratings (high / medium / low)
5. Score each category
6. Produce structured findings

## Scoring Criteria

[How the agent calculates its score]

## Output Format

[Exact structure of agent output]

## Handoff Instructions

[How output feeds into the synthesis engine]

## Disclaimer

[Standard disclaimer]
```
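Since the synthesis engine depends on each agent file carrying every standard section, a quick structural check can catch omissions before testing. A sketch (the `missing_sections` helper is illustrative):

```python
# Section headers every agent file must contain, per the structure above.
REQUIRED_SECTIONS = [
    "## Mission", "## Taxonomy", "## Analysis Process",
    "## Scoring Criteria", "## Output Format",
    "## Handoff Instructions", "## Disclaimer",
]

def missing_sections(agent_markdown):
    """Return the required section headers absent from an agent file."""
    return [s for s in REQUIRED_SECTIONS if s not in agent_markdown]
```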
#### 3. Update the flagship skill

Edit `skills/fund-review-lpa/SKILL.md` to launch your new agent alongside the existing five. Update the synthesis step to incorporate the new agent's findings and adjust score weights so all agents sum to 100%.
#### 4. Add to install.sh

Add the agent to the `AGENTS` array in `install.sh`:
```bash
AGENTS=(
  "fund-terms"
  "fund-risks"
  "fund-compliance"
  "fund-obligations"
  "fund-recommendations"
  "fund-esg-factors"
)
```

### Weight Redistribution
Adding a new agent requires redistributing the score weights across all agents so they sum to 100%. Update both the agent files and the synthesis logic in the flagship skill.
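A guard like the following can run in contributor checks to catch a distribution that no longer sums to 100. The weights shown are illustrative placeholders, not the project's actual distribution:

```python
# Illustrative weights only -- use the real values from the agent files.
AGENT_WEIGHTS = {
    "fund-terms": 25,
    "fund-risks": 20,
    "fund-compliance": 20,
    "fund-obligations": 15,
    "fund-recommendations": 10,
    "fund-esg-factors": 10,  # the newly added agent
}

assert sum(AGENT_WEIGHTS.values()) == 100, "agent weights must sum to 100%"
```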
## Modifying Templates
Templates control output formatting. Changes affect every command that uses the template.
### Step-by-Step

#### 1. Edit the template file

```text
templates/<name>.md
```

#### 2. Maintain placeholder format
Keep the `[PLACEHOLDER]` format for dynamic content. Skills replace these with actual values during output generation.

```markdown
**Fund Name**: [FUND_NAME]
**Review Date**: [DATE]
```
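Mechanically, the substitution a skill performs can be modeled as a token replacement pass. A sketch (the `fill_template` helper is illustrative; unknown tokens are left intact so missing data stays visible in the output):

```python
import re

def fill_template(template, values):
    """Replace [PLACEHOLDER] tokens with values; leave unknown tokens intact."""
    return re.sub(r"\[([A-Z_]+)\]",
                  lambda m: values.get(m.group(1), m.group(0)),
                  template)

filled = fill_template("**Fund Name**: [FUND_NAME]",
                       {"FUND_NAME": "Greenfield Capital IV"})
# filled == "**Fund Name**: Greenfield Capital IV"
```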
#### 3. Preserve section headers

Skills reference section names to place content. Renaming or removing a section header may cause the skill to generate output incorrectly.
#### 4. Test with skills that reference the template
After modifying a template, run every command that uses it to verify output formatting:
```text
# If you modified lpa-review-template.md:
/fund review-lpa sample-lpa.pdf

# If you modified capital-call-template.md:
/fund draft-capital-call sample-data.json
```

## Code Style
Follow these conventions to keep the codebase consistent:
### Skill Processing Phases
All skills use multi-phase processing:
- **Phase 1: Ingestion** -- Read input, extract text, classify document type, gather metadata
- **Phase 2+: Analysis** -- Perform skill-specific analysis, apply frameworks, assign risk ratings
- **Final Phase: Output** -- Format results using template, apply naming convention, include disclaimer
### Disclaimers
Every output includes the standard fund administration disclaimer:
This analysis is generated by AI and is intended for informational purposes only. It does not constitute legal, financial, tax, or investment advice. All findings should be reviewed by qualified professionals before being relied upon for decision-making.
### Risk Indicators
Use three levels consistently across all outputs:
| Level | Meaning | Usage |
|---|---|---|
| High Risk | Critical issues, potential dealbreakers, immediate action needed | Aggressive fee structures, missing standard protections, regulatory gaps |
| Medium Risk | Concerning items, negotiation recommended | Above-market terms, vague provisions, partial compliance |
| Low Risk | Minor issues, standard provisions, generally acceptable | Slight deviations from market, cosmetic concerns, informational notes |
### File Naming
All generated files follow the convention:
```text
TYPE-[identifier]-[YYYY-MM-DD].md
```

- `TYPE` is uppercase (e.g., `LPA-REVIEW`, `CAPITAL-CALL`, `KYC-REVIEW`)
- Identifiers use lowercase with hyphens (e.g., `greenfield-capital-iv`)
- Dates use ISO format (`YYYY-MM-DD`)
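The convention can be expressed as a small builder that normalizes its inputs. A sketch (the `output_filename` helper is an illustrative assumption, not a project utility):

```python
import re

def output_filename(doc_type, identifier, date):
    """Build TYPE-[identifier]-[YYYY-MM-DD].md, normalizing case and separators."""
    doc_type = doc_type.upper().replace(" ", "-")
    identifier = re.sub(r"[^a-z0-9]+", "-", identifier.lower()).strip("-")
    assert re.fullmatch(r"\d{4}-\d{2}-\d{2}", date), "date must be ISO YYYY-MM-DD"
    return f"{doc_type}-{identifier}-{date}.md"

# output_filename("LPA-REVIEW", "Greenfield Capital IV", "2024-03-31")
# -> "LPA-REVIEW-greenfield-capital-iv-2024-03-31.md"
```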
### Markdown Formatting
- Use ATX-style headers (`#`, `##`, `###`)
- Tables use pipe-delimited format with header separators
- Code blocks use triple backticks with language identifiers
- Lists use `-` for unordered and `1.` for ordered
## MCP Server and Obsidian Integration
If your skill produces a review output that should be persisted in the Obsidian Vault, you need to be aware of the MCP Vault Server integration.
### MCP Vault Server
The MCP Vault Server (`mcp-vault-server/index.js`) exposes 15 tools via the Model Context Protocol (stdio transport). It writes to `~/FundAdmin-AI-Vault/`. Contributors do not need to modify the MCP server when adding a new skill; it operates independently of skill files.
To install the server for local testing:

```bash
claude mcp add fundadmin-vault -s user node mcp-vault-server/index.js
```

### Obsidian Vault
The vault is initialized by `/fund vault-init` and synced by `/fund vault-sync`. If your new skill generates a review-type output (anything that enters the review lifecycle), verify that:
- The output file uses YAML-compatible frontmatter fields (`fund`, `date`, `status`, `type`)
- The filename follows the `TYPE-[identifier]-[YYYY-MM-DD].md` convention
- The output can be passed to `vault_write_review` without modification
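The frontmatter requirement can be checked along these lines. The parser below is a naive sketch that only reads top-level `key:` lines, not full YAML, and the helper name is an assumption:

```python
REQUIRED_FIELDS = {"fund", "date", "status", "type"}

def frontmatter_fields(markdown_text):
    """Collect top-level keys from a leading YAML frontmatter block."""
    lines = markdown_text.splitlines()
    if not lines or lines[0].strip() != "---":
        return set()  # no frontmatter block at all
    fields = set()
    for line in lines[1:]:
        if line.strip() == "---":
            break  # end of the frontmatter block
        if ":" in line:
            fields.add(line.split(":", 1)[0].strip())
    return fields

# missing = REQUIRED_FIELDS - frontmatter_fields(open(path).read())
```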
### Review Lifecycle
New skills that produce review outputs should target the `new` lifecycle stage on creation. The review then moves through:

```text
new → under-review → action-required → completed → archived
```

Do not hard-code lifecycle stages other than `new` in skill output; transitions are managed by the MCP tools.
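For reference, the lifecycle can be modeled as an ordered list. The sketch below only encodes the ordering; actual transitions are performed by the MCP tools, not by skills:

```python
LIFECYCLE = ["new", "under-review", "action-required", "completed", "archived"]

def next_stage(stage):
    """Return the stage that follows `stage`, or None for the terminal stage."""
    i = LIFECYCLE.index(stage)
    return LIFECYCLE[i + 1] if i + 1 < len(LIFECYCLE) else None
```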
## Testing

### Test Suite
The test suite has 17 tests across 3 files, all passing:
```bash
python3 -m unittest discover -s tests -v
```

| Test file | What it covers |
|---|---|
| `test_generate_fund_pdf.py` | PDF generator CLI contract (error messages, argument parsing) |
| `test_fund_report_pdf_contracts.py` | Cross-skill consistency (filename families, placeholders, skill contracts) |
| `test_prototype_contract.py` | Prototype normalizer and PDF handoff command (requires Node.js) |
When adding a new skill, check `test_fund_report_pdf_contracts.py` for any filename family or placeholder conventions that apply to your skill type. Do not introduce stale filename families (e.g., `COMPLIANCE-CHECK-`, `DOCUMENT-COMPARISON-`, `TERMS-EXTRACTION-`).
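A simple scan can flag the retired prefixes in a new skill file before the test suite does. A sketch (the helper name is illustrative):

```python
# Retired filename families that must not appear in new skill files.
STALE_FAMILIES = ("COMPLIANCE-CHECK-", "DOCUMENT-COMPARISON-", "TERMS-EXTRACTION-")

def stale_families_used(skill_markdown):
    """Return any retired filename families referenced by a skill file."""
    return [family for family in STALE_FAMILIES if family in skill_markdown]
```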
### Generate Sample Data
FundAdmin AI includes a sample LPA generator for testing:
```bash
python3 generate_sample_lpa.py
```

This produces a sample LPA PDF with 10 intentional issues that the review system should catch:
- Aggressive carried interest (25% with no hurdle rate)
- Weak clawback provision
- Broad GP discretion on investment period extension
- No LPAC consent required for affiliate transactions
- Vague fee offset language
- Missing key person provision
- Overly broad confidentiality clause
- 120-day opt-out trap for LP transfers
- Free GP assignment rights (no LP consent)
- No excuse or exclusion rights
### Run the Flagship Review

```text
/fund review-lpa sample-lpa.pdf
```

Verify that the review:
- Identifies all 10 intentional issues
- Assigns appropriate risk ratings (most should be high risk)
- Calculates a low LPA Safety Score (the sample LPA is intentionally investor-unfriendly)
- Produces recommendations for each issue
- Formats output correctly using the LPA review template
### Test Other Commands
For document generation commands, use sample data files:
```text
/fund draft-capital-call sample-commitments.json
/fund draft-distribution sample-distribution-data.json
/fund draft-investor-letter quarterly
```

For analysis commands, use the sample LPA or real documents:
```text
/fund extract-terms sample-lpa.pdf
/fund benchmark sample-lpa.pdf
/fund plain-english sample-lpa.pdf
```

## Pull Request Process
### 1. Fork the Repository
Fork the FundAdmin AI repository to your own GitHub account.
### 2. Create a Feature Branch

```bash
git checkout -b feature/add-valuations-skill
```

Use descriptive branch names that indicate what the contribution does.
### 3. Make Changes
Follow the conventions and step-by-step guides above for adding skills, agents, or templates.
### 4. Test with Sample Documents
Run your changes against real or sample documents. Verify all output is correctly formatted and all existing commands still work.
### 5. Submit a Pull Request
Write a clear PR description covering:
- What was added or changed
- Why the change is needed
- Testing performed (commands run, documents tested, output verified)
- Breaking changes (if any existing behavior was modified)
### 6. Review Process
Maintainers will review the PR for:
- Adherence to skill anatomy and code style conventions
- Correctness of orchestrator routing table updates
- Presence of install.sh and uninstall.sh entries
- Output quality against sample documents
- No regressions in existing commands