MCP Server

fai implements the Model Context Protocol (MCP). Any MCP-compatible AI tool can connect to your workbench to read your project context, capture new knowledge, and run workspace intelligence tools.

How to start

The MCP server starts automatically on every fai session. No flags required.

fai                    MCP server on port 4967 + your AI launches
fai --mcp=stdio        stdio transport (for tools that require it; your AI does not launch)
fai --mcp=<port>       HTTP on a custom port
fai --share            HTTP + public relay tunnel (share with remote tools)
fai --preview          Opens MCP Inspector in your browser — see what your AI sees

To target a specific session, run fai --sessionId=<id>. If the target session was previously archived, it is unarchived automatically.

fai --share opens a 4-tier public relay: tunnel.fathym.com → localtunnel.me → cloudflared → localhost.run SSH. If all tiers fail, fai still runs locally.

Note: --mcp=stdio and --share cannot be combined.
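Whichever transport you choose, the handshake is standard MCP JSON-RPC. A minimal sketch of the initialize request a client sends on connect (the clientInfo values and protocol revision here are illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "initialize",
  "params": {
    "protocolVersion": "2025-03-26",
    "capabilities": {},
    "clientInfo": { "name": "example-client", "version": "0.1.0" }
  }
}
```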

Preview mode

fai --preview

Opens MCP Inspector pointed at your running fai MCP server. The inspector shows every tool your AI can call, every vault entry it reads, and the full project context fai has built from your sessions. You can browse entries, call tools yourself, and verify exactly what your AI sees.

The inspector process is managed by fai — it starts automatically and shuts down when your session ends.

Tools

Session tools

Tool                       What it does
fai_session_capture        Write one capture entry to the current session
fai_session_capture_batch  Write multiple captures in one call
fai_session_petname        Register a vocabulary term for the current project or personally
fai_agents_list            List detected local coding agents

Workspace tools

Tool                  What it does
fai_workspace_list    List files by glob pattern
fai_workspace_read    Read file content with line pagination
fai_workspace_search  Search file contents by regex or literal query
fai_workspace_diff    Git diff across workspace repos
fai_workspace_status  Git status across workspace repos

AI agent tools

Tool               What it does
fai_agent_consult  Ask a local agent a question about the workspace
fai_agent_plan     Generate a structured implementation plan
fai_agent_propose  Generate a code change proposal (unified diffs)
fai_agent_apply    Apply an approved proposal
fai_agent_run      Run an autonomous coding agent task
fai_agent_result   Check the status of a background fai_agent job

Docs

Tool          What it does
fai_read_doc  Read a @fathym/steward or @fathym/fai doc by URI

Run fai --preview to open MCP Inspector and see all active tools with their schemas.
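Tool calls follow the standard MCP tools/call shape. A hedged example invoking fai_workspace_search (the argument names query and glob are illustrative, not the tool's actual schema — run fai --preview to see the real one):

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "fai_workspace_search",
    "arguments": { "query": "TODO", "glob": "src/**/*.ts" }
  }
}
```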

Resources

Your workbench state is exposed as MCP resources:

  • FAI-SESSION.md — current session context
  • FAI-PROJECT.md — project-level accumulated knowledge
  • FAI-PERSONAL.md — personal preferences across all projects
  • Petnames — workbench and personal vocabulary registries
  • Steward docs — @fathym/steward framework reference (requires FAI_STEWARD_DOCS_PATH env)
  • fai docs — vault architecture, custom vault/workbench/mode guides
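Resources are fetched with the standard resources/read method. A sketch, assuming a hypothetical fai:// URI scheme (use resources/list or fai --preview to discover the actual URIs):

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "resources/read",
  "params": { "uri": "fai://session/FAI-SESSION.md" }
}
```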

Prompts

Six core prompts guide session workflow:

  • orient — orient yourself for a fai session; read all FAI context files
  • capture/guide — learn how to contribute back to fai (capture structure, batching)
  • workspace/guide — workspace intelligence tools and escalation workflow
  • fai/workspace — quick reference for file tools (list, read, search, diff, status)
  • fai/agents — quick reference for local AI agent tools
  • fai/edit-session — frame a coding session for AI-assisted code editing

Six holonic development prompts for building fai extensions:

  • holonic/create-capability / holonic/create-agent / holonic/create-workflow
  • holonic/audit / holonic/explain / holonic/compose-steward
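Prompts are retrieved with the standard MCP prompts/get method. For example, fetching orient (assuming it takes no arguments):

```json
{
  "jsonrpc": "2.0",
  "id": 4,
  "method": "prompts/get",
  "params": { "name": "orient" }
}
```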

Connecting Claude Code

Add to your MCP config (.claude/settings.json or similar):

{
  "mcpServers": {
    "fai": {
      "command": "fai",
      "args": ["--mcp=stdio"]
    }
  }
}

When Claude Code starts this MCP server, fai auto-joins the most recent session without prompting. To target a specific session, add "--sessionId=<id>" to the args array.

fai also writes CLAUDE.md directly when you run fai orient — Claude Code reads your full workbench context on every session start.

Connecting Cursor

fai writes .cursor/rules/fai-context.mdc with your current project context. Cursor reads this automatically. For live MCP connection, add to Cursor's MCP settings with command fai and args ["--mcp=stdio"].
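For reference, Cursor's MCP settings file (~/.cursor/mcp.json) uses the same mcpServers shape as Claude Code; a sketch:

```json
{
  "mcpServers": {
    "fai": {
      "command": "fai",
      "args": ["--mcp=stdio"]
    }
  }
}
```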

Connecting Windsurf

fai writes .windsurf/rules/fai-context.md with your current project context. For live MCP, configure Windsurf's MCP settings with command fai and args ["--mcp=stdio"].

Auto-configuration

When you run fai, it automatically writes MCP server configuration into each detected agent's native config file — no manual steps needed:

Agent              Config path (all user-level global)    Transport
Claude Code        ~/.claude/settings.json                stdio
Cursor             ~/.cursor/mcp.json                     http
Cline              ~/.cline/mcp.json                      http
Copilot (VS Code)  ~/.vscode/mcp.json                     http
Windsurf           ~/.codeium/windsurf/mcp_config.json    http
Codex CLI          ~/.codex/config.toml                   http
Gemini CLI         ~/.gemini/settings.json                http
JetBrains AI       ~/.junie/mcp/mcp.json                  http
Goose              ~/.config/goose/config.yaml            http
Antigravity        ~/.gemini/antigravity/mcp_config.json  http
OpenCode           ~/.opencode/config.json                http

All 15 fai agents receive context injection into their native rules files on every session start. The 11 listed here also get MCP server configuration written to user-level global paths. Claude uses type: 'stdio'; all others use type: 'http'.

The manual configuration examples above are for reference or to override defaults.
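For the http-transport agents, the written entry points at the local server rather than spawning a process. A sketch of the likely shape (the /mcp endpoint path is an assumption — check the generated file for the exact URL):

```json
{
  "mcpServers": {
    "fai": {
      "type": "http",
      "url": "http://localhost:4967/mcp"
    }
  }
}
```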

Skill files

fai mcp generate writes one .md skill file per MCP prompt:

fai mcp generate                        # writes to .claude/commands/ in CWD
fai mcp generate --output /path/to/dir  # custom output directory

Each prompt maps to a file by path: capture/guide becomes .claude/commands/capture/guide.md, and holonic/explain becomes .claude/commands/holonic/explain.md. Claude Code reads .claude/commands/ as slash commands.

fai generates skill files automatically on each session start for Claude Code.

Sharing your workbench

fai --share

Opens a public tunnel so remote AI tools (claude.ai, hosted Cursor, etc.) can connect. Your workbench URL is printed on startup. Context captured by remote tools flows back into your vault.

Governance & security

fai applies a 6-layer governance chain to every MCP tool call — tool exposure control, rate limiting, API key auth, scope enforcement, audit logging, and secret redaction from results. All layers are active with no configuration required for local use.

See Governance & Security for configuration and details.
