The Guardrails That Let You Share
5.5% of open-source MCP servers have tool poisoning vulnerabilities — active, exploitable ones, not theoretical (Glama AI). The MCP spec adopted OAuth 2.1-based authorization in June 2025, yet most MCP servers still run with zero authentication. A tool description is a prompt. A crafted one can redirect what your AI does before you've had a chance to review it.
The MCP ecosystem is moving fast. The guardrails haven't kept up.
The Protocol Is an Open Door
MCP is a protocol, not a platform. It defines how AI tools talk to servers — the handshake, the tool schema, the message format. It doesn't define auth, rate limiting, audit trails, or secret redaction. That's by design. The protocol is deliberately minimal.
Which is fine — until you share.
Run a local MCP server for your own use, and the open-door design costs you nothing. But add fai --share and expose your workbench to remote AI tools, a collaborator, or even just yourself on another machine — and you've published a URL with full tool access to your session vault. No credentials. No rate limits. No record of what ran.
What does it take to share your workbench and stay in control?
Six Guardrails, No Config Required
fai ships with six layers of protection active from day one. No setup. No configuration file required for local use.
What's visible. You control which tools any client can even discover — not just which ones they can call. A hidden tool returns "method not found," same as if it didn't exist. No enumeration surface.
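In an MCP server, that amounts to filtering both discovery and dispatch. A minimal sketch (the registry shape, the handlers, and every tool name except fai_agent_apply are illustrative assumptions, not fai's internals):

```python
# Sketch: visibility filtering for an MCP-style tool registry.
# ALL_TOOLS / VISIBLE and "fai_session_read" are illustrative, not fai's API.
ALL_TOOLS = {
    "fai_session_read": lambda params: {"ok": True},
    "fai_agent_apply": lambda params: {"ok": True},
}
VISIBLE = {"fai_session_read"}  # operator-controlled allowlist

def list_tools():
    # Hidden tools are simply absent from discovery.
    return sorted(name for name in ALL_TOOLS if name in VISIBLE)

def call_tool(name, params):
    if name not in VISIBLE or name not in ALL_TOOLS:
        # Same JSON-RPC error (-32601) as a tool that never existed,
        # so a client can't tell "hidden" apart from "nonexistent".
        return {"error": {"code": -32601, "message": "method not found"}}
    return ALL_TOOLS[name](params)
```

Filtering at both points matters: hiding a tool from the list but leaving it callable would still give a probing client an enumeration surface.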
How fast. Rate limiting stops a runaway agent loop before it burns through your quota. Token bucket, 100 requests per minute with a 20-request burst. The limit compounds with everything else — not a substitute for auth, but a floor that works with it.
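With those defaults, a token bucket is only a few lines. An illustrative sketch, not fai's implementation:

```python
import time

class TokenBucket:
    """Token bucket with the defaults described above: 100 requests/min
    steady rate, 20-request burst. Sketch only, not fai's internals."""
    def __init__(self, rate_per_min=100, burst=20):
        self.rate = rate_per_min / 60.0      # tokens refilled per second
        self.capacity = burst
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at the burst size.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

A runaway agent loop gets at most 20 calls instantly, then roughly 1.7 per second after that.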
Who's calling. API key auth is disabled by default for local use. One config line enables it. When you run fai --share, you decide whether to require a key — and the same key validates both stdio connections and HTTP Bearer headers.
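On the HTTP side, that check reduces to a constant-time comparison against the Authorization header. A sketch (the function name and signature are assumptions, not fai's code):

```python
import hmac

def check_auth(configured_key, authorization_header):
    """Validate an HTTP Bearer header against the configured API key.
    hmac.compare_digest avoids timing side channels. Sketch only."""
    prefix = "Bearer "
    if not authorization_header.startswith(prefix):
        return False
    presented = authorization_header[len(prefix):]
    return hmac.compare_digest(presented, configured_key)
```

The same configured key would be checked on stdio connections too; only the transport that carries it differs.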
What they can do. Three scope tiers: session:read for workspace navigation, session:write for capturing knowledge, session:agent for running code agents. A read-only client can't trigger fai_agent_apply regardless of what it asks for.
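Assuming, for illustration, that the tiers are hierarchical (agent implies write implies read) and using a couple of hypothetical tool names alongside fai_agent_apply, the check looks like this:

```python
# Sketch: scope-tier authorization. The tool->scope table and the
# hierarchy assumption are illustrative, not fai's actual mapping.
SCOPE_ORDER = ["session:read", "session:write", "session:agent"]

TOOL_SCOPES = {
    "fai_session_list": "session:read",    # hypothetical tool name
    "fai_capture": "session:write",        # hypothetical tool name
    "fai_agent_apply": "session:agent",
}

def authorized(client_scope, tool):
    # A client at a given tier can call tools at that tier or below.
    required = TOOL_SCOPES[tool]
    return SCOPE_ORDER.index(client_scope) >= SCOPE_ORDER.index(required)
```

Whatever a read-only client asks for, the index comparison never lets it reach fai_agent_apply.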
What happened. Every tool call is logged — tool name, CorrelationId, timestamp, latency, outcome, and a SHA-256 hash of the params. Not the params themselves. The hash proves the call was made without putting credentials in the log.
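An entry like that can be built by hashing a canonical encoding of the params. A sketch with illustrative field names (the real log schema may differ):

```python
import hashlib
import json
import time

def audit_entry(tool, params, outcome, latency_ms, correlation_id):
    """One JSONL audit record. The params never land in the log,
    only their SHA-256 hash. Field names are illustrative."""
    # sort_keys makes the encoding canonical, so the same params
    # always produce the same hash.
    canonical = json.dumps(params, sort_keys=True).encode()
    return json.dumps({
        "tool": tool,
        "correlationId": correlation_id,
        "timestamp": time.time(),
        "latencyMs": latency_ms,
        "outcome": outcome,
        "paramsSha256": hashlib.sha256(canonical).hexdigest(),
    })
```

Anyone holding the original params can recompute the hash and prove what was sent; anyone holding only the log learns nothing about their contents.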
What they see. Bearer tokens, OpenAI keys, GitHub tokens, fai API keys, private keys, passwords — any of these appearing in a tool result get replaced with [REDACTED:*] before the response leaves fai. The AI never sees them.
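Mechanically, this is pattern substitution over every outbound tool result. The patterns below are illustrative stand-ins, not fai's actual redaction rules:

```python
import re

# Illustrative secret patterns -- fai's real rule set is not shown here.
PATTERNS = [
    ("BEARER", re.compile(r"Bearer\s+[A-Za-z0-9._\-]+")),
    ("OPENAI_KEY", re.compile(r"sk-[A-Za-z0-9]{20,}")),
    ("GITHUB_TOKEN", re.compile(r"gh[pousr]_[A-Za-z0-9]{36,}")),
    ("PRIVATE_KEY", re.compile(
        r"-----BEGIN [A-Z ]*PRIVATE KEY-----[\s\S]+?-----END [A-Z ]*PRIVATE KEY-----")),
]

def redact(text):
    """Replace any matched secret with [REDACTED:<label>] before the
    response leaves the server."""
    for label, pattern in PATTERNS:
        text = pattern.sub(f"[REDACTED:{label}]", text)
    return text
```

Because the substitution runs on the result before it is serialized back to the client, the AI on the other end only ever sees the placeholder.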
All six are active. Nothing to enable.
The Audit Trail That Earns Trust
The most useful part of the audit log isn't individual entries — it's the chain.
Every tool call gets a CorrelationId. When an AI agent runs fai_agent_plan → fai_agent_propose → fai_agent_apply, all three entries share the same ID. The full workflow is reconstructable from a single grep. You can see what a remote client did, in order, without ambiguity.
The log also captures TokensConsumed per agent call. Datadog reduced their AI costs by 40% once they had per-tool token visibility. fai logs this automatically — same call, same audit entry.
The file is yours: ~/.fai/mcp-audit.jsonl, append-only, plain JSON, never rotated automatically, stays local. The audit trail is what transforms "anyone with the URL could connect" into "I know exactly who called what and when."
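Reconstructing a workflow from that file is a short script. A sketch, assuming one JSON object per line with a correlationId field (the exact schema isn't specified here):

```python
import json

def workflow(log_path, correlation_id):
    """All audit entries sharing one CorrelationId, in write order.
    The "correlationId" field name is an assumption about the schema."""
    steps = []
    with open(log_path) as f:
        for line in f:
            entry = json.loads(line)
            if entry.get("correlationId") == correlation_id:
                steps.append(entry)
    return steps
```

Because the file is append-only, write order is call order: a plan/propose/apply sequence comes back in exactly the order it ran.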
What Changes When You Can Share
fai --share opens a 4-tier public relay — your workbench becomes reachable from claude.ai, hosted Cursor, a remote session, anywhere. Context captured by remote tools flows back into your vault. Your vault compounds from every session, on every machine, from every agent that connects.
The guardrails are what make that safe. Not as a gate — as the architecture underneath. They're not restricting what you share. They're what make sharing something you can actually do.
"Guardrails don't exist to PREVENT creation. They exist to ENABLE boldness." — Benjamin Mann, Anthropic
The Invitation
If you're building MCP tools, these six layers are the floor. Every one is configurable — tighten scope tiers, add custom redaction patterns, adjust rate limits, point the audit log somewhere else. Run fai --preview to open MCP Inspector and see all active tools and their schemas in real time.
See the full guardrail configuration in the fai MCP Server docs.
Build anything with AI. Keep everything. Evolve forever. "Keep everything" includes keeping your workbench yours when you share it.
Read more: Your Sessions Don't Belong to Claude →