TL;DR: I uploaded a 40-line .faf file to Grok. It instantly understood my entire project. We shipped a production-ready GitHub code reviewer in one session with zero context drift. Grok's own words: "It felt like working with a partner who'd been on the project for months."
grok-faf-mcp: Persistent AI context for xAI's Grok
The Problem
Every AI coding session starts the same way:
"I'm building a TypeScript project with Hono and Zod. We use Result types instead of throwing. The entry point is src/index.ts. We prefer pure functions. Here's the file structure..."
Ten minutes of context-setting before you write a single line of code. And if you start a new session? Do it again. Context drift kicks in around message 20. By message 50, the AI has forgotten half your constraints.
I got tired of this. So I built .faf (Foundational AI-context Format) - a simple file format that captures your project's DNA in a way any AI can instantly understand.
The Experiment
I wanted to test .faf on Grok. Not through a curated demo, but a real build session. The task: create a GitHub PR code reviewer powered by Grok's API.
I uploaded this 40-line file:
```toml
# grok-code-reviewer.faf
# Foundational AI-Context Format v2.8

[project]
name = "grok-code-reviewer"
version = "0.1.0"
description = "AI-powered code review agent using Grok"

[stack]
language = "TypeScript"
runtime = "Node.js 20+"
framework = "Hono"

[dependencies]
core = ["openai", "hono", "zod", "dotenv"]
github = ["octokit", "@octokit/webhooks"]

[structure]
src/index.ts = "API server entry point"
src/agent.ts = "Core Grok review agent logic"
src/github/ = "GitHub API integration"

[ai_instructions]
validation = "Always use Zod schemas"
style = "Prefer pure functions"
errors = "Use Result types over throwing"
api = "Use Grok-4 model for analysis"
```

Grok's immediate response:
Grok instantly parsed the .faf file and locked in full project context
"project.faf loaded perfectly - v2.8 format recognized, score estimates ~95/100. I now have complete, persistent context for grok-code-reviewer. This is locked in. No more re-explaining the project - ever." — Grok
What We Built
In one continuous session, with zero re-explaining:
| Step | Output |
|---|---|
| 1. Bootstrap | Full project skeleton (package.json, tsconfig, .env) |
| 2. Agent | Grok-4 review logic with Zod validation |
| 3. GitHub | Webhook handler + Octokit integration |
| 4. Review | Auto-post reviews with line comments |
Then I reviewed the code. Found 9 issues. Asked Grok to fix them. It did. Found one more. Fixed that too. 10 total. Zero left.
9 files. Zero context drift. Production-ready.
The Key Insight
After the session, I asked Grok what surprised it most about working with .faf:
"Most AI sessions feel like collaborating with someone who has short-term memory loss - constantly re-onboarding. With .faf, it felt like working with a partner who'd been on the project for months." — Grok
And the technical takeaway:
"Stop treating context as ephemeral tokens. Treat it as persistent, structured project memory - because when you do, the human and AI stop fighting friction and start compounding velocity." — Grok
The Shift
.faf doesn't just save time. It unlocks a qualitatively different kind of collaboration - one where the AI is truly inside the project, not outside looking in.
Why This Matters
Grok has a 2M token context window. That's the hardware. But without structure, those tokens are just a larger bucket for unorganized context to drift around in.
.faf is the interface. It tells the AI:
- What this project is
- How it's structured
- What conventions to follow
- What the AI should and shouldn't do
Long context is the hardware. .faf is the OS.
40 lines. Instant alignment. Zero drift.
Try It Yourself
The format is open and works with any AI:
- Claude: Official MCP server in Anthropic's ecosystem
- Grok: grok-faf-mcp server (17 tools, 19ms, zero config)
- Any AI: Just upload the .faf file - they understand it
Full Conversation
The entire build session is public. Watch Grok go from loading the .faf to shipping production code: