DOCS
Read the product.
What cix is, what it does, where it falls short, and what changed yesterday. Sourced from the same docs the engineering team writes against.
Curated entry points
01 — Features
Find before you build. Schema awareness. Impact analysis. Convention enforcement. Project orientation. Multi-assistant support. Honest behavior.
02 — Workflows
Onboarding to a new codebase. Adding a feature. Refactoring with confidence. Pre-release cleanup.
03 — Case studies
Measured runs on real projects. Cleanup vs. a symbol-only indexer. The 25-repo benchmark. The Apache Airflow deep-dive.
04 — Limitations
Languages we don't fully support. Dynamic registration patterns. Schema parsing gaps. What cix does not even try to do.
05 — Compare
cix vs. grep, LSP, RAG, instruction files, fine-tuning, and doing nothing.
Full source — every doc, in our own words
The same docs the engineering team writes against.
Below is every product document, grouped by category. These are the long-form versions — the curated pages above are short entry points into this material.
Start here
The pitch, the problem, and the architecture in plain language.
Stop letting AI assistants guess your codebase
index.md
AI coding assistants are good at writing code. They are bad at remembering yours.
Overview
overview.md
cix is project-context infrastructure for AI coding assistants.
Why cix exists
why-cix.md
The honest version of what AI coding assistants do today: they generate plausible code that may or may not fit the project they are working in.
How it works
how-it-works.md
cix has three moving parts. None of them are clever in isolation. The combination is what makes the assistant behave better.
Positioning
positioning.md
Where cix sits in the developer tools landscape, in straightforward language.
Features
A narrow surface, on purpose. Each feature exists because the assistant would otherwise have to guess.
Features
features.md
cix's surface is intentionally narrow. Each feature exists because, without it, the AI assistant would have to either guess or read its way to an answer. Together they replace a large chunk of the manual exploration work that happens silently in every coding session today.
Project orientation
feature-orientation.md
The first turn of any AI-assisted coding session sets the tone for everything that follows. If the assistant doesn't understand the basics — what framework you're using, where the code lives, what the major directories are — it spends the rest of the session compensating with file reads, guesses, and clarifying questions.
Find before you build
feature-find-before-build.md
The single most common failure mode of an AI coding assistant is duplicating something that already exists. A helper called formatCurrency gets reborn as formatMoney. A useUserProfile hook gets reinvented as useCurrentUser. A MemberCard component gets re-implemented as UserCard two folders away.
Schema awareness
feature-schema-awareness.md
When an AI assistant writes a database query, a model, or a migration without seeing the actual schema, three things tend to happen:
Impact analysis
feature-impact-analysis.md
The most expensive class of AI-assisted code change is the one where the assistant looks confident, makes a small edit, and quietly breaks something three modules away. The change compiles. The local tests pass. Production breaks two hours later because some downstream caller wasn't covered.
Convention enforcement
feature-convention-enforcement.md
Most teams have an opinion about where files belong and what they should be called. Few teams have a way to enforce those opinions consistently — especially against an AI assistant that has no memory of last week's PR review.
Multi-assistant support
feature-multi-agent.md
Most teams don't use one AI coding assistant. They use whichever one fits the moment — Claude Code for deep refactors, Codex for quick edits, Gemini for tasks where its long context helps. Different team members have different preferences. The same person might switch tools mid-week.
Workflows
How the features fit together when you actually work on something.
Workflows
workflows.md
Features describe what the system can do. Workflows describe how those features fit together when you actually work on something.
Workflow: onboarding to a new codebase
workflow-onboarding.md
You just joined a project. Or you're returning to one after months away. Or you inherited it from a contractor whose CONTRIBUTING.md is a single sentence and a winking emoji.
Workflow: adding a feature
workflow-feature-add.md
Most AI-assisted coding is feature work. Add an endpoint. Add a model field. Add a UI component that lists the new thing. Add the test that covers it. This is bread-and-butter work, and it's where the difference between a context-aware assistant and a context-free one shows up most clearly.
Workflow: refactoring with confidence
workflow-refactoring.md
Renames, signature changes, module moves — refactoring is the work AI assistants are most prone to doing badly without context. The change itself is small. The blast radius is what matters. And without an index, the assistant has no efficient way to see the blast radius.
Workflow: pre-release cleanup
workflow-cleanup.md
Every team does this work eventually. A release is approaching. Someone says "let's clean up the obvious junk before we ship" — and now the next two days are a fuzzy mix of dead-code hunts, naming-overlap reviews, structural drift checks, and the slow realization that the codebase has accumulated more cruft than anyone wanted to admit.
Case studies
Measured behavior on real codebases. Methodology, raw findings, honest discussion.
Case studies
case-studies.md
Marketing copy is easy to produce. Measured behavior on real codebases is harder. This section is the harder kind.
Case study: pre-release cleanup, cix vs. a less-capable indexer
case-study-pre-release-cleanup.md
Codebase: Laravel 12 API + Vue 3 SPA, ~300 indexed files
Case study: the 25-repo benchmark
case-study-25-repo-benchmark.md
This is the most important page on this site. If cix worked beautifully on a couple of demo projects but fell apart on real codebases, the demo projects wouldn't matter. So we ran it on twenty-five real codebases and scored every run on a fixed rubric.
Case study: Apache Airflow
case-study-airflow.md
Airflow is one of the most demanding projects in the 25-repo benchmark. It is a Python monorepo with:
Benchmarks
benchmarks.md
A condensed view of cix's measured performance across real-world projects. The full per-project breakdown is in the 25-repo benchmark case study; this page is the at-a-glance summary.
Honest limits
Where cix falls short. The most important pages on this site.
Limitations
limitations.md
Every product page on this site tries to be useful. This one tries to be the most useful — because honest documentation of what doesn't work is what makes the rest of the documentation trustworthy.
Language support
language-support.md
A dedicated, current view of which languages and project shapes cix handles, broken into three honest tiers. If your project's primary language isn't on the Full list, read the rest of the page before deciding whether cix is worth your time.
Roadmap
roadmap.md
A sketch of what we're working on, organized by theme. We list themes rather than firm dates — direction matters more than promised quarters.
Compare
cix vs. the alternatives — grep, LSP, RAG, instruction files, fine-tuning, doing nothing.
By role
The same product, framed for different buyers.
For engineering teams
for-engineering-teams.md
You manage or lead an engineering team that uses AI coding assistants. You also approve the spend, or report on the budget that covers it. You like that productivity is up. You are uncomfortable with how inconsistent the output is — across people, across sessions, occasionally on the same task on consecutive days.
For evaluators
for-evaluators.md
You are a technical decision-maker — staff engineer, principal, eng lead, or CTO at a smaller org — and you've been asked to assess whether cix is worth adopting on a real codebase. You want concrete evidence, a defensible evaluation method, and an honest read of the limits before committing budget or attention.
For solo developers
for-solo-developers.md
You work alone on a handful of projects — a side project or two, a freelance client codebase, an indie product, a research repo. You use an AI coding assistant heavily. You've noticed it works great on small projects and badly on the bigger ones, and you suspect it's not the assistant's fault.
More
FAQ and other reference material.