Why cix exists
The honest version of what AI coding assistants do today: they generate plausible code that may or may not fit the project they are working in.
The reason is structural. A coding assistant's training data is the public internet. Your codebase is not on the public internet. The assistant has to either guess or read its way to context one file at a time, and it does both — sometimes well, often not.
This page explains the gap and why a code index is the right way to close it.
The two things an assistant doesn't know
When an assistant starts a session in your project, it knows:
- How code looks in general.
- The framework you're using, broadly.
- Common patterns from training data.
It does not know:
- What already exists. Every helper, component, hook, utility, and shared type in your project. The assistant cannot reuse what it cannot see.
- How things are done here. Your folder structure. Your naming conventions. The language you write UI strings in. Your team's rules for when to extract versus inline. Whether tests live next to source or in a sibling tree.
Both gaps create the same outcome: code that is technically correct but project-inconsistent. And both gaps widen as projects grow.
What goes wrong without a real index
Watch any AI assistant work for a week without project context. The pattern repeats:
Duplication. It writes a formatCurrency helper that already exists three folders away as currencyFormat. Six months later your codebase has both, and a third one will appear next quarter.
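
A sketch of how that plays out on disk; the paths, names, and bodies here are hypothetical:

```ts
// lib/currency/currencyFormat.ts: the helper a teammate wrote last year.
export function currencyFormat(cents: number, locale = "en-US"): string {
  return new Intl.NumberFormat(locale, {
    style: "currency",
    currency: "USD",
  }).format(cents / 100);
}

// features/checkout/formatCurrency.ts: the helper the assistant writes today.
// Same job, new name, new folder; nothing in the project connects the two.
export function formatCurrency(cents: number): string {
  return new Intl.NumberFormat("en-US", {
    style: "currency",
    currency: "USD",
  }).format(cents / 100);
}
```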
Wrong-folder placement. Your team agreed that pure functions live in lib/ and React components live in components/. The assistant doesn't know that. It files a use-prefixed hook under lib/ as if it were a plain utility, and when your import lint rule flags the misplacement, it silences the rule instead of moving the file.
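
The convention itself is a mechanical fact, which is what makes it indexable. A toy sketch of the check, with hypothetical paths; this is not cix's implementation:

```ts
import * as path from "path";

// Toy rule: files named use* are React hooks and must not live under lib/.
// An index can store conventions like this as data and check them on demand.
function violatesHookPlacement(filePath: string): boolean {
  const base = path.basename(filePath, path.extname(filePath));
  const isHook = /^use[A-Z]/.test(base);
  return isHook && filePath.split("/").includes("lib");
}

console.log(violatesHookPlacement("lib/useCart.ts"));        // true: misfiled
console.log(violatesHookPlacement("components/useCart.ts")); // false
```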
Invented schema. It writes User.find_by_email(email) against a model that uses find_by_email_address. The call reads fine in review. The test fails at runtime. You spend twenty minutes debugging a typo the assistant introduced.
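
The same failure rendered in TypeScript, with a hypothetical model standing in for the one above; cast through any (or write plain JavaScript) and nothing complains until the tests run:

```ts
// models/user.ts (hypothetical): the real finder, named after the column.
const User = {
  async findByEmailAddress(email: string): Promise<{ email: string } | null> {
    return { email }; // stand-in for a real database query
  },
};

// What the assistant writes from training-data instinct:
async function lookupUser(email: string) {
  return (User as any).findByEmail(email);
  // TypeError at runtime: User.findByEmail is not a function
}

lookupUser("a@example.com").catch((err) => console.error(err.message));
```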
Read-everything debugging. You ask for a small change and the assistant reads forty files to understand the context. The session burns through tokens and ends with a change you could have described in two sentences if it had asked the right questions.
Convention drift. Each session starts fresh. The assistant has no memory of "we don't do that here." Decisions made last month evaporate. The codebase slowly accumulates parallel patterns, dead helpers, and inconsistent naming.
Why the obvious fixes don't work
Fine-tuning the model on your code. Looks attractive. Doesn't survive contact with reality. Your codebase changes daily. A trained snapshot ages out the moment a refactor lands. And you are now paying to keep a private model up to date.
Adding a giant CLAUDE.md. Helps a little. Has hard limits. A document read once at session start can describe rules but cannot enforce them, cannot answer "does X exist?", and cannot stay accurate as the project evolves. Static docs decay.
Letting the assistant grep. Works for trivial questions. Fails at scale. Grep has no concept of symbols, no idea which match is a definition versus a comment, no notion of which route belongs to which framework. It also burns shell calls — a single "where is this used?" question can cost twenty file reads and still miss matches.
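
A toy file makes the problem concrete; contents hypothetical. A text search for formatCurrency gets three hits in money.ts, only one of which is the definition, and the aliased import in report.ts means every call site escapes the search entirely:

```ts
// money.ts
// TODO: formatCurrency should accept a locale.          <- hit 1: a comment
const ERROR = "formatCurrency received NaN";          // <- hit 2: a string
export function formatCurrency(cents: number): string { // <- hit 3: the definition
  if (Number.isNaN(cents)) throw new Error(ERROR);
  return (cents / 100).toFixed(2);
}

// report.ts: the alias hides every use from a text search.
// import { formatCurrency as fmt } from "./money";
// const total = fmt(123456); // grep for "formatCurrency" never sees this
```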
Hoping the model figures it out. This is the default state. It produces the failures above, every day, on every project, in every session.
The mental model
Treat your project as a structured artifact, not a folder of text. Your code has:
- Symbols — functions, classes, components — with definitions and call sites.
- Routes — HTTP endpoints with methods, paths, and handlers.
- Schema — tables and columns that real queries depend on.
- Imports — explicit edges between files.
- Conventions — rules about where things live and what they're called.
A general-purpose assistant has none of that as structured data. It has text.
cix turns the project back into structured data and gives the assistant a way to query it. The assistant can answer "does this exist?" with a lookup instead of a guess. It can answer "what columns does this table have?" with a fact instead of a hallucination. It can answer "what would break if I change this?" with a real analysis instead of a vague worry.
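
A minimal sketch of that idea; none of the type or field names below are cix's real schema, they just make the mental model concrete:

```ts
// Illustrative index records, not cix's actual API.
interface SymbolEntry {
  name: string;                 // e.g. "formatCurrency"
  kind: "function" | "class" | "component";
  definedIn: string;            // file that holds the definition
  callSites: string[];          // files that reference it
}

interface RouteEntry {
  method: "GET" | "POST" | "PUT" | "DELETE";
  path: string;                 // e.g. "/users/:id"
  handler: string;              // symbol name of the handler
}

interface ColumnEntry {
  table: string;                // e.g. "users"
  column: string;               // e.g. "email_address"
}

// "Does this exist?" becomes a lookup instead of a guess:
function findSymbol(index: SymbolEntry[], name: string): SymbolEntry | undefined {
  return index.find((s) => s.name === name);
}

// "What would break if I change this?" becomes a read of real edges:
function impactOf(index: SymbolEntry[], name: string): string[] {
  return findSymbol(index, name)?.callSites ?? [];
}
```

Whatever the real storage looks like, the shape is the point: definitions, edges, and facts the assistant can query instead of infer.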
The narrow claim
cix does not make AI assistants smart. They are already smart.
cix makes them grounded. The work an assistant does inside a project with cix installed is consistently more aligned with what already exists, more respectful of project structure, and more honest about what it doesn't know.
That alone removes a class of small failures that compound silently over time — duplication, drift, invented details, read-everything debugging — and it does so without changing how the assistant works at the core.
That is the entire pitch. Everything else on this site is a consequence of getting that one thing right.
See it in practice
- How it works — the architecture in plain language.
- Pre-release cleanup case study — same prompt, two passes, side-by-side.
- 25-repo benchmark — measured performance across real projects.