FAQ

Questions we get often, in plain language.

What does cix do, in one sentence?

It builds a queryable index of your codebase and gives your AI coding assistant the tools to use it — so the assistant searches before writing, references real schema instead of inventing it, and respects your project's structural rules.

Is this a coding assistant?

No. cix runs alongside an existing AI coding assistant (Claude Code, Codex, or Gemini CLI). It gives that assistant better context. The assistant still writes the code; cix keeps it grounded in your actual project while it does so.

Does my code leave my machine?

No. cix parses your code locally — your source never crosses the wire. Only structural metadata (symbols, signatures, call graphs) flows to cix-cloud, where your private per-user index lives. Each user gets a separate index, even within Team accounts.

Which AI assistants does it work with?

Claude Code, Codex, and Gemini CLI today. The integration uses the Model Context Protocol (MCP), an open standard several AI tools have adopted, so additional assistants will be straightforward to support as the protocol spreads.
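If you ever need to wire that up by hand in Claude Code, registration is a one-liner via `claude mcp add`. The `cix serve` subcommand name below is an assumption for illustration; the installer normally does this for you:

```
# Register cix as an MCP server for the current project in Claude Code.
# "cix serve" is a hypothetical subcommand name; check the install guide.
claude mcp add cix -- cix serve
```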

Convention enforcement (the part that blocks writes that violate project rules) is strongest on Claude Code today because it has the most mature write-hook surface. The other clients get the index, the conventions, and the queries — with softer enforcement on the write side.

What languages does it support?

Full support: Python, JavaScript, TypeScript, Java, C#, PHP, Ruby, Go, plus migration formats (Laravel, Django, Alembic/SQLAlchemy, Prisma, raw SQL).

Partial coverage: specific patterns within supported languages (dynamic route registration, custom migration runners, same-name symbol disambiguation). The system flags partial coverage explicitly.

Unsupported today: Lua, Erlang, Elixir, Crystal, Zig, Haskell, Scala, Lisp/Scheme dialects. The orientation step says so explicitly rather than producing confident-looking nonsense.

See the full breakdown — including which patterns lose coverage and why — on the Language support page.

How long does setup take?

A few minutes per machine, a few minutes per project. Two commands total for the basic case: one to install on the machine, one to initialize on a project.
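As a sketch of that flow (the command names below are illustrative assumptions; see the install guide for the real ones):

```
# Once per machine: install cix and hook it into your assistant(s).
cix install        # hypothetical command name

# Once per project: build the index and infer a starting set of conventions.
cd your-project
cix init           # hypothetical command name
```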

What's the disk footprint?

Small. The index is a single database file per project, typically a few megabytes. Indexes are kept current incrementally — no full re-index on every change.
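You can check for yourself; the index is an ordinary file on disk. The path below is an assumption, so see the docs for where it actually lives:

```
# Hypothetical path, for illustration. The point: one file, a few megabytes.
du -h ~/.cix/indexes/your-project.db
```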

Do I have to commit anything to the repo?

A small machine-readable conventions file and an instruction file (CLAUDE.md, AGENTS.md, or both) are written to your repo when you initialize. They are meant to be reviewed and committed alongside your code. The index itself lives outside the repo.
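Concretely, a first initialization might leave your working tree looking something like this (the conventions file name and path are hypothetical):

```
$ git status --short
?? .cix/conventions.json    # machine-readable conventions (hypothetical name)
?? CLAUDE.md                # instructions for Claude Code
?? AGENTS.md                # instructions for Codex and other agents

$ git add .cix/conventions.json CLAUDE.md AGENTS.md
$ git commit -m "Add cix conventions and assistant instructions"
```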

What if my project has weird conventions?

The initialization step infers a starting set of conventions from what's actually in your project. You review them before committing. They get tuned over time the way any convention document does. A team with intentionally weird conventions can encode them and have them enforced; the system doesn't impose a particular structure.

Will cix slow down my AI assistant?

No, in our experience. Index queries are fast — single-digit milliseconds for most lookups. The token savings on real tasks (often 50–70% in measured runs) more than offset any per-query overhead.

What happens if cix fails or is offline?

The assistant falls back to its baseline behavior — i.e., what it does today without an index. There is no scenario where cix being unavailable degrades the assistant below its no-index baseline. The worst case is "we lose the gains," not "we lose the floor."

Is there a license or pricing model?

This is a public docs site, so we don't put pricing here. The general framing: the core capability is local, runs on your machine, and is incrementally adoptable (a single developer can use it on a single project). For exact commercial details, see the project README or contact us directly.

How does this compare to grep, an LSP, RAG, or fine-tuning?

Each comparison gets its own section on the Comparison page. The short version:

  • Grep: keep it for human use; cix replaces the AI assistant's use of grep with something better-suited.
  • LSP: keep it for editor use; cix is what your AI assistant needs.
  • RAG: cix likely replaces it for code-shaped questions, with stronger guarantees and less noise.
  • Fine-tuning: cix likely replaces it for most teams; grounding is cheaper and more transparent.

What about my CLAUDE.md / AGENTS.md / GEMINI.md?

You keep it. It describes intent and culture. cix handles the factual layer underneath. The two complement each other. In practice, projects with cix installed often end up with shorter instruction files because cix takes over the "describe the project structure" responsibility.

What if my project is in a language cix doesn't support?

You'll see "language detection: low confidence" or "no parser available" in the orientation. The system is explicit about it. Some features (search by file path, file listing) still work; symbol-level features and the schema view will be empty.
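Roughly what that looks like in practice (the command name and layout here are invented for illustration; the quoted messages above are the real ones):

```
$ cix orient                   # hypothetical command name
language detection: low confidence (no parser available for .ex/.exs)
file search:   available
file listing:  available
symbol index:  empty (no parser for this language)
schema view:   empty (no parser for this language)
```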

This is a real constraint; we do not paper over it. If your codebase is primarily in an unsupported language, defer adoption until parser support lands. See Roadmap for what's coming.

Does it work on monorepos?

Yes, and well — Apache Airflow scored 13/15 in the benchmark. The system identifies workspaces, surfaces per-workspace structure, and handles per-package conventions. Monorepos with conventional structure benefit dramatically because the orientation step compresses what would otherwise be a long discovery process.

Does it work on greenfield projects?

It works, but the value is smaller. The benefits of cix come from the assistant being grounded in what already exists and respecting established conventions. A greenfield project has neither yet. You can use cix to start establishing them, but the immediate payoff is lower than on a mature codebase.

What if my team uses different AI assistants?

That's a design point for cix, not a problem. One install configures Claude Code, Codex, and Gemini CLI to share the same index and conventions. Different team members can use different assistants without forking the team's context.

How do I evaluate it?

Start with one project, one developer, one task. Measure file reads and token usage with and without cix. Read the 25-repo benchmark and pre-release cleanup case study for what other people have measured. The For evaluators page lays out a recommended evaluation timeline.

What do you not do?

A long list, and we are explicit about it on Limitations. Briefly: we are not a coding assistant, not a code review tool, not a test runner, not a formatter, not a bug detector, not a type checker. We do one thing — give your AI assistant a real index of your project — and try to do that thing well.

Where can I find more?