Limitations

Every product page on this site tries to be useful. This one tries to be the most useful — because honest documentation of what doesn't work is what makes the rest of the documentation trustworthy.

If we are not honest here, you should not trust us anywhere else.

Languages we don't fully support today

cix's static index relies on a parser per supported language. The current set is broad but not universal. We maintain a dedicated Language support page with full / partial / unsupported tiers — that page is the source of truth.

Lua is the most-discussed gap. Kong, the API gateway used as a benchmark project, is 97% Lua. Without a Lua parser, cix can list files and return path-shaped search results, but cannot return symbol-level data, schema views, or impact analysis. Benchmark scores on Lua-heavy projects sit in the 1–4/15 range as a result.

The system is honest about this. On a Lua project, the orientation step explicitly says "this codebase is mostly Lua, parsing is unsupported, expect to fall back." There is no scenario where cix returns a confident-looking but wrong answer on a Lua project — it tells you the limit.

Other unsupported languages today: Erlang, Elixir, Crystal, Zig, Lisp/Scheme dialects, Haskell, Scala. See Language support for the complete picture.

Dynamic registration patterns

Many real-world projects register routes, handlers, plugins, or even database tables programmatically at runtime — by iterating over a directory, reading a config, or executing initialization code that adds entries to a global registry.

Static analysis cannot fully see dynamic registration. cix captures the parts that are declared statically and explicitly flags partial coverage on the rest.
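The shape of the problem can be shown in a minimal, hypothetical Python sketch (all route names invented): a literal registration is statically visible, but routes produced by a loop only exist once the code runs.

```python
# Hypothetical example of dynamic registration. A static parser can see
# the registry and the literal call, but not the paths a loop produces.
ROUTES = {}

def register(path, handler):
    ROUTES[path] = handler

# Statically visible: the path is a literal in source.
register("/health", lambda req: "ok")

# Dynamically generated: these paths exist only after this loop runs,
# so a static index cannot enumerate them.
for name in ("users", "orders", "invoices"):
    register(f"/api/{name}", lambda req, n=name: f"list {n}")

# Four routes at runtime; only one is visible to static analysis.
```

Real projects push this further, loading the loop's inputs from config files or directory scans, which is why the gap cannot be closed by smarter parsing alone.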

Concrete cases the benchmark surfaced:

  • Loki registers most of its primary HTTP API through a runtime module manager. The static portion of the route table comes through; the dynamic portion does not. The system surfaces what it can and notes the gap.
  • rclone registers its remote-control surface through a runtime registry function. Same pattern. Same partial coverage.
  • Kong registers Admin API endpoints through Lua tables that compile at startup. Compounded with the Lua parser gap, this is largely invisible.

What we do about it. The current strategy is honest reporting. The route view tells you what was found and notes what kind of registration patterns it doesn't see. The roadmap includes adapters for several common dynamic-registration patterns; expanding this is ongoing work.

What you should do about it. On projects with heavy dynamic registration, treat the route view as a partial answer rather than an exhaustive one. The schema, search, and convention-enforcement features are largely unaffected.

Same-name symbol disambiguation

When two unrelated types share a name across packages — for example, Object defined separately in two different libraries inside the same project — impact analysis sometimes resolves the query to one when the user meant the other. The output flags this as a confidence issue, but in time-pressed workflows it can be a real friction point.

The system surfaces this honestly: when a query for Object matches multiple definitions, you see the resolved target and the alternatives. The user has to confirm or pick. Tooling on top of this — automatic disambiguation prompts, package-qualified lookups by default — is on the roadmap.
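The package-qualified lookup idea can be sketched in a few lines of Python. This is an illustration of the approach, not cix's actual resolver; all package and file names are invented.

```python
# Hypothetical symbol resolver. The index maps a bare name to every
# definition; a bare query is ambiguous, a "pkg.Name" query is not.
INDEX = {
    "Object": [
        {"package": "libfoo", "file": "libfoo/object.go"},
        {"package": "libbar", "file": "libbar/object.go"},
    ],
    "Server": [{"package": "api", "file": "api/server.go"}],
}

def resolve(query):
    """Return (resolved_target, alternatives); alternatives are surfaced,
    never silently dropped."""
    if "." in query:
        pkg, name = query.split(".", 1)
        hits = [d for d in INDEX.get(name, []) if d["package"] == pkg]
    else:
        hits = INDEX.get(query, [])
    if not hits:
        return None, []
    return hits[0], hits[1:]

target, alts = resolve("Object")        # resolved target plus 1 alternative
target, alts = resolve("libbar.Object") # qualified query: unambiguous
```

Making the qualified form the default lookup is the roadmap item mentioned above: it trades a slightly longer query for zero ambiguity.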

Schema parsing has gaps

cix parses migrations from Laravel, Django, Alembic/SQLAlchemy, Prisma, and raw SQL. This covers the dominant patterns for web app development, but it is not exhaustive.

Where schema parsing falls short:

  • Custom migration runners. A team that has built its own migration framework will see partial or empty schema coverage. The system tells you when this happens.
  • Migrations that alter rather than create. When a migration alters a column instead of creating it, the parser sometimes captures the alteration but not the original creation. The result is correct for the table's current shape but lacks creation metadata.
  • Postgres-specific or database-specific types. tsvector, JSONB schemas embedded in text columns, custom domains — surfaced as best-effort rather than fully parsed.
  • ORM-only fields. Fields declared on the model but not in any migration may not be picked up. Where ORM models exist, the system tries to fall back to model declarations.
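The ORM fallback in the last bullet amounts to a merge with provenance tracking. A minimal sketch, with invented table and field names, assuming the fallback works roughly this way:

```python
# Columns recovered from parsed migrations, merged with fields declared
# on the ORM model. A field that never appears in any migration still
# shows up in the schema view, but tagged by its source.
migration_columns = {"id": "integer", "email": "text"}
model_fields = {"id": "integer", "email": "text", "display_name": "text"}

schema = {name: {"type": t, "source": "migration"}
          for name, t in migration_columns.items()}
for name, t in model_fields.items():
    # Only add fields the migrations did not already account for.
    schema.setdefault(name, {"type": t, "source": "model-only"})

model_only = [n for n, col in schema.items() if col["source"] == "model-only"]
# display_name is surfaced, but flagged as model-only coverage
```

Tracking the source per column is what lets the schema view say "partial" and explain exactly which fields it is less sure about.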

The schema confidence tier surfaces all of this. When the schema view says "partial," it explains what's missing.

Test fixtures bleed into route lists

Some projects have extensive test infrastructure — mocks, fixtures, helper apps — that register routes for testing purposes. cix has no native concept of "this is test-only" beyond directory heuristics.

The visible effect. On large projects, the route list can include test-only routes alongside production routes. The user has to filter.

What we do about it. Directory-based heuristics handle the common cases (anything under test/, spec/, __tests__/, etc.). Cases that don't match these patterns leak through. We're working on better classification.
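The heuristic is easy to picture, and so is the leak. A minimal sketch, where the pattern list and helper name are illustrative rather than cix's actual implementation:

```python
# Classify a path as test-only when any path segment matches a known
# test-directory name. Anything that keeps test code outside these
# directories slips past the check.
TEST_DIRS = {"test", "tests", "spec", "specs", "__tests__", "testdata"}

def looks_test_only(path):
    return any(part in TEST_DIRS for part in path.split("/"))

looks_test_only("internal/test/helpers/routes.go")  # True: caught
looks_test_only("pkg/api/routes.go")                # False: production
looks_test_only("pkg/mockserver/routes.go")         # False: leaks through
```

The third case is the one that shows up on large projects: a mock server living under an ordinary package name registers routes, and no directory heuristic can tell.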

Convention enforcement varies by client

Convention enforcement — the part of cix that blocks a write that violates project rules — depends on the AI assistant exposing a write hook. This is well-supported in Claude Code; partial in others.

On clients with strong write hooks: rule violations get blocked at the write. The file is never created. The assistant sees the failure and corrects course.

On clients without: the same rules still exist as a machine-readable file. The validation queries still work — the assistant can ask "is this allowed?" before writing. But the assistant has to choose to ask, and a violation that slips through becomes a warning rather than a block.
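What "ask before writing" means in practice can be sketched as follows. The rule format and function names here are invented for illustration; the point is only that rules in a machine-readable file can be checked against a planned write before it happens.

```python
import re

# Hypothetical rule: files matching `applies_to` must live at a path
# matching `must_match`. In a real setup these would be loaded from the
# project's machine-readable rules file.
RULES = [
    {"applies_to": r".*_handler\.py$", "must_match": r"^src/handlers/"},
]

def validate_write(path):
    """Return the list of violated rules; an empty list means allowed."""
    violations = []
    for rule in RULES:
        if re.search(rule["applies_to"], path) and \
                not re.match(rule["must_match"], path):
            violations.append(rule)
    return violations

validate_write("src/handlers/login_handler.py")  # no violations
validate_write("src/util/login_handler.py")      # one violation
```

On a client with write hooks, a non-empty result blocks the write; without hooks, the same result is only as good as the assistant's habit of calling the check first.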

In practice, the difference shows up over time. On Claude Code, conventions hold reliably. On clients with softer hooks, they hold most of the time, with occasional drift that PR review catches.

What cix does not even try to do

Some things are explicitly not in scope, and we want to be clear about them:

  • Code style. Use Prettier, Black, gofmt, or whatever your team uses. cix does not format code.
  • Test coverage. A different category of tool. cix doesn't run tests or measure coverage.
  • Bug detection. Static analysis tools (SonarQube, ESLint, Bandit) handle this. cix does not flag general bugs.
  • Type checking. Your language's type checker does this better than cix could.
  • Architectural judgment. Whether an abstraction should exist, whether two helpers should be merged, whether a particular pattern is the right one — these remain human calls.
  • Greenfield project structure. If you have no project yet, cix has nothing to index. Start the project, then add cix.

These are not bugs. They are deliberate boundaries. cix does one thing — give your AI assistant a real index of your project — and tries to do that thing well. Everything else is somebody else's job.

What is not a limitation, despite concerns you might have

We are sometimes asked whether cix:

  • Sends your code to a third party. No. The index is local. The assistant queries it locally. Optional shared-server mode runs on your infrastructure.
  • Locks you into a specific AI assistant. No. The integration standard is open; the index is portable.
  • Makes your assistant slower. No. Index queries are fast (single-digit milliseconds for most lookups). The token savings on real tasks more than offset any per-query overhead.
  • Requires specific repository structure. No. cix adapts to your project's structure. The convention rules are inferred and tunable, not imposed.

We mention these because they are common concerns and the answers are concrete.

How to read these limitations

A tool that hides its limitations is not trustworthy. A tool that documents them is making a bet that you'd rather know.

The bet: most teams adopting cix are working in supported stacks where the system performs well, and the limitations above don't bind. For those teams, the gains are real and immediate. For teams in unsupported stacks or with unusual project shapes, the value depends on which limitations apply — and the documentation here exists to let you decide before you invest.

Related

  • Roadmap — what we are working on, including the items above.
  • 25-repo benchmark — the data behind several of these limitations.
  • Positioning — what cix is and is not, in product terms.