Roadmap

A sketch of what we're working on, organized by theme. We list themes rather than firm dates — direction matters more than promised quarters.

This page is a snapshot. The honest status is that the project is moving quickly; what's listed here is what we're actively committed to delivering, not an exhaustive list of everything under consideration.

Closing the language coverage gap

The single highest-leverage improvement on the roadmap. Several projects in the benchmark scored low because cix has no parser for their primary language.

In progress:

  • Lua parser support, with adapters for Kong-style projects (Lua-table entity schemas, embedded SQL in migrations, Lapis-style route registration).
  • Improved coverage for embedded SQL in non-standard migration runners.

Likely next:

  • Wider language coverage based on user demand. Erlang, Elixir, and Crystal have surfaced as candidates.

The benchmark data suggests Lua support alone would lift Kong-class projects from 1–4 out of 15 to roughly 11–13 out of 15.

Better dynamic-registration coverage

Projects that register routes, handlers, or tables programmatically are partially visible to the static index today. We are expanding the set of dynamic patterns the system recognizes.
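As an illustration of the general problem (hypothetical Python, not cix code or any specific project's code), programmatic registration looks like this:

```python
# Routes are registered as a side effect at import time rather than
# declared as static literals, so a purely static scan sees them only
# partially unless it models the registration pattern.
ROUTES = {}

def route(path):
    """Decorator that records a handler in the ROUTES table at runtime."""
    def decorator(fn):
        ROUTES[path] = fn  # the registration happens here, dynamically
        return fn
    return decorator

@route("/health")
def health():
    return "ok"
```

A static index sees the decorator call, but it must model the decorator's side effect to know that "/health" maps to health(). Modeling that side effect for a given framework is what an adapter provides.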

In progress:

  • Adapters for common Go-ecosystem dynamic-handler patterns (the kind used by Loki, rclone, and similar projects).
  • Improved detection of plugin-host registration patterns (Kong, similar gateways).
  • Better classification of test-only versus production routes.

The goal: lift the partial-coverage projects from the 9–11/15 score band toward 13/15, where the full-coverage projects sit.

Stronger same-name symbol resolution

Same-name symbols across packages occasionally cause impact analysis to resolve to the wrong target. We are extending the resolver to:

  • Disambiguate by kind (struct vs. constant vs. interface).
  • Default to package-qualified queries when ambiguity is detected.
  • Surface alternatives explicitly when a query matches multiple definitions.

Convention enforcement parity across clients

Convention enforcement is strongest on Claude Code today because it has the most mature write-hook surface. We are tracking similar capabilities in other clients and will lift parity as those surfaces become available.

In the meantime, the validation queries work on every supported client; what varies is whether a violation becomes a warning or a block.

Schema parsing improvements

The schema view covers the dominant migration patterns but has known partial-coverage cases:

  • Migrations that mutate rather than create (column-level metadata sometimes incomplete).
  • Custom migration runners (no coverage).
  • Postgres-specific types and database extensions (best-effort).

We are gradually broadening parser coverage and improving the partial-coverage signals so the user knows exactly where the gaps are.

Team-scale features

For teams running cix at scale across many repos and many developers, we're investing in:

  • Shared-index mode improvements (better caching, faster sync, more granular access controls).
  • Cross-project search for monorepo-of-monorepos environments.
  • Audit history — track convention drift over time, not just current state.
  • Reporting tools for engineering leadership (where is the codebase drifting? where is enforcement working?).

Honest behavior signal expansion

The system already surfaces partial coverage through confidence tiers, parse-error reports, and "no parser available" responses. We're broadening this:

  • Per-tool confidence tiers (some queries are more reliable than others on the same project).
  • More specific gap reasons (instead of "partial schema," explain which tables are partial and why).
  • Better surfacing of the gaps to the user, not just to the AI assistant.
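To make "more specific gap reasons" concrete, a payload along these lines is the direction (illustrative only; the field names and values are assumptions, not the actual cix output format):

```python
# Hypothetical shape of a specific gap report: instead of a bare
# "partial schema" flag, each incomplete table carries its own reason
# and a confidence tier.
gap_report = {
    "view": "schema",
    "coverage": "partial",
    "reasons": [
        {
            "table": "plugins",
            "why": "created by a custom migration runner the parser does not model",
            "confidence": "low",
        },
        {
            "table": "consumers",
            "why": "uses a Postgres extension type parsed best-effort",
            "confidence": "medium",
        },
    ],
}
```

Per-table reasons let both the AI assistant and the human user see exactly which parts of the schema view to distrust, rather than discounting the whole view.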

What we are not promising

  • A specific date for any of the above.
  • That every gap in Limitations will be closed. Some are inherent to static analysis and will remain.
  • That feature parity across AI assistants will arrive simultaneously. We work with what each client supports; some are ahead of others.

How to influence the roadmap

We listen most to:

  1. Real-world failure reports — "I tried cix on this project and it didn't surface X." Specific, reproducible reports get prioritized.
  2. Adoption blockers — "My team would adopt this if Y worked." We weigh these heavily, especially when they recur across teams.
  3. Benchmark performance — projects that score low because of fixable gaps get attention proportional to how widely the gap affects users.

What carries less weight:

  • Speculative feature requests without a concrete project behind them.
  • Requests that conflict with our positioning ("can it generate code for me?").

The roadmap is opinionated. We say no to things that don't fit the product, and we say yes to things that close real gaps surfaced by real users.
