
Workflow: refactoring with confidence

Renames, signature changes, module moves — refactoring is the work AI assistants are most prone to doing badly without context. The change itself is small. The blast radius is what matters. And without an index, the assistant has no efficient way to see the blast radius.

cix changes that. Refactoring with the index in the loop is a structured exercise: see what depends on the target, plan the change, execute it, verify nothing slipped.

The trap

A typical refactor request: "Rename getUserById to findUserById for consistency with the rest of the codebase."

Without an index, the assistant does one of three things:

  1. Renames the definition only. Half the call sites still reference the old name. With strict mode off, TypeScript compiles anyway — and things break at runtime.
  2. Greps for the name and renames every match. Catches the call sites it found. Misses the dynamic ones, the ones in tests, the ones in a comment that someone built tooling against.
  3. Reads files until it gets nervous and asks you what to do. Burns tokens, makes you think, doesn't finish the work.

None of these are good. All of them are common.

The cix shape

With the index, the same refactor follows a structured sequence:

Step 1: See the blast radius

A single impact analysis query returns:

  • Every direct caller of getUserById, with file and line.
  • Transitive callers, up to a depth you choose.
  • Any HTTP route or GraphQL field whose handler reaches this function.
  • Tests that exercise it.
  • A risk tier — "medium, twelve direct callers, three of which are exported from a public package."

You see the surface area before changing anything.
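Conceptually, the query above is a bounded traversal of a caller graph. Here is a minimal sketch of what that computation looks like, assuming a prebuilt map from each symbol to its direct callers; the types, names, and risk thresholds are illustrative, not cix's actual API.

```typescript
// A caller record as an index might report it. Illustrative shape only.
type Caller = { symbol: string; file: string; line: number; exported: boolean };

// Walk the caller graph outward from `target`, up to `maxDepth` hops,
// collecting direct and transitive callers plus a crude risk tier.
function blastRadius(
  callersOf: Map<string, Caller[]>, // symbol -> its direct callers
  target: string,
  maxDepth: number
): { direct: Caller[]; transitive: Caller[]; riskTier: string } {
  const direct = callersOf.get(target) ?? [];
  const seen = new Set<string>(direct.map((c) => c.symbol));
  const transitive: Caller[] = [];
  let frontier = direct.map((c) => c.symbol);
  for (let depth = 1; depth < maxDepth; depth++) {
    const next: string[] = [];
    for (const sym of frontier) {
      for (const caller of callersOf.get(sym) ?? []) {
        if (!seen.has(caller.symbol)) {
          seen.add(caller.symbol);
          transitive.push(caller);
          next.push(caller.symbol);
        }
      }
    }
    frontier = next;
  }
  // Hypothetical tiering: public exports dominate, then raw caller count.
  const publicCallers = direct.filter((c) => c.exported).length;
  const riskTier =
    publicCallers > 0 ? "high" : direct.length > 10 ? "medium" : "low";
  return { direct, transitive, riskTier };
}
```

The depth bound matters: without it, a hot utility function's transitive closure is most of the codebase, which is noise rather than a plan.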

Step 2: Plan deliberately

Now you can decide:

  • Is this a safe rename, or does it cross a public boundary? (The risk tier tells you.)
  • Should the rename happen in one PR or several? (The caller count tells you.)
  • Are there compatibility concerns — re-exports, dynamic dispatch, public APIs? (The impact report flags lower-confidence callers separately.)
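Those three decisions can be made mechanically from the impact report. A sketch of one such planning heuristic — the field names and thresholds are assumptions for illustration, not cix behavior:

```typescript
// Summary numbers an impact report might expose. Illustrative shape only.
type Impact = { directCallers: number; publicExports: number; lowConfidence: number };

function planRefactor(impact: Impact): {
  prs: number;
  needsCompatShim: boolean;
  manualReview: boolean;
} {
  return {
    // Split large renames so each PR stays reviewable (15 callers per PR
    // is an arbitrary illustrative threshold).
    prs: Math.max(1, Math.ceil(impact.directCallers / 15)),
    // Anything crossing a public boundary keeps the old name alive
    // temporarily via a deprecated re-export.
    needsCompatShim: impact.publicExports > 0,
    // Dynamic or low-confidence callers get a human pass.
    manualReview: impact.lowConfidence > 0,
  };
}
```

The point is not the specific thresholds — it is that the plan is computed from measured numbers, so you can argue with the thresholds instead of with a vibe.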

Step 3: Execute

The assistant performs the rename across every identified caller, with file/line precision. No grep guesswork. No "I think I got them all."
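"File/line precision" means the edit pass touches only the positions the index reported, never a global search-and-replace. A minimal sketch of that discipline, operating on in-memory file contents (the `Edit` shape is hypothetical):

```typescript
// An edit site as reported by the index: file plus 1-based line number.
type Edit = { file: string; line: number };

// Apply a rename only at reported sites, matching whole identifiers so
// that e.g. `getUserByIdOrFail` is left alone. Returns the edit count.
// (Assumes `oldName` is a plain identifier with no regex metacharacters.)
function applyRename(
  files: Map<string, string[]>, // file -> source lines
  edits: Edit[],
  oldName: string,
  newName: string
): number {
  const word = new RegExp(`\\b${oldName}\\b`, "g");
  let applied = 0;
  for (const { file, line } of edits) {
    const lines = files.get(file);
    if (!lines || line < 1 || line > lines.length) continue;
    const before = lines[line - 1];
    const after = before.replace(word, newName);
    if (after !== before) {
      lines[line - 1] = after;
      applied++;
    }
  }
  return applied;
}
```

Real tooling would edit the syntax tree rather than text lines, but the constraint is the same: every change is anchored to a location the analysis produced.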

Step 4: Verify

A second impact analysis query confirms zero remaining references to the old name. If there are stragglers — say, in a string literal that the rename pass deliberately skipped — they show up here, and you decide whether to handle them.
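The verification step reduces to "scan for the old name and report every hit." A sketch of that check, approximated with a whole-word text scan — a real index query would additionally distinguish identifiers from string literals and comments:

```typescript
// Report every line that still mentions `name` as a whole word,
// so a human can triage intentional stragglers.
function remainingReferences(
  files: Map<string, string[]>, // file -> source lines
  name: string
): { file: string; line: number }[] {
  const hits: { file: string; line: number }[] = [];
  const word = new RegExp(`\\b${name}\\b`);
  for (const [file, lines] of files) {
    lines.forEach((text, i) => {
      if (word.test(text)) hits.push({ file, line: i + 1 });
    });
  }
  return hits;
}
```

An empty result is the "done" signal; a non-empty one is a short, reviewable list rather than a reason to re-read the codebase.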

Why this is different

The plan is data, not intuition. You are not relying on the assistant's confidence. You are relying on a structured analysis of the actual code. The plan is auditable. You can read it before you accept it.

The risk is visible. A rename with three callers and no exports is different from a rename with twenty callers and a public API. Both look the same in the diff. Only the impact analysis tells you which one you're actually doing.

Verification is fast. "Did we miss anything?" goes from a sweep through the codebase to a single query. If the system says zero remaining references and you trust the index is current, you're done.

What about scarier refactors?

The same pattern scales up. A signature change has a slightly larger plan: every caller has to be updated, not just renamed. A module move has a different plan: imports need to update, but the function bodies don't. A column drop is bigger still: every model, query, serializer, and migration touch point shows up in the impact report.

The shape stays the same. The scale changes. And — importantly — the assistant becomes useful as a collaborator on bigger refactors than it would otherwise be safe for. The combination of impact analysis plus convention enforcement plus a shared index is what lets you say "yes, do this" to a multi-file refactor instead of breaking it up into manual chunks.

What it doesn't replace

  • Tests. Static impact analysis tells you what touches the code path. It does not tell you the code is correct after the change. You still need tests.
  • Code review. Impact reports inform reviews; they do not replace them.
  • Domain judgment. The system can tell you which functions reference a column. It cannot tell you whether the rename you're proposing is the right one.

These are not gaps. They are the parts of refactoring that should remain with humans.

A real example

In the pre-release cleanup case study, the assistant used impact analysis to find that an entire controller was querying a table that had been dropped one migration earlier. The grep-based pass found the same issue. But cix's impact view also surfaced which endpoints were affected and which tests would fail — turning a "we should fix this" finding into a clear, actionable cleanup task with a known scope.

That is the value: not just finding the problem, but seeing its shape immediately.
