Project orientation
The first turn of any AI-assisted coding session sets the tone for everything that follows. If the assistant doesn't understand the basics — what framework you're using, where the code lives, what the major directories are — it spends the rest of the session compensating with file reads, guesses, and clarifying questions.
Project orientation collapses that first-turn discovery into a single query.
What it does
A single call returns a structured snapshot of the project:
- Stack. The primary language, framework, ORM, frontend tooling, and CSS approach. Detected from migrations, package files, and entry-point patterns.
- Entry points. Where the application starts running. Web servers, CLI commands, background workers, runtime bootstraps.
- Request surfaces. A summary of HTTP routes (with samples), GraphQL handlers if present, and any other request entry points.
- Top directories. The major folders, ranked by indexed file count, so the assistant knows where the substance lives versus where the noise is.
- Schema overview. A list of indexed database tables.
- Hot symbols. The most-imported and most-referenced functions and classes, so the assistant has a sense of what's load-bearing.
- Index health. What was parsed, what wasn't, what languages are unsupported, what files had errors.
- Confidence signals. Whether the orientation is high, medium, or partial — and the specific reasons.
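As a rough mental model of that snapshot, here is a hypothetical Python sketch. The field names and sample values are illustrative assumptions, not cix's actual response schema:

```python
from dataclasses import dataclass, field

# Illustrative only: these field names are assumptions,
# not cix's actual orientation schema.
@dataclass
class Orientation:
    stack: dict                   # e.g. language, framework, ORM
    entry_points: list            # where the application starts running
    top_directories: list         # (path, indexed file count), ranked
    hot_symbols: list             # most-imported / most-referenced names
    confidence: str               # "high", "medium", or "partial"
    confidence_reasons: list = field(default_factory=list)

# A plausible snapshot for a small Flask project (hypothetical values).
snapshot = Orientation(
    stack={"language": "Python", "framework": "Flask", "orm": "SQLAlchemy"},
    entry_points=["app.py (web server)", "worker.py (background worker)"],
    top_directories=[("src/", 212), ("tests/", 148)],
    hot_symbols=["Database.session", "ApiClient.request"],
    confidence="high",
    confidence_reasons=["package files and migrations both detected"],
)

print(snapshot.confidence)
```

The point of bundling all of this into one object is that the assistant gets the whole picture in a single call, instead of reconstructing it field by field.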
This is the answer to "tell me about this project," delivered all at once in structured form.
Why this matters
Onboarding is a real cost. When a new developer joins a project, they spend their first day or two figuring out the layout. AI assistants do this implicitly every session — and pay for it in tokens, latency, and irrelevant file reads.
The assistant's first guesses set the tone. If the orientation step misidentifies the framework, every subsequent decision is built on a wrong foundation. Get the orientation right, and the rest of the session is dramatically more useful.
You see what the system sees. The orientation output is also a diagnostic. If the assistant thinks your repo is a Python project but it's actually a TypeScript monorepo, the orientation will say so — and you know to investigate why before wasting time.
What it surfaces that grep cannot
- Framework detection based on actual configuration files and runtime signals, not directory name guesses.
- Entry point classification — distinguishing a runtime bootstrap from a test harness from a build helper.
- Hot symbol ranking — knowing which Service class is the central one of forty.
- Index health attribution — which files failed to parse, which languages aren't supported, where the gaps are.
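To make the hot-symbol idea concrete, here is a toy sketch that ranks class names by how often they are referenced across a few files. cix's real ranking also distinguishes imports from call sites; this only approximates the concept, and the file contents are invented:

```python
from collections import Counter
import re

# Invented example sources; in practice these come from the index.
files = {
    "billing.py": "from core import PaymentService\nPaymentService().charge()",
    "orders.py":  "from core import PaymentService, AuditService\nPaymentService()",
    "admin.py":   "from core import AuditService\nAuditService().log()",
}

# Count every reference to a *Service class across all files.
refs = Counter()
for source in files.values():
    for name in re.findall(r"\b[A-Z]\w*Service\b", source):
        refs[name] += 1

# The most-referenced Service class is the load-bearing one.
print(refs.most_common(1))  # [('PaymentService', 4)]
```

A grep for `Service` would return every one of these lines with equal weight; the ranking is what tells you which class actually matters.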
A grep tells you what files exist. Orientation tells you what the project is.
When the assistant uses it
Every session, on the first turn. The system instructions encourage this explicitly. The assistant calls orientation once, gets the lay of the land, and uses what it learned to ground every subsequent question.
It is also worth a manual call when:
- You join a new project and want a one-page summary.
- You suspect the assistant is operating with stale or wrong context.
- You're evaluating whether cix is correctly identifying your project's stack.
Honest behavior
Orientation surfaces whatever the index can confidently report — and clearly flags what it cannot. If your project is in a language without a parser, the orientation will say so. If most files failed to parse, it will tell you. If the framework detector isn't sure, it returns a confidence note rather than a guess.
This means the orientation is sometimes blunt. On a project where cix doesn't have full support, the orientation might say "language detection: low confidence; only 2% of files parsed; the schema view is empty; here are the known gaps." That is the right answer in that case. Better to know than to be misled.
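A heuristic of this shape could be sketched as follows. The thresholds, function name, and wording are assumptions for illustration, not cix's actual rules:

```python
# Hypothetical confidence heuristic: classify orientation confidence
# from parse coverage alone. Thresholds are assumed, not cix's real ones.
def orientation_confidence(parsed_files: int, total_files: int) -> tuple[str, str]:
    if total_files == 0:
        return "partial", "no indexable files found"
    coverage = parsed_files / total_files
    if coverage >= 0.9:
        return "high", f"{coverage:.0%} of files parsed"
    if coverage >= 0.5:
        return "medium", f"{coverage:.0%} of files parsed"
    # Low coverage: report the gap instead of guessing.
    return "partial", f"only {coverage:.0%} of files parsed; expect gaps"

print(orientation_confidence(2, 100))
print(orientation_confidence(95, 100))
```

The key design choice is returning the reason alongside the label, so a "partial" verdict tells you why it is partial rather than silently degrading.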
In a 25-repo benchmark
When measured across 25 real-world projects spanning Python, Go, Java, C#, TypeScript, PHP, and Ruby, orientation gave high-confidence, accurate readings on standard backend stacks (Flask, FastAPI, Django, Spring Boot, ASP.NET Core, Laravel) and medium-confidence readings on large or unconventional projects (Apache Airflow, Loki, NetBox).
It was honest about its weak spots: on Lua-heavy projects (Kong), it labeled the language detection as low-confidence and explicitly attributed the gap to a missing parser. The orientation didn't pretend to know what it didn't.
See the 25-repo benchmark for the full breakdown.
Related
- How it works — the architecture orientation queries against.
- Feature: find before you build — the natural follow-up question after orientation.
- Limitations — when orientation underperforms.