Cody Context
Understand how context helps Cody write more accurate code.
Context refers to any additional information provided to help Cody understand and write code relevant to your codebase. While LLMs have extensive knowledge, they lack context about an individual or organization's codebase. Cody's ability to provide context-aware code responses is what sets it apart.
Why is context important?
Context, and the methods used to retrieve it, are crucial to the quality and accuracy of AI. Cody relies on its ability to retrieve context from your codebase to provide reliable and accurate answers to developers' questions. When Cody has access to the most relevant context about your codebase, it can:
- Answer questions about your codebase
- Produce unit tests and docs
- Generate code that aligns with the libraries and style of your codebase
- Significantly reduce the work required to translate LLM-provided answers into actionable value for your users
Context sources
Cody uses a variety of sources to retrieve context relevant to the user input. These sources include:
- Keyword Search: A traditional text-based search method that finds keywords matching the user input. When needed, queries are automatically rewritten to include more relevant terms.
- Sourcegraph Search: The powerful native Sourcegraph Search API. Queries are sent to the Sourcegraph instance (managed or self-hosted), the search is performed by the Sourcegraph search stack, and the relevant documents are returned to your IDE for use by the LLM.
- Code Graph: Analyzing the structure of the code, Cody examines how components are interconnected and used, finding context based on code elements' relationships.
All these methods collectively ensure Cody's ability to provide relevant and high-quality context to enhance your coding experience.
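To give a sense of what the Sourcegraph Search path looks like, here is a minimal sketch of querying the Sourcegraph GraphQL Search API directly from TypeScript. Cody performs this kind of retrieval for you automatically; the instance URL, access token, and repository name below are placeholders, and the fields selected are only a small subset of what the API returns.

```typescript
// Sketch: query the Sourcegraph Search API directly (Cody does this for you).
// SOURCEGRAPH_URL and the access token are placeholders for your own instance.
const SOURCEGRAPH_URL = "https://sourcegraph.example.com";
const ACCESS_TOKEN = process.env.SRC_ACCESS_TOKEN; // created under user settings > access tokens

async function searchCode(query: string): Promise<void> {
  const graphqlQuery = `
    query CodeSearch($query: String!) {
      search(query: $query, version: V3) {
        results {
          matchCount
          results {
            ... on FileMatch {
              repository { name }
              file { path }
            }
          }
        }
      }
    }`;

  const response = await fetch(`${SOURCEGRAPH_URL}/.api/graphql`, {
    method: "POST",
    headers: {
      Authorization: `token ${ACCESS_TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ query: graphqlQuery, variables: { query } }),
  });

  console.log(JSON.stringify(await response.json(), null, 2));
}

// Example: find matching files in a hypothetical repository.
searchCode('repo:^github\\.com/example/app$ refreshSessionToken lang:typescript');
```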
Cody context fetching features
Cody uses @-mentions to retrieve context from your codebase. Inside the chat window, click the @ icon to select a context source, or type @ to open the context picker.
Based on your Cody tier, you can @-mention the following:
Tier | Client | Files | Symbols | Web URLs | Remote Files/Directories | OpenCtx |
---|---|---|---|---|---|---|
Free/Pro | VS Code | ✅ | ✅ | ✅ | ❌ | ✅ |
Free/Pro | JetBrains | ✅ | ❌ | ✅ | ❌ | ❌ |
Free/Pro | Visual Studio | ✅ | ✅ | ✅ | ❌ | ❌ |
Free/Pro | Cody Web | ✅ | ✅ | ✅ | ❌ | ❌ |
Enterprise | VS Code | ✅ | ✅ | ✅ | ✅ | ✅ |
Enterprise | JetBrains | ✅ | ❌ | ✅ | ✅ | ❌ |
Enterprise | Visual Studio | ✅ | ✅ | ✅ | ✅ | ✅ |
Enterprise | Cody Web | ✅ | ✅ | ✅ | ✅ | ❌ |
Repo-based context
Cody supports repo-based context. You can link single or multiple repositories based on your tier. Here's a detailed breakdown of the number of repositories supported by each client for Cody Free, Pro, and Enterprise users:
Tier | Client | Repositories |
---|---|---|
Free/Pro | VS Code | 1 |
Free/Pro | JetBrains | 1 |
Free/Pro | Visual Studio | 1 |
Enterprise | Cody Web | Multi |
Enterprise | VS Code | Multi |
Enterprise | JetBrains | Multi |
Enterprise | Visual Studio | Multi |
How does context work with Cody prompts?
Cody works in conjunction with an LLM to provide codebase-aware answers. The LLM is a machine learning model that generates text in response to natural language prompts. However, the LLM doesn't inherently understand your codebase or your specific coding requirements. Cody bridges this gap by generating context-aware prompts.
A typical prompt has three parts (see the sketch after this list):
- Prefix: An optional description of the desired output, often derived from predefined Prompts that specify tasks the LLM can perform
- User input: The information provided, including your code query or request
- Context: Additional information that helps the LLM provide a relevant answer based on your specific codebase
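To make the structure concrete, here is a minimal sketch of how those three parts could be combined into a single prompt string. This is an illustration of the concept, not Cody's actual prompt-building code; the function, interface, and example file name are hypothetical.

```typescript
// Sketch: assembling prefix, user input, and context into one prompt.
// The names and format below are illustrative, not Cody's implementation.
interface ContextSnippet {
  filePath: string;
  content: string;
}

function buildPrompt(
  prefix: string, // optional task description, e.g. from a predefined Prompt
  userInput: string, // the developer's question or request
  contextSnippets: ContextSnippet[] // code retrieved from the codebase
): string {
  const contextSection = contextSnippets
    .map((s) => `File: ${s.filePath}\n${s.content}`)
    .join("\n\n");

  return [
    prefix,
    `Question: ${userInput}`,
    `Relevant code from the codebase:\n${contextSection}`,
  ].join("\n\n");
}

// Example usage with hypothetical content:
const prompt = buildPrompt(
  "Answer using only the provided code context.",
  "How do we refresh expired session tokens?",
  [{ filePath: "src/auth/session.ts", content: "export function refreshSessionToken() { /* ... */ }" }]
);
console.log(prompt);
```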
Impact of context: LLM vs. Cody
When the same prompt is sent to a standard LLM, the response may lack specifics about your codebase. In contrast, Cody augments the prompt with context from relevant code snippets, making the answer far more specific to your codebase. This difference underscores the importance of context in Cody's functionality.
Manage Cody context window size
While Cody aims to provide maximum context for each prompt, there are limits to ensure efficiency. For more details, see our docs on token limits.
Site administrators can update the maximum context window size to meet their specific requirements. While using fewer tokens is a cost-saving measure, it can also cause errors. For example, using an edit or describe prompt with too small a context window might give you errors like "You've selected too much code".
Using more tokens usually produces higher-quality responses but also increases response times. In general, it's recommended not to modify the token limit. However, if needed, you can set it to a value that should not compromise quality or generate errors.
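For an intuition about these limits, the sketch below shows a rough, client-side way to estimate whether a selection will fit before sending it as context. The characters-per-token heuristic and the limit value are assumptions for illustration only; the real limits are model-specific and documented in the token limits docs.

```typescript
// Sketch: rough token estimation for a code selection.
// The 4-characters-per-token heuristic is a rule of thumb, not a real tokenizer,
// and CONTEXT_TOKEN_LIMIT is a placeholder; actual limits vary by model and tier.
const CONTEXT_TOKEN_LIMIT = 7_000;

function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

function fitsInContextWindow(selection: string): boolean {
  const estimated = estimateTokens(selection);
  if (estimated > CONTEXT_TOKEN_LIMIT) {
    console.warn(
      `Selection is ~${estimated} tokens, over the ${CONTEXT_TOKEN_LIMIT}-token limit; ` +
        `expect an error like "You've selected too much code".`
    );
    return false;
  }
  return true;
}

// Example: a 60,000-character selection is roughly 15,000 tokens and would be too large.
fitsInContextWindow("x".repeat(60_000));
```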