Improved context fetching from @-mentioned repos for Cody Enterprise Cloud
AI code assistants are only as good as the context they’re given, and Cody uses the context of your codebase as part of its prompt construction to deliver high-quality responses to your chats (you can learn more about how Cody understands your codebase here). Cody lets you @-mention specific repos, directories, and files in your prompts as context, giving you greater precision and increasing the likelihood of higher-quality responses. In this release for Enterprise Cloud customers, we’ve made improvements to how Cody fetches and handles context from @-mentioned repos.
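To make the mechanics concrete, here is a minimal sketch of how @-mentions could be pulled out of a chat message so they can scope context retrieval. The `parseMentions` function, the regex, and the `ContextItem` shape are hypothetical illustrations for this post, not Cody's internal API.

```typescript
// Hypothetical shapes for illustration only; not Cody's internal API.
interface ContextItem {
  repo: string;    // e.g. "github.com/sourcegraph/cody"
  path: string;    // file the snippet came from
  snippet: string; // the code passed to the LLM
  score: number;   // relevance score assigned by retrieval
}

// Sketch: extract @-mentions from a chat message so they can scope retrieval.
function parseMentions(message: string): string[] {
  // Matches tokens like "@github.com/org/repo" or "@chatcontroller.ts".
  const mentionPattern = /@([\w./-]+)/g;
  return [...message.matchAll(mentionPattern)].map((m) => m[1]);
}

const mentions = parseMentions(
  "What does @chatcontroller.ts do in @github.com/sourcegraph/cody?"
);
console.log(mentions); // ["chatcontroller.ts", "github.com/sourcegraph/cody"]
```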
High-signal code snippets and files are now ranked higher and passed to the LLM earlier
When you ask Cody a question and @-mention a repo, Cody searches the repo to determine which code snippets or files should be passed to the LLM as context. This output can be seen in the context section of a chat in Cody, and it is displayed in the order it is passed to the model.
In the example above, we want to understand what chatcontroller does, but the key code snippets and files from chatcontroller.ts are buried below less relevant markdown files. We’ve implemented a proprietary re-ranker that sorts relevant context items higher, so they reach the LLM earlier and produce a higher-quality response.
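As a rough illustration of what re-ranking does to the context list, the sketch below scores and re-sorts context items before they are sent to the model. The scoring heuristic and its weight are hypothetical placeholders; Cody's actual re-ranker is proprietary.

```typescript
// Hypothetical re-ranking pass; Cody's actual re-ranker is proprietary.
interface RankedItem {
  path: string;
  snippet: string;
  retrievalScore: number; // score from the initial search
}

// Assumed scorer: boost items whose path matches terms from the question,
// so chatcontroller.ts outranks loosely related markdown files.
function rerank(items: RankedItem[], question: string): RankedItem[] {
  // Drop very short tokens ("do", "a") so they can't cause spurious matches.
  const terms = question.toLowerCase().split(/\W+/).filter((t) => t.length > 2);
  const score = (item: RankedItem): number => {
    const pathBoost = terms.some((t) => item.path.toLowerCase().includes(t)) ? 1 : 0;
    return item.retrievalScore + pathBoost;
  };
  // Highest-scoring items come first, so they reach the LLM earlier.
  return [...items].sort((a, b) => score(b) - score(a));
}

const ranked = rerank(
  [
    { path: "docs/README.md", snippet: "...", retrievalScore: 0.6 },
    { path: "src/chat/chatcontroller.ts", snippet: "...", retrievalScore: 0.5 },
  ],
  "What does chatcontroller do?"
);
console.log(ranked.map((i) => i.path));
// ["src/chat/chatcontroller.ts", "docs/README.md"]
```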
Support for more context from a file
In addition to improved ranking of relevant files as context for a prompt, Cody now supports fetching multiple snippets from an individual file. Previously, context was limited to one snippet per file; this change should improve the precision and quality of the context passed to the LLM.
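Below is a hedged sketch of the difference this makes when assembling context: instead of keeping only the single best snippet per file, retrieval can keep several. The grouping helper and the per-file limit are illustrative assumptions, not Cody's configuration.

```typescript
// Hypothetical snippet selection; the per-file limit here is an assumption.
interface Snippet {
  path: string;
  text: string;
  score: number;
}

// Keep the top N snippets from each file instead of just one.
function selectSnippets(snippets: Snippet[], perFileLimit: number): Snippet[] {
  const byFile = new Map<string, Snippet[]>();
  for (const s of snippets) {
    const group = byFile.get(s.path) ?? [];
    group.push(s);
    byFile.set(s.path, group);
  }
  return [...byFile.values()].flatMap((group) =>
    group.sort((a, b) => b.score - a.score).slice(0, perFileLimit)
  );
}

const results: Snippet[] = [
  { path: "src/chat/chatcontroller.ts", text: "class ChatController {...}", score: 0.9 },
  { path: "src/chat/chatcontroller.ts", text: "private sendMessage() {...}", score: 0.8 },
  { path: "docs/README.md", text: "# Cody", score: 0.4 },
];
// Previously equivalent to a per-file limit of 1; now several high-scoring
// regions of the same file can be passed as context.
console.log(selectSnippets(results, 3).length); // 3
```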
These changes are currently available only to Cody Enterprise Cloud customers, and we will share updates on future availability for Cody Enterprise customers with self-hosted deployments.