Today, we announce the general availability of Cody 1.0, a new AI coding assistant that uses intelligent code context to answer technical questions, generate code, and suggest completions in your editor.
Unlike traditional AI coding tools, which rely only on simple local context, Cody searches and navigates your code to integrate relevant context into its responses, much as a savvy human developer would (but of course at robot speeds).
Cody's context engine also differs from agent-based context fetchers in some other assistants, which employ a sequence of decisions orchestrated by a language model. The issue we've found with this approach is that it compounds the latency and randomness of multiple serial inference requests, leading to poor and unreliable context quality.
The Cody context engine makes use of the institutional knowledge we've acquired over the past decade at Sourcegraph building fast and deep code context for developers. Indeed, the name "Sourcegraph" is a reference to our knowledge graph of code, which provides crucial context for both developers and AI code assistants alike. We'll cover more of the technical details of our approach below, but first, an overview of Cody's features.
Cody is available as an editor extension for VS Code (GA), JetBrains (beta), and Neovim (experimental). It provides the following in your editor:
Code completions as you type using a context-enhanced open-source LLM (StarCoder)
Context-aware chat that provides the option of using GPT-4 Turbo, Claude 2, GPT-3.5 Turbo, Claude Instant... and now Mixtral-8x7B!
Doc and unit test generation, along with AI quick fixes for common coding errors
AI-enhanced natural language code search
Cody's chat can also be enabled for use in Code Search.
With this GA announcement, Cody is officially ready for both personal and professional coding. Cody is available in two tiers: Cody Free and Cody Pro. Cody Free is identical to Cody Pro but has rate limits on use. Cody Pro will be free through the holidays until February 2024, after which it will be $9/month. A plan with enterprise-level features and context will be coming soon.
Start using Cody for free today. If you’re already using Cody, make sure to update to the latest extension version to get the latest features.
Context-enhanced code completions
Cody provides fast single and multi-line completions.
Cody's code completions differ from standard AI code completions through the use of an open source LLM enhanced with Cody's context engine. Specifically, we've found that the type of context that helps most with completions is graph context. Graph context takes advantage of Sourcegraph’s expertise in parsing code and producing a graph that captures a semantic understanding of your code. Pulling this semantic code graph into context reduces the rate of common LLM hallucinations like type errors and imaginary function names.
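As an illustrative sketch of the idea (the names and data shapes here are our own assumptions, not Cody's actual internals), graph context can be thought of as resolving the symbols referenced near the cursor to their definition sites and prepending those definitions to the completion prompt, so the LLM sees real types instead of inventing them:

```typescript
// Hypothetical sketch of graph-based context assembly; all names are
// illustrative assumptions, not Cody's real implementation.
interface SymbolDefinition {
  name: string;
  snippet: string; // source text of the definition site
}

// A toy "code graph": symbol name -> definition.
const codeGraph = new Map<string, SymbolDefinition>([
  [
    "Person",
    { name: "Person", snippet: "interface Person { name: string; age: number }" },
  ],
]);

// Look up definitions for symbols referenced near the cursor and
// prepend them to the local context before sending it to the LLM.
function buildPrompt(referencedSymbols: string[], localContext: string): string {
  const defs = referencedSymbols
    .map((s) => codeGraph.get(s)?.snippet)
    .filter((s): s is string => s !== undefined);
  return [...defs, localContext].join("\n");
}
```

The benefit is that a completion request for code using `Person` now carries the actual field names and types in its prompt, which directly targets the hallucination modes mentioned above.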
By using the open source StarCoder LLM, we've been able to keep completion latency very low. It also gives us access to signals such as token-level log probabilities, which we can combine with end-user signals like completion acceptance.
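One way token-level log probabilities can be used (a hedged sketch; the threshold and decision rule are our assumptions, not Cody's published logic) is to suppress low-confidence completions before they are ever shown to the user:

```typescript
// Hypothetical confidence filter based on token-level log probabilities.
// The threshold value is illustrative, not a documented Cody setting.

// Average log probability across the generated tokens.
function meanLogProb(tokenLogProbs: number[]): number {
  return tokenLogProbs.reduce((a, b) => a + b, 0) / tokenLogProbs.length;
}

// Only surface completions whose average token confidence clears a bar.
function shouldShowCompletion(tokenLogProbs: number[], threshold = -1.5): boolean {
  if (tokenLogProbs.length === 0) return false;
  return meanLogProb(tokenLogProbs) >= threshold;
}
```

Filtering like this trades a little recall for precision: fewer suggestions appear, but the ones that do are likelier to be accepted, which is one lever behind an acceptance-rate metric.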
The combination of these qualities allows us to hit a Completion Acceptance Rate of 30% or higher, depending on the scenario. And we’re just getting started. Upcoming versions of Cody will integrate more of Sourcegraph's universal code graph into the context engine. Users of Cody will see these improvements in the coming weeks and months.
As just one example of the work we’re doing to continue to improve quality, Cody will soon use deeper graph context to make autocomplete suggestions based on symbols defined elsewhere in your codebase. Here you can see the results of our internal testing of Cody (with deeper graph context) next to Copilot. Cody is able to reference the JavaScript Person interface to determine what variables to pass into the personMessage function.
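Concretely, the scenario above might look like the following (the exact fields of `Person` and the signature of `personMessage` are our assumptions for illustration, since the original test code isn't shown):

```typescript
// Assumed shape of the Person interface referenced in the example;
// the real internal test code may differ.
interface Person {
  name: string;
  age: number;
}

// With graph context, the assistant can see Person's definition
// (possibly in another file) and suggest passing p.name and p.age,
// rather than hallucinating fields that don't exist.
function personMessage(name: string, age: number): string {
  return `${name} is ${age} years old`;
}

const p: Person = { name: "Ada", age: 36 };
const msg = personMessage(p.name, p.age);
```

The key point is that `Person` may be defined far from the call site; only an assistant with cross-file graph context can reliably pick the right arguments.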