Agentic chat

Learn about the agentic chat experience, an exclusive chat-based AI agent with enhanced capabilities.

Agentic chat (available in version 6.0) is currently in the Experimental stage for Cody Pro and Enterprise and is supported in the VS Code, JetBrains, and Visual Studio editor extensions and on the Web. Usage may be limited at this stage.

Cody's agentic chat experience is an AI agent that evaluates the context of your request and fetches any additional context it needs (OpenCtx, terminal, etc.) to provide enhanced, context-aware chat capabilities. It extends Cody's functionality by proactively understanding your coding environment and gathering relevant information based on your requests before responding. These features help you get noticeably higher-quality responses.

This agentic chat experience aims to reduce the learning curve associated with traditional coding assistants by minimizing users' need to provide context manually. It achieves this through agentic context retrieval, where the AI autonomously gathers and analyzes context before generating a response.

Capabilities of agentic chat

The agentic chat experience leverages several key capabilities, including:

  • Proactive context gathering: Automatically gathers relevant context from your codebase, project structure, and current task
  • Agentic context reflection: Reviews the gathered context to ensure it is comprehensive and relevant to your query
  • Iterative context improvement: Performs multiple review loops to refine the context and ensure a thorough understanding (see the conceptual sketch after this list)
  • Enhanced response accuracy: Leverages comprehensive context to provide more accurate and relevant responses, reducing the risk of hallucinations
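
Taken together, these capabilities form a gather-reflect-refine loop. Cody's actual implementation is not shown in these docs; the following is only a conceptual TypeScript sketch of how such an agentic retrieval loop can be structured, and every name in it (ContextItem, gatherContext, reflectOnContext, answer, maxIterations) is hypothetical.

```typescript
// Conceptual sketch only: NOT Cody's implementation. All types and function
// names here are hypothetical and exist purely to illustrate the
// gather -> reflect -> refine loop described above.

interface ContextItem {
  source: 'codebase' | 'terminal' | 'web' | 'openctx'
  content: string
}

interface Reflection {
  sufficient: boolean        // is the gathered context enough to answer?
  followUpQueries: string[]  // what to fetch next if it is not
}

async function agenticChat(
  userQuery: string,
  gatherContext: (queries: string[]) => Promise<ContextItem[]>,
  reflectOnContext: (query: string, ctx: ContextItem[]) => Promise<Reflection>,
  answer: (query: string, ctx: ContextItem[]) => Promise<string>,
  maxIterations = 3,
): Promise<string> {
  let context: ContextItem[] = []
  let queries = [userQuery]

  // Iterative context improvement: gather, review, and refine until the
  // context looks sufficient or the iteration budget runs out.
  for (let i = 0; i < maxIterations; i++) {
    context = context.concat(await gatherContext(queries))
    const reflection = await reflectOnContext(userQuery, context)
    if (reflection.sufficient) break
    queries = reflection.followUpQueries
  }

  // Enhanced response accuracy: the final answer is generated from the full,
  // reviewed context rather than from the raw query alone.
  return answer(userQuery, context)
}
```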

What can agentic chat do?

Agentic chat can help you with the following:

Tool usage

It has access to a suite of tools for retrieving relevant context. These tools include:

  • Code Search: Performs code searches
  • Codebase File: Retrieves the full content from a file in your codebase
  • Terminal: Executes shell commands in your terminal
  • Web Browser: Searches the web for live context
  • OpenCtx: Retrieves context from any configured OpenCtx provider

It integrates seamlessly with external services, such as web content retrieval and issue tracking systems, using OpenCtx providers. To learn more, read the OpenCtx docs.
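
As a rough illustration, an OpenCtx provider is a small module that describes itself and returns context items the agent can pull into chat. The sketch below is a hedged approximation: the method names and result shapes follow the OpenCtx provider API as recalled here and should be verified against the OpenCtx docs, and fetchDocs with its example URL is a hypothetical stand-in for a real data source.

```typescript
// Hedged sketch of a custom OpenCtx provider. Verify the method names and
// result shapes against the OpenCtx docs; they are assumptions here.

// Hypothetical data source, included only to keep the sketch self-contained.
async function fetchDocs(
  query: string,
): Promise<{ title: string; url: string; text: string }[]> {
  return [{ title: `Docs matching "${query}"`, url: 'https://example.com/docs', text: '...' }]
}

const provider = {
  // Describes the provider to clients such as Cody.
  meta: () => ({ name: 'Internal API docs' }),

  // Returns context items; the `ai.content` field is what gets passed to the
  // LLM as additional context.
  items: async (params: { query?: string }) => {
    const docs = await fetchDocs(params.query ?? '')
    return docs.map(doc => ({
      title: doc.title,
      url: doc.url,
      ai: { content: doc.text },
    }))
  },
}

export default provider
```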

Terminal access is not supported on the Web. It currently works only with the VS Code, JetBrains, and Visual Studio editor extensions.

Terminal access

Agentic chat can use the CLI Tool to request the execution of shell commands, which enhances its ability to gather context from your terminal. However, keep in mind that any information accessible via your terminal could potentially be shared with the LLM, so avoid requesting information you don't want to share. Here's what you should consider:

  • Requires user consent: Agentic chat will pause and ask for permission each time before executing any shell command
  • Trusted workspaces only: Commands can only be executed within trusted workspaces with a valid shell
  • Potential data sharing: Any terminal-accessible information may be shared with the LLM

Commands are generated by the agent/LLM based on your request. Avoid asking it to execute destructive commands.

Use cases

Agentic chat can assist you with a wide range of tasks, including:

  • Improved response quality: Helps you get better and more accurate responses than other LLMs, making the additional processing time for context gathering a non-issue
  • Error resolution: Automatically identifies error sources and suggests fixes by analyzing error logs
  • Better unit tests: Automatically includes imports and other missing context to generate better unit tests (see the example after this list)
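
As a purely hypothetical example of that last point, assume a TypeScript project tested with Vitest: where a plain prompt might return only a test body, agentic chat's gathered context lets it emit a complete file with the necessary imports. The formatDate function and its import path below are made up for illustration.

```typescript
// Hypothetical generated test. The imports are the kind of "missing context"
// agentic chat can fill in automatically; formatDate and '../src/utils/date'
// are illustrative only.
import { describe, expect, it } from 'vitest'

import { formatDate } from '../src/utils/date'

describe('formatDate', () => {
  it('formats an ISO timestamp as YYYY-MM-DD', () => {
    expect(formatDate(new Date('2025-01-31T12:00:00Z'))).toBe('2025-01-31')
  })

  it('throws on an invalid date', () => {
    expect(() => formatDate(new Date('not a date'))).toThrow()
  })
})
```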

Enable agentic chat

Getting agentic chat access for Pro users

Pro users can find the agentic chat option in the LLM selector drop-down.

[Screenshot: the agentic chat option in the LLM selector drop-down]

Getting agentic chat access for Enterprise customers

Enterprise customers must opt in to access the agentic chat feature (reach out to your account team for access).

For the experimental release, agentic chat is specifically limited to using Claude Haiku for the reflection steps and Claude Sonnet for the final response to provide a good balance between quality and latency. Therefore, your enterprise instance must have access to both Claude Sonnet and Claude Haiku to use agentic chat. We use the latest versions of these models, and can fall back to older versions when necessary. These models may be changed during the experimental phase to optimize for quality and/or latency.

Additionally, enterprise users need to upgrade their supported client (VS Code, JetBrains, or Visual Studio) to the latest version of the plugin and enable the following feature flags on their Sourcegraph instance:

  • agentic-chat-experimental to get access to the feature
  • agentic-chat-cli-tool-experimental to allow terminal access
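
These flags are created on the Sourcegraph instance by a site admin. If you prefer to script this rather than use the site admin UI, the sketch below calls the instance's GraphQL API; the createFeatureFlag mutation and its arguments are stated as assumptions and should be confirmed in your instance's API console, and the endpoint URL and access token are placeholders.

```typescript
// Hedged sketch: create boolean feature flags via the Sourcegraph GraphQL API.
// The createFeatureFlag mutation and its arguments are assumptions; confirm
// them against your instance before relying on this. SRC_ENDPOINT and the
// access token are placeholders.
const SRC_ENDPOINT = 'https://sourcegraph.example.com'
const SRC_ACCESS_TOKEN = process.env.SRC_ACCESS_TOKEN ?? ''

async function enableFlag(name: string): Promise<void> {
  const res = await fetch(`${SRC_ENDPOINT}/.api/graphql`, {
    method: 'POST',
    headers: {
      Authorization: `token ${SRC_ACCESS_TOKEN}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      query: `mutation CreateFeatureFlag($name: String!, $value: Boolean!) {
        createFeatureFlag(name: $name, value: $value) { __typename }
      }`,
      variables: { name, value: true },
    }),
  })
  if (!res.ok) throw new Error(`Failed to create flag ${name}: ${res.status}`)
}

await enableFlag('agentic-chat-experimental')
await enableFlag('agentic-chat-cli-tool-experimental')
```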