Supported LLMs

Site admins can configure which models Cody uses via the `modelConfiguration` setting in the site configuration of a Sourcegraph Enterprise instance. See Model Configuration for more details.

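As a rough orientation, the top-level shape of a `modelConfiguration` block looks like the sketch below. This is a minimal illustration, assuming a Cody Gateway-backed Enterprise instance; the exact fields and accepted values are documented on the Model Configuration page.

```jsonc
{
  "cody.enabled": true,
  "modelConfiguration": {
    // Use the Sourcegraph-supplied model catalog (served via Cody Gateway).
    "sourcegraph": {},
    // Optional: add or adjust providers and models beyond the built-in catalog.
    "providerOverrides": [],
    "modelOverrides": [],
    // Optional: pin which model each Cody feature uses by default.
    "defaultModels": {}
  }
}
```
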
Chat and Prompts

Cody supports a variety of cutting-edge large language models for use in chat and prompts, allowing you to select the best model for your use case.

| Provider  | Model                           | Status            | Vision Support |
|-----------|---------------------------------|-------------------|----------------|
| Anthropic | Claude Sonnet 4.5               |                   |                |
| Anthropic | Claude Sonnet 4.5 with Thinking |                   |                |
| Anthropic | Claude Opus 4.5                 |                   |                |
| Anthropic | Claude Opus 4.5 with Thinking   |                   |                |
| Anthropic | Claude Haiku 4.5                |                   |                |
| Anthropic | Claude Haiku 4.5 with Thinking  |                   |                |
| Google    | Gemini 2.0 Flash                |                   |                |
| Google    | Gemini 2.5 Flash                |                   |                |
| Google    | Gemini 2.0 Flash-Lite           |                   |                |
| Google    | Gemini 2.5 Pro                  |                   |                |
| Google    | Gemini 3 Pro                    | ✅ (experimental) |                |
| Google    | Gemini 3 Flash                  | ✅ (experimental) |                |
| OpenAI    | GPT-5.1                         |                   |                |
| OpenAI    | GPT-5                           |                   |                |
| OpenAI    | GPT-5 mini                      |                   |                |
| OpenAI    | GPT-5 nano                      |                   |                |
| OpenAI    | GPT-4o                          |                   |                |
| OpenAI    | GPT-4.1                         |                   |                |
| OpenAI    | GPT-4o-mini                     |                   |                |
| OpenAI    | GPT-4.1-mini                    |                   |                |
| OpenAI    | GPT-4.1-nano                    |                   |                |
| OpenAI    | o3                              |                   |                |
| OpenAI    | o4-mini                         |                   |                |

While Gemini models support vision capabilities, Cody clients do not currently support image uploads to Gemini models.

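If a chat model you want to offer is not in the built-in catalog, it can typically be declared through `modelOverrides` in `modelConfiguration`. The entry below is a hedged sketch only: the field names follow the Model Configuration documentation, but the model reference, display name, category, status, and context-window numbers are placeholders, not a supported-model listing.

```jsonc
{
  "modelConfiguration": {
    "modelOverrides": [
      {
        // Model reference in "<provider>::<api-version>::<model-id>" form (placeholder values).
        "modelRef": "anthropic::2024-10-22::claude-sonnet-4-5",
        "displayName": "Claude Sonnet 4.5",
        "modelName": "claude-sonnet-4-5",
        // Which Cody features may use this model.
        "capabilities": ["chat"],
        "category": "balanced",
        "status": "stable",
        "contextWindow": {
          "maxInputTokens": 45000,
          "maxOutputTokens": 4000
        }
      }
    ]
  }
}
```
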
Autocomplete

Cody uses a set of models for autocomplete that are suited to its low-latency use case.

| Provider     | Model                             | Status            |
|--------------|-----------------------------------|-------------------|
| Anthropic    | Claude Haiku 4.5                  |                   |
| Anthropic    | Claude Haiku 4.5 with Thinking    |                   |
| Fireworks.ai | StarCoder                         |                   |
| Fireworks.ai | DeepSeek V2 Lite Base             |                   |
| Fireworks.ai | AutoEdits Fireworks Default       | ✅ (experimental) |
| Fireworks.ai | Autoedits DeepSeek Coder V2       | ✅ (beta)         |
| Fireworks.ai | Autoedits Long Suggestion Default | ✅ (beta)         |
| Fireworks.ai | Autoedits Long Suggestion V4 Warm Start SFT | ✅ (beta)         |
| Mistral      | AutoEdits Mixtral V2                        | ✅ (experimental) |
| OpenAI       | GPT-4.1-nano                                |                   |

Smart Apply

| Provider     | Model                    | Status            |
|--------------|--------------------------|-------------------|
| Fireworks.ai | Smart Apply Qwen Default |                   |
| Fireworks.ai | Smart Apply Qwen 32B V1  | ✅ (experimental) |

Default Models

The following models are used by default for each feature when no specific model is configured:

| Feature      | Default Model         |
|--------------|-----------------------|
| Chat         | Claude 3.7 Sonnet     |
| Autocomplete | DeepSeek V2 Lite Base |
| Fast Chat    | Claude 3.5 Haiku      |
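
If you want different defaults, they can be pinned per feature under `modelConfiguration.defaultModels`. The snippet below is a minimal sketch assuming Sourcegraph-supplied models; the model reference strings are illustrative and should be taken from your instance's model catalog.

```jsonc
{
  "modelConfiguration": {
    "defaultModels": {
      // "chat" backs chat and prompts, "fastChat" the low-latency chat path,
      // and "codeCompletion" backs autocomplete. Model refs are illustrative.
      "chat": "anthropic::2024-10-22::claude-3-7-sonnet-latest",
      "fastChat": "anthropic::2024-10-22::claude-3-5-haiku-latest",
      "codeCompletion": "fireworks::v1::deepseek-coder-v2-lite-base"
    }
  }
}
```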