Sourcegraph Model Provider

Learn how the default Sourcegraph LLM model provider enables AI features for Sourcegraph Enterprise customers.

The Sourcegraph Model Provider is the default and recommended way to configure AI features like Deep Search and Cody. Through this service, we provide zero-configuration access to state-of-the-art models from various LLM providers, including Anthropic and OpenAI, with enterprise-grade privacy and security.

The Sourcegraph Model Provider is also referred to as "Cody Gateway".

Using the Sourcegraph Model Provider

If you are a Sourcegraph Cloud customer, the Sourcegraph Model Provider is automatically configured by default. Other customers can verify their Enterprise subscription has access by confirming with their account manager.

To enable inference provided by the Sourcegraph Model Provider on your Sourcegraph Enterprise instance, ensure your license key is present and the model provider is set to "sourcegraph" in your site configuration:

```jsonc
{
  // Optional: once the license key is added, default configuration
  // and authentication are automatically applied.
  "licenseKey": "<...>",
  "modelConfiguration": {
    "sourcegraph": {}
  }
}
```
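If you want to pin specific default models while still using the Sourcegraph provider, the site configuration also accepts a `defaultModels` block. The sketch below is illustrative; the model references shown are assumptions, so verify the exact identifiers supported by your instance:

```jsonc
{
  "licenseKey": "<...>",
  "modelConfiguration": {
    "sourcegraph": {},
    // Illustrative only: pin a default model per use case.
    // Model references use the "provider::api-version::model" format;
    // check which models your instance actually supports.
    "defaultModels": {
      "chat": "anthropic::2023-06-01::claude-3.5-sonnet",
      "fastChat": "anthropic::2023-06-01::claude-3-haiku",
      "codeCompletion": "fireworks::v1::deepseek-coder-v2-lite-base"
    }
  }
}
```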

This feature is backed by a service hosted at cody-gateway.sourcegraph.com. To use the Sourcegraph Model Provider, your Sourcegraph instance must be allowed to connect to the service at this domain.

After setting up model configuration, you may need to take additional steps to enable Deep Search or Cody.
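As one concrete example of such an additional step, Cody can be switched on in the site configuration. This is a minimal sketch; Deep Search may require its own enablement steps, which are documented separately:

```jsonc
{
  // Enables Cody for the instance once model configuration is in place.
  "cody.enabled": true
}
```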

Rate limits and quotas

For Sourcegraph Enterprise instances, rate limits and quotas are tied to your Enterprise license. All successful LLM requests count toward your rate limits; unsuccessful requests are not counted as usage.

In addition to the above, we may throttle concurrent requests per Sourcegraph Enterprise subscription to prevent excessive burst consumption.

Reach out to your account manager for details on whether Sourcegraph Model Provider access is available to you and how you can gain access to higher rate limits, quotas, and/or model options.

Privacy and security

Sourcegraph's Enterprise AI Terms of Use apply to all usage of the Sourcegraph Model Provider:

  • Input and output ownership: you own all inputs (queries) and outputs (generated code/text) from AI features.
  • Zero retention by LLM partners: partner LLMs do not retain any input or output data beyond the time needed to generate a response. Your enterprise code is not used to train LLM models unless you explicitly enable fine-tuning features.
  • Data collection: customer content (inputs, outputs, context) is collected solely to provide the service, not for product improvement. Only rate limit consumption and high-level diagnostic data (error codes, numeric parameters) are tracked.
  • Security: all data is processed according to Sourcegraph's Security Exhibit.