This page displays the release plan of all Sourcegraph versions, including deprecation schedules, links to relevant release notes, and installation links.
Sourcegraph regularly releases new features and bug fixes via feature and patch releases. We support the two most recent major releases of Sourcegraph ([more details here](https://www.notion.so/Sourcegraph-Releases-eee2a5384b0a4555adb51b439ddde35f?pvs=21)). See the corresponding release notes for more information on each release. For more information about the release process, please see our [Sourcegraph releases process documentation](https://www.notion.so/Sourcegraph-Releases-eee2a5384b0a4555adb51b439ddde35f?pvs=21).

## Supported Releases

Currently supported versions of Sourcegraph:

| **Release** | **General Availability Date** | **Supported** | **Release Notes** | **Install** |
|--------------|-------------------------------|---------------|-------------------|-------------|
| 6.1 Patch 4 | March 2025 | ✅ | [Notes](https://sourcegraph.com/docs/technical-changelog#v614020) | [Install](https://sourcegraph.com/docs/admin/deploy) |
| 6.1 Patch 3 | March 2025 | ✅ | [Notes](https://sourcegraph.com/docs/technical-changelog#v612889) | [Install](https://sourcegraph.com/docs/admin/deploy) |
| 6.1 Patch 2 | February 2025 | ✅ | [Notes](https://sourcegraph.com/docs/technical-changelog#v611295) | [Install](https://sourcegraph.com/docs/admin/deploy) |
| 6.1 Patch 1 | February 2025 | ✅ | [Notes](https://sourcegraph.com/docs/technical-changelog#v61376) | [Install](https://sourcegraph.com/docs/admin/deploy) |
| 6.1 Patch 0 | February 2025 | ✅ | [Notes](https://sourcegraph.com/docs/technical-changelog#v610) | [Install](https://sourcegraph.com/docs/admin/deploy) |
| 6.0 Patch 1 | February 2025 | ✅ | [Notes](https://sourcegraph.com/docs/technical-changelog#v602687) | [Install](https://sourcegraph.com/docs/admin/deploy) |
| 6.0 Patch 0 | January 2025 | ✅ | [Notes](https://sourcegraph.com/docs/technical-changelog#v600) | [Install](https://sourcegraph.com/docs/admin/deploy) |
| 5.11 Patch 5 | January 2025 | ✅ | [Notes](https://sourcegraph.com/docs/technical-changelog#v5116271) | [Install](https://sourcegraph.com/docs/admin/deploy) |
| 5.11 Patch 4 | January 2025 | ✅ | [Notes](https://sourcegraph.com/docs/technical-changelog#v5114013) | [Install](https://sourcegraph.com/docs/admin/deploy) |
| 5.11 Patch 3 | January 2025 | ✅ | [Notes](https://sourcegraph.com/docs/technical-changelog#v5114013) | [Install](https://sourcegraph.com/docs/admin/deploy) |
| 5.11 Patch 2 | January 2025 | ✅ | [Notes](https://sourcegraph.com/docs/technical-changelog#v5113601) | [Install](https://sourcegraph.com/docs/admin/deploy) |
| 5.11 Patch 1 | January 2025 | ✅ | [Notes](https://sourcegraph.com/docs/technical-changelog#v5112732) | [Install](https://sourcegraph.com/docs/admin/deploy) |
| 5.11 Patch 0 | December 2024 | ✅ | [Notes](https://sourcegraph.com/docs/technical-changelog#v5110) | [Install](https://sourcegraph.com/docs/admin/deploy) |
| 5.10 Patch 3 | December 2024 | ✅ | [Notes](https://sourcegraph.com/docs/technical-changelog#v5103940) | [Install](https://sourcegraph.com/docs/admin/deploy) |
| 5.10 Patch 2 | December 2024 | ✅ | [Notes](https://sourcegraph.com/docs/technical-changelog#v5102832) | [Install](https://sourcegraph.com/docs/admin/deploy) |
| 5.10 Patch 1 | December 2024 | ✅ | [Notes](https://sourcegraph.com/docs/technical-changelog#v5101164) | [Install](https://sourcegraph.com/docs/admin/deploy) |
| 5.10 Patch 0 | November 2024 | ✅ | [Notes](https://sourcegraph.com/docs/technical-changelog#v5100) | [Install](https://sourcegraph.com/docs/admin/deploy) |
| 5.9 Patch 3 | November 2024 | ✅ | [Notes](https://sourcegraph.com/docs/technical-changelog#v591590) | [Install](https://sourcegraph.com/docs/admin/deploy) |
| 5.9 Patch 2 | November 2024 | ✅ | [Notes](https://sourcegraph.com/docs/technical-changelog#v59347) | [Install](https://sourcegraph.com/docs/admin/deploy) |
| 5.9 Patch 1 | November 2024 | ✅ | [Notes](https://sourcegraph.com/docs/technical-changelog#v5945) | [Install](https://sourcegraph.com/docs/admin/deploy) |
| 5.9 Patch 0 | October 2024 | ✅ | [Notes](https://sourcegraph.com/docs/technical-changelog#v590) | [Install](https://sourcegraph.com/docs/admin/deploy) |
| 5.8 Patch 1 | October 2024 | ✅ | [Notes](https://sourcegraph.com/docs/technical-changelog#v581579) | [Install](https://sourcegraph.com/docs/admin/deploy) |
| 5.8 Patch 0 | October 2024 | ✅ | [Notes](https://sourcegraph.com/docs/technical-changelog#v580) | [Install](https://sourcegraph.com/docs/admin/deploy) |
| 5.7 Patch 1 | September 2024 | ✅ | [Notes](https://sourcegraph.com/docs/technical-changelog#v572474) | [Install](https://sourcegraph.com/docs/admin/deploy) |
| 5.7 Patch 0 | September 2024 | ✅ | [Notes](https://sourcegraph.com/docs/technical-changelog#v570) | [Install](https://sourcegraph.com/docs/admin/deploy) |
| 5.6 Patch 2 | August 2024 | ✅ | [Notes](https://sourcegraph.com/docs/technical-changelog#v562535) | [Install](https://sourcegraph.com/docs/admin/deploy) |
| 5.6 Patch 1 | August 2024 | ✅ | [Notes](https://sourcegraph.com/docs/technical-changelog#v56185) | [Install](https://sourcegraph.com/docs/admin/deploy) |
| 5.6 | August 2024 | ✅ | [Notes](https://sourcegraph.com/docs/technical-changelog#v560) | [Install](https://sourcegraph.com/docs/admin/deploy) |
| 5.5 | July 2024 | ✅ | [Notes](https://sourcegraph.com/docs/technical-changelog#v553956) | [Install](https://sourcegraph.com/docs/admin/deploy) |
| 5.4 | May 2024 | ✅ | [Notes](https://sourcegraph.com/docs/technical-changelog#v547765) | [Install](https://sourcegraph.com/docs/admin/deploy) |
| 5.3 | February 2024 | ✅ | [Notes](https://sourcegraph.com/docs/technical-changelog#v5312303) | [Install](https://sourcegraph.com/docs/admin/deploy) |
| 5.2 | October 2023 | ✅ | [Notes](https://sourcegraph.com/docs/technical-changelog#v527) | [Install](https://sourcegraph.com/docs/admin/deploy) |
| 5.1 | June 2023 | ✅ | [Notes](https://sourcegraph.com/docs/technical-changelog#v519) | [Install](https://sourcegraph.com/docs/admin/deploy) |
| 5.0 | March 2023 | ✅ | [Notes](https://sourcegraph.com/docs/technical-changelog#v506) | [Install](https://sourcegraph.com/docs/admin/deploy) |

## Deprecated Releases

These versions fall outside the release lifecycle and are no longer supported:

| **Release** | **General Availability Date** | **Supported** | **Release Notes** |
|-------------|-------------------------------|---------------|-------------------|
| 4.5 | February 2023 | ❌ | [Notes](https://sourcegraph.com/docs/technical-changelog#v451) |
| 4.4 | January 2023 | ❌ | [Notes](https://sourcegraph.com/docs/technical-changelog#v442) |
| 4.3 | December 2022 | ❌ | [Notes](https://sourcegraph.com/docs/technical-changelog#v431) |
| 4.2 | November 2022 | ❌ | [Notes](https://sourcegraph.com/docs/technical-changelog#v421) |
| 4.1 | October 2022 | ❌ | [Notes](https://sourcegraph.com/docs/technical-changelog#v413) |
| 4.0 | September 2022 | ❌ | [Notes](https://sourcegraph.com/docs/technical-changelog#v401) |
| 3.43 | August 2022 | ❌ | [Notes](https://github.com/sourcegraph/sourcegraph-public-snapshot/blob/main/CHANGELOG.md#3432) |
| 3.42 | July 2022 | ❌ | [Notes](https://github.com/sourcegraph/sourcegraph-public-snapshot/blob/main/CHANGELOG.md#3422) |
| 3.41 | June 2022 | ❌ | [Notes](https://github.com/sourcegraph/sourcegraph-public-snapshot/blob/main/CHANGELOG.md#3422) |
| 3.40 | May 2022 | ❌ | [Notes](https://github.com/sourcegraph/sourcegraph-public-snapshot/blob/main/CHANGELOG.md#3402) |
| 3.39 | April 2022 | ❌ | [Notes](https://github.com/sourcegraph/sourcegraph-public-snapshot/blob/main/CHANGELOG.md#3391) |
| 3.38 | March 2022 | ❌ | [Notes](https://github.com/sourcegraph/sourcegraph-public-snapshot/blob/main/CHANGELOG.md#3391) |
| 3.37 | February 2022 | ❌ | [Notes](https://github.com/sourcegraph/sourcegraph-public-snapshot/blob/main/CHANGELOG.md#3391) |
| 3.36 | January 2022 | ❌ | [Notes](https://github.com/sourcegraph/sourcegraph-public-snapshot/blob/main/CHANGELOG.md#3363) |
| 3.35 | December 2021 | ❌ | [Notes](https://github.com/sourcegraph/sourcegraph-public-snapshot/blob/main/CHANGELOG.md#3352) |
| 3.34 | November 2021 | ❌ | [Notes](https://github.com/sourcegraph/sourcegraph-public-snapshot/blob/main/CHANGELOG.md#3352) |
| 3.33 | October 2021 | ❌ | [Notes](https://github.com/sourcegraph/sourcegraph-public-snapshot/blob/main/CHANGELOG.md#3332) |
| 3.32 | September 2021 | ❌ | [Notes](https://github.com/sourcegraph/sourcegraph-public-snapshot/blob/main/CHANGELOG.md#3321) |
| 3.31 | August 2021 | ❌ | [Notes](https://github.com/sourcegraph/sourcegraph-public-snapshot/blob/main/CHANGELOG.md#3321) |
| 3.30 | July 2021 | ❌ | [Notes](https://github.com/sourcegraph/sourcegraph-public-snapshot/blob/main/CHANGELOG.md#3321) |
| 3.29 | June 2021 | ❌ | [Notes](https://github.com/sourcegraph/sourcegraph-public-snapshot/blob/main/CHANGELOG.md#3321) |

This page displays the docs for legacy Sourcegraph versions older than 5.1.
Explore the tutorials and guides below to learn more about Sourcegraph.
## Find Your Way Around Sourcegraph

| Topic | Content Type | Description |
| ----- | ------------ | ----------- |
| [Sourcegraph 101](/getting-started/) | Explanation | What is Sourcegraph? Who should use it? Why do I need it? What does it do? |
| [High Level Product Demo](https://www.youtube.com/watch?v=Kk1ea2-l8Hk) | Explanation (video) | A short 3-minute video describing what Sourcegraph is and how it can be useful. |
| [Navigating the Sourcegraph UI](https://www.youtube.com/watch?v=6K7e74a7aC4) | Tutorial (video) | Take a look at how you can read code, find references, troubleshoot errors, gain insight, and make changes on a massive scale in Sourcegraph. |

## Get Started Searching

| Topic | Content Type | Description |
| ----- | ------------ | ----------- |
| [Three Types of Code Search](https://www.youtube.com/watch?v=-EGn_-2d9CQ) | Tutorial (video) | Code search is a vital tool for developers. It's great for digging up answers to questions from your own codebase, but it's even better for exploring and understanding code. This video shows the types of code search available on Sourcegraph. |
| [Understanding Code Search Results](https://www.youtube.com/watch?v=oMWdYfG6-DQ) | Tutorial (video) | In this video, you'll understand the search results page and how to scope, export, save, and link to search results. |
| [Basic Code Search Filters](https://www.youtube.com/watch?v=h1Kw0Wd9qZ4) | Tutorial (video) | In this video, you'll learn how to use Sourcegraph's code search filters and how they work. Filters are a great way to narrow down or search for specific code. This video covers language, repo, branch, file, and negative filters. |
| [Search Query Syntax](/code-search/queries) | Reference | This page describes the search pattern syntax and filters available for code search. A few illustrative queries follow this table. |
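To make the filters described above concrete, here are a few illustrative queries in Sourcegraph's query syntax, including a subexpression of the kind covered in the next section. The repository and symbol names are placeholders; the comment lines only describe each query and are not part of the syntax.

```
# Scope a literal search to one repository's Go files
repo:^github\.com/example/app$ file:\.go$ openConnection

# Regular expression search across all Python code
lang:python patterntype:regexp def\s+parse_\w+

# Subexpression: combine filters with OR
repo:example/docs (tutorial OR guide)

# Diff search: recent changes mentioning a symbol
type:diff after:"1 week ago" parseConfig
```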
## More Advanced Searching

| Topic | Content Type | Description |
| ----- | ------------ | ----------- |
| [Search Subexpressions](/code-search/working/search_subexpressions) | Tutorial | Search subexpressions combine groups of filters like `repo:` and operators like `AND` & `OR`. Compared to basic examples, search subexpressions allow more sophisticated queries. |
| [Regular Expression Search Deep Dive](https://about.sourcegraph.com/blog/how-to-search-with-sourcegraph-using-regular-expression-patterns) | Tutorial | Regular expressions, often shortened to regex, help you find code that matches a pattern (including classes of characters like letters, numbers, and whitespace), and can restrict the results to anchors like the start of a line, the end of a line, or a word boundary. |
| [Structural Search Tutorial](https://about.sourcegraph.com/blog/how-to-search-with-sourcegraph-using-structural-patterns) | Tutorial | Structural search helps you search code for syntactical code patterns like function calls, arguments, `if...else` statements, and `try...catch` statements. It's useful for finding nested and recursive patterns as well as multi-line blocks of code. |
| [Structural Search Tutorial](https://youtu.be/GnubTdnilbc) | Tutorial (video) | Structural search helps you search code for syntactical code patterns like function calls, arguments, `if...else` statements, and `try...catch` statements. It's useful for finding nested and recursive patterns as well as multi-line blocks of code. |

## Cody

| Topic | Content Type | Description |
| ----- | ------------ | ----------- |
| [Product Tour in VS Code](https://www.youtube.com/watch?v=OTIYXUwAQtI&t=6s) | Tutorial (video) | A video tutorial walkthrough of Cody in VS Code. |
| [Cody Admin Training](https://www.youtube.com/watch?v=_Xwr7YlfTt0) | Tutorial (video) | A video tutorial explaining Cody administrative functionality. |
| [Cody Tutorial in VS Code](/cody/use-cases/vsc-tutorial) | Tutorial (written) | A single documentation page with animated GIFs demonstrating Cody features in VS Code. |

## Code Navigation

| Topic | Content Type | Description |
| ----- | ------------ | ----------- |
| [Introduction to Code Navigation](/code-search/code-navigation/) | Explanation | There are two types of code navigation that Sourcegraph supports: search-based and precise. |
| [Code Navigation Features](/code-search/code-navigation/features) | Explanation | An overview of Code Navigation features, such as "find references", "go to definition", and "find implementations". |

## Code Insights

| Topic | Content Type | Description |
| ----- | ------------ | ----------- |
| [Code Insights Overview](https://www.youtube.com/watch?v=HdQIFuUzGFI) | Explanation (video) | Learn about common Code Insights use cases and see how to create an insight. |
| [Quickstart Guide](/code_insights/quickstart) | Tutorial | Get started and create your first code insight in 5 minutes or less. |
| [Common Use Cases](/code_insights/references/common_use_cases) | Reference | A list of common use cases for Code Insights and example data series queries you could use. |
## Batch Changes

| Topic | Content Type | Description |
| ----- | ------------ | ----------- |
| [Introduction to Batch Changes](/batch-changes/) | Explanation | A basic introduction to the concepts, processes, and supported environments behind Batch Changes. |
| [Get Started With Batch Changes](https://www.youtube.com/watch?v=GKyHYqH6ggY) | Tutorial (video) | Learn how you can quickly use Sourcegraph Batch Changes to automate small and large-scale code changes server-side. |
| [Batch Changes Quickstart Guide](/batch-changes/quickstart) | Tutorial | Get started and create your first batch change in 10 minutes or less. This guide follows the local (CLI) method of running batch changes. |
| [Getting Started Running Batch Changes Server-Side](/batch-changes/server-side) | How-To-Guide | Follow this guide to learn how to run batch changes server-side. |

## The Sourcegraph API

| Topic | Content Type | Description |
| ----- | ------------ | ----------- |
| [GraphQL API](/api/graphql/) | Reference | The Sourcegraph GraphQL API is a rich API that exposes data related to the code available on a Sourcegraph instance. A minimal query example follows this table. |
| [GraphQL Examples](/api/graphql/managing-search-contexts-with-api) | Reference | This page demonstrates a few example GraphQL queries for the Sourcegraph GraphQL API. |
| [Streaming API](/api/stream_api/) | Reference | With the Stream API you can consume search results and related metadata as a stream of events. The Sourcegraph UI calls the Stream API for all interactive searches. Compared to our GraphQL API, it offers shorter times to first results and supports running exhaustive searches returning a large volume of results without putting pressure on the backend. |
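As a minimal sketch of calling the GraphQL API referenced above: Sourcegraph instances expose the API at `/.api/graphql`, authenticated with an access token. The endpoint URL and token below are placeholders for your own instance and credentials.

```bash
# Query the GraphQL API for the authenticated user's username.
# $SRC_ENDPOINT and $SRC_ACCESS_TOKEN are placeholders for your
# instance URL and an access token created in your user settings.
curl -s \
  -H "Authorization: token $SRC_ACCESS_TOKEN" \
  -d '{"query": "query { currentUser { username } }"}' \
  "$SRC_ENDPOINT/.api/graphql"
```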
## Search Notebooks

| Topic | Content Type | Description |
| ----- | ------------ | ----------- |
| [Search Notebooks Quickstart Guide](/notebooks/quickstart) | Tutorial | Notebooks enable powerful live, persistent documentation, shareable with your organization or the world. |

## Customizing Your Sourcegraph User Environment

| Topic | Content Type | Description |
| ----- | ------------ | ----------- |
| [Using Sourcegraph with your IDE](/integration/editor) | How-To-Guide | Sourcegraph's editor integrations allow you to search and navigate across all of your repositories without ever leaving your IDE or checking them out locally. We have built-in integrations with VS Code and JetBrains. |
| [Using the Sourcegraph Browser Extension](/integration/browser_extension/) | How-To-Guide | The open-source Sourcegraph browser extension adds code navigation to files and diffs on GitHub, GitHub Enterprise, GitLab, Phabricator, Bitbucket Server, and Bitbucket Data Center. |
| [Using the Sourcegraph CLI](/cli/quickstart) | How-To-Guide | `src` is a command line interface to Sourcegraph that allows you to search code from your terminal, create and apply batch changes, and manage and administer repositories, users, and more. A short example follows this table. |
| [Saving Searches](/code-search/working/saved_searches) | How-To-Guide | Saved searches let you save and describe search queries so you can easily find and use them again later. You can create a saved search for anything, including diffs and commits across all branches of your repositories. |
| [Search Contexts](/code-search/working/search_contexts) | How-To-Guide | Search contexts help you search the code you care about on Sourcegraph. A search context represents a set of repositories at specific revisions on a Sourcegraph instance that will be targeted by search queries by default. |
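For instance, here is a minimal, illustrative use of the `src` CLI mentioned above; the endpoint, token, and query values are placeholders:

```bash
# The src CLI reads the target instance and credentials from
# these environment variables (placeholder values shown).
export SRC_ENDPOINT=https://sourcegraph.example.com
export SRC_ACCESS_TOKEN=<your-access-token>

# Run a code search from the terminal.
src search 'repo:^github\.com/example/app$ lang:go openConnection'
```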
## Developer Use Cases

| Topic | Content Type | Description |
| ----- | ------------ | ----------- |
| [Writing Faster Code with Code Search and Cody](https://www.youtube.com/watch?v=vHyTgqGL41U) | Tutorial (video) | A video tutorial of how Sourcegraph can help you write code faster. |
| [Generate Unit Tests](https://www.youtube.com/watch?v=FsiQMory2jI) | Tutorial (video) | A video tutorial of how to write unit tests with Cody. |
| [Understand scope and impact of a code change](https://www.youtube.com/watch?v=gafdZDhSrws) | Tutorial (video) | A video tutorial of how to understand the scope and impact of a change you're considering to your codebase. |

This document explains Sourcegraph's default contractual Service Level Agreements and Premium Support Offerings.

## Service Level Agreements (SLAs)

Our service level agreements (SLAs) are designed for products that are generally available and exclude [beta and experimental features](/admin/beta_and_experimental_features). SLA response times indicate how quickly we aim to provide an initial response to your inquiries or concerns. Our team will resolve all issues as quickly as possible. However, it's important to understand that SLA times differ from guaranteed resolution times. While we always strive to respond to your issues as quickly as possible, our SLAs apply from Monday through Friday.

The following policy applies to both our cloud-based (managed instance) and on-premises/self-hosted Sourcegraph customers.

### For Enterprise plans

| Severity level | Description | Response time | Support availability |
| -------------- | ----------- | ------------- | -------------------- |
| 0 | Emergency: Total loss of service or security-related issue (includes POCs) | Within two business hours of identifying the issue | 24x5 (Monday-Friday) |
| 1 | Severe impact: Service significantly limited for 60%+ of users; core features are unavailable or extremely slowed down with no acceptable workaround | Within four business hours of identifying the issue | 24x5 (Monday-Friday) |
| 2 | Medium impact: Core features are unavailable or somewhat slowed; workaround exists | Within eight business hours of identifying the issue | 24x5 (Monday-Friday) |
| 3 | Minimal impact: Questions or clarifications regarding features, documentation, or deployments | Within two business days of identifying the issue | 24x5 (Monday-Friday) |

> NOTE: Premium support with enhanced SLAs can be added to your Enterprise plan as an add-on. Our business hours, defined as Sunday 2 PM PST to Friday 5 PM PST, align with our 24x5 support coverage.

### For Enterprise Starter & Cody Pro plans

| **Severity level** | **Description** | **Response time** | **Support availability** |
| ------------------ | --------------- | ----------------- | ------------------------ |
| 0 | Emergency: Total loss of service or security-related issue | Within one business day of identifying the issue | 8 AM to 5 PM PST (Monday-Friday) |
| 1 | Severe impact: Billing issues, login issues | Within one business day of identifying the issue | 8 AM to 5 PM PST (Monday-Friday) |
| 2 | Medium impact: Core features are unavailable or somewhat slowed; workaround exists | Within two business days of identifying the issue | 8 AM to 5 PM PST (Monday-Friday) |
| 3 | Minimal impact: Questions or clarifications regarding features, documentation, or deployments | Within three business days of identifying the issue | 8 AM to 5 PM PST (Monday-Friday) |

This page lists a detailed comparison of the features available in each plan.
| **Features** | **Free** | **Enterprise Starter** | **Enterprise** |
| ------------ | -------- | ---------------------- | -------------- |
| **AI** | | | |
| Autocomplete | Unlimited | Unlimited | Unlimited |
| Chat messages and prompts | 200/month | Increased limits | Unlimited |
| Code context and personalization | Local codebase | Remote codebase (GitHub only) | Remote, enterprise-scale codebases |
| Integrated search results | - | ✓ | ✓ |
| Prompt Library | ✓ | ✓ | ✓ |
| Bring your own LLM key | - | - | Self-hosted only |
| Auto-edit | - | Beta | Beta |
| Agentic chat experience | - | Experimental | Experimental |
| **Code Search** | | | |
| Code Search | - | ✓ | ✓ |
| Code Navigation | - | ✓ | ✓ |
| Code Insights | - | - | ✓ |
| Code Monitoring | - | - | ✓ |
| Batch Changes | - | - | ✓ |
| **Deployment** | | | |
| Cloud deployment | Multi-tenant | Multi-tenant | Single tenant |
| Self-hosted option | - | - | ✓ |
| Private workspace | - | ✓ | ✓ |
| **Admin and Security** | | | |
| SSO/SAML | Basic (GH/GL/Google) | Basic (GH/GL/Google) | ✓ |
| Role-based access control | - | - | ✓ |
| Analytics | - | Basic | ✓ |
| Audit logs | - | - | ✓ |
| Guardrails | - | - | Beta |
| Indexed code | - | Private | Private |
| Context Filters | - | - | ✓ |
| **Compatibility** | | | |
| Code hosts | Local codebase | GitHub | All major code hosts |
| IDEs | VS Code, JetBrains IDEs, Visual Studio (Experimental) | VS Code, JetBrains IDEs, Visual Studio (Experimental) | VS Code, JetBrains IDEs, Visual Studio (Experimental) |
| Human languages | Many human languages, dependent on the LLM used | Many human languages, dependent on the LLM used | Many human languages, dependent on the LLM used |
| Programming languages | All popular programming languages | All popular programming languages | All popular programming languages |
| **Support** | | | |
| Support level | Community support | Community support | Enterprise support |
| Dedicated TA support | - | - | Add-on |
| Premium support | - | - | Add-on |

Find answers to some of the most commonly asked questions about Sourcegraph.
## What's the difference between Free, Enterprise Starter, and Enterprise plans?

Free is best for individuals working on hobby projects. Enterprise Starter is for growing organizations who want Sourcegraph's AI & search experience hosted on our cloud. Enterprise is for organizations that want AI and search across the SDLC with enterprise-level security, scalability, and flexible deployment.

## How are autocompletions counted for the Cody Free plan?

Cody autocompletions are counted based on the number of suggestions served to the user in their IDE as ghost text. This includes all suggestions, whether or not the user accepts them.

## How does Sourcegraph's context and personalization work?

Cody can retrieve codebase context to personalize responses in several ways. For Free and Pro users, context is retrieved from their local repositories. The Enterprise Starter and Enterprise plans use Sourcegraph's search backend to retrieve context. This method pulls context from a team's full codebase at any scale.

## What forms of support are available for paid plans?

Email and web portal support is available to both Enterprise Starter and Enterprise customers, and you can [read more about our SLAs](/sla). Premium support with enhanced SLAs is also available as an add-on for Enterprise customers.

## Can I upgrade or downgrade my plan?

Users can upgrade or downgrade between Free and Pro within their account settings anytime. For Pro users, upgrading to Enterprise Starter is also possible, but doing so currently does not cancel your Pro subscription, and you must cancel it yourself. To upgrade to Enterprise, please [contact our Sales team](https://sourcegraph.com/contact/request-info).

## What's the difference between "flexible LLM options" and "bring your own LLM key"?

Flexible LLM options: Users can select from multiple options to use for chat.

Bring your own LLM key: Enterprise customers can optionally provide their own LLM API key for supported LLMs (including for LLM services such as Azure OpenAI and Amazon Bedrock). In this scenario, customers pay for their own LLM consumption, and we will provide a pricing discount with your plan.

## Does Sourcegraph use my code to improve the models used by other people?

For Enterprise and Enterprise Starter customers, Sourcegraph will not train on your company's data unless your instance admin enables fine-tuning, which would customize an existing model exclusively for your use. For Free and Pro users, Sourcegraph may use your data to fine-tune the model you are accessing unless you disable this feature.

## Can Sourcegraph be run fully self-hosted?

Sourcegraph requires cloud-based services to power its AI features. For customers looking for a fully self-hosted or air-gapped solution, please [contact us](https://sourcegraph.com/contact/request-info).

## Is an annual contract required for any of the plans?

Pro and Enterprise Starter plans are billed monthly and can be paid with a credit card.

## How are active users counted and billed?

Sourcegraph offers three different pricing plans based on your needs.
Learn about Sourcegraph's Free plan and the features included.
Sourcegraph's Free plan is designed for hobbyists and light usage, aimed at users with personal projects and small-scale applications. It offers an AI editor assistant with a generous set of features for individual users, like autocompletion and multiple LLM choices for chat.

## Features

The Free plan includes the following features:

| **AI features** | **Compatibility** | **Deployment** | **Admin/Security** | **Support** |
| --------------- | ----------------- | -------------- | ------------------ | ----------- |
| Reasonable use autocomplete limits | VS Code, JetBrains IDEs, and Visual Studio | Multi-tenant Cloud | SSO/SAML with basic GitHub, GitLab, Google | Community support only |
| Reasonable use chat messages and prompts per month | All popular coding languages | - | - | - |
| Multiple LLM selection (Claude 3.5 Sonnet, Gemini 1.5 Pro and Flash) | Natural language search | - | - | - |

## Pricing and billing cycle

There is no billing cycle, as the plan is free to use and supports one user per account. If you exceed your usage limits, you will have to wait until the end of the month to use the feature again. You can upgrade to our Enterprise Starter plan for more advanced features and usage limits.

## Free vs. Enterprise Starter comparison

The Enterprise Starter plan provides extended usage limits and advanced features compared to the Free plan. Here's a side-by-side comparison of the two:

| **Feature** | **Free** | **Enterprise Starter** |
| ----------- | -------- | ---------------------- |
| **Description** | - AI editor assistant for hobbyists or light usage | - AI and search for growing organizations hosted on our cloud |
| **Price** | - $0/month | - $19/user/month |

Learn about Sourcegraph's Enterprise plan and the features included.
Sourcegraph offers multiple Enterprise plan options, including Enterprise Dedicated Cloud (default) and Enterprise Self Hosted (on request), for organizations and enterprises that need AI and search with enterprise-level security, scalability, and flexibility.

## Features breakdown

Here's a detailed breakdown of features included in the different Enterprise plan options.

| **Feature** | **Enterprise Dedicated Cloud** | **Enterprise Self Hosted** |
| ----------- | ------------------------------ | -------------------------- |
| **Description** | - AI and search with enterprise-level security, scalability, and flexibility | - AI and search with enterprise-level security, scalability, and flexibility |
| **Price** | - $59/user/month | |

Learn about the Enterprise Starter plan, tailored for individuals and teams wanting private code indexing and search to better leverage the Sourcegraph platform.
The Enterprise Starter plan offers a multi-tenant Sourcegraph instance designed for individuals and teams. It provides the core features of a traditional Sourcegraph instance but with a simplified management experience. This plan provides a fully managed version of Sourcegraph (AI + code search with integrated search results, with privately indexed code) through a self-serve flow.

## Team seats

The Enterprise Starter plan is priced at **$19 per month per seat**. You can add or remove team members at any time.

Existing Cody Pro users can also sign up for the Enterprise Starter plan by paying $19 per seat. However, their Cody Pro subscription will neither be upgraded nor canceled. Instead, they will have two live subscriptions.

## Enterprise Starter team roles

The Enterprise Starter plan includes the following team-level roles:

- **Admin**: Has full access to the workspace, including the ability to manage repos, users, billing, and settings
- **Member**: Can access repositories in the workspace and use the supported features

## Billing

Workspaces on the Enterprise Starter plan are billed monthly based on the number of team seats purchased. In case of overdue or failed payments, there is a grace period during which the workspace admins receive a daily notification to complete the transaction. If you fail to make the payment after the grace period, your workspace will be deleted, and you will not be able to recover your data.

Please also see [Billing FAQs](billing-faqs.mdx) for more FAQs, including how to downgrade Enterprise Starter.

## Features supported

The Enterprise Starter plan supports a variety of AI and search-based features, like:

| **AI features** | **Code Search** | **Management** | **Support** |
| --------------- | --------------- | -------------- | ----------- |
| Code autocompletions and chat messages | Indexed Code Search | Simplified admin experience with UI-based repo management | Support with limited SLAs |
| Powerful LLM models for chat | Indexed Symbol Search | User management | - |
| Integrated search results | Search-based code navigation | GitHub code host integration | - |
| Cody integration | - | - | - |

## Limits

Sourcegraph Enterprise Starter has the following limits:

- Max 50 users per workspace
- Max 100 repos per workspace
- Starts with 5 GB of storage
- 1 GB storage per seat added
- 10 GB max total storage

## Workspace settings

After creating a new workspace, you can switch views between your personal and workspace accounts. You can configure different **Workspace settings**. These include options for:

- **General Settings**: Helps you configure how your workspace is described or accessed with options like workspace name, URL, and deleting the workspace
- **Users**: Manage permissions, assign seats, or invite new users
- **Billing**: Manage your monthly billing cycle for all your purchased seats
- **Repository Management**: Add, remove, and view the status of your connected repositories
- **User settings**: Navigates you to your personal account's settings

## Getting started with workspace

A workspace admin can invite new members to their workspace using their team member's email address. Once the team member accepts the invitation, they will be added to the workspace and assigned the member role. Next, the member is asked to connect and authorize the code host (GitHub) to access the private repositories indexed in your Sourcegraph account.
If you skip this step, the member won't be able to access any of the private repositories they have access to. However, they can still use public search via the Sourcegraph code search bar.

## Repository Management

From the Repository Management settings, workspace admins can configure various settings for connecting code hosts and indexing repositories in their workspace. You can index up to 100 repos per workspace.

From here, you can:

- Use the public code search to add and index open source repos in your workspace
- Add multiple organizations to index private repos

When you add a new organization, you must authorize access and permissions for all repositories or selected ones. To index a repository from your organization:

- Click and select it from the repository list
- Next, from the search bar, type the repo name you are looking for
- Click it to add the repository to your workspace
- The status of the repo will change to **TO BE ADDED** in the right sidebar with a **Save Changes** button
- Next, the repo gets a **QUEUED** status, and it takes some time to process
- Finally, it gets indexed with a **CLONED** status

As you add more repos, you get logs for the number of repos added, the storage used, and their status. To remove a repo from your workspace, click the repo name; its status changes to **TO BE REMOVED**. Click the **Save Changes** button to confirm.

Bring the power of Sourcegraph to your code host.
The Sourcegraph browser extension adds code navigation to files and diffs on GitHub, GitHub Enterprise, GitLab, Phabricator, Bitbucket Server, and Bitbucket Data Center.

- Install Sourcegraph for Chrome
- Install Sourcegraph for Safari
- Install Sourcegraph for Firefox

Short, consumable videos to help you get started with Sourcegraph.
## Code Search

This page will help you learn about Sourcegraph and how to use it.
## What is Sourcegraph?

Sourcegraph is a Code Intelligence platform that deeply understands your code, no matter how large or where it's hosted, to power modern developer experiences.

## Who should use Sourcegraph?

In addition to the [companies listed on about.sourcegraph.com](https://about.sourcegraph.com), companies with a few hundred developers up to those with more than 40,000 use Sourcegraph daily. More specifically, Sourcegraph is great for all developers except:

- those on smaller teams with a small amount of code
- those who rarely search, read, or review code

## Why do I need Code Search?

Facebook and Google provide their employees with an in-house Sourcegraph-like code search and intelligence tool. A [published research paper from Google](https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/43835.pdf) and a [Google developer survey](https://docs.google.com/document/d/1LQxLk4E3lrb3fIsVKlANu_pUjnILteoWMMNiJQmqNVU/edit#heading=h.xxziwxixfqq3) showed that 98% of developers surveyed consider their Sourcegraph-like internal tool to be critical. Developers use it on average for 5.3 sessions each day. (Facebook's and Google's in-house tools are unavailable to other companies; use Sourcegraph instead.)

## What do I use Sourcegraph for?

Sourcegraph helps you:

- Find example code
- Explore/read code (including during a code review)
- Debug issues
- Locate a specific piece of code
- Determine the impact of changes
- And more!

Sourcegraph makes it easier for you and everyone else in your organization to perform these tasks.

## What does Sourcegraph do?

Sourcegraph's main features are:

- [Code navigation](#code-navigation): jump-to-definition, find references, and other smart, IDE-like code browsing features on any branch, commit, or PR/code review
- [Code search](#code-search): fast, up-to-date, and scalable, with regexp support on any branch or commit without an indexing delay (and diff search)
- [Notebooks](#notebooks): pair code and markdown to create powerful live and persistent documentation
- [Cody](#cody): read and write code with the help of a context-aware AI code assistant
- [Code Insights](#code-insights): reveal high-level information about your codebase at its current state and over time to track migrations, version usage, vulnerability remediation, ownership, and anything else you can search in Sourcegraph
- [Batch Changes](#batch-changes): make large-scale code changes across many repositories and code hosts
- [Integrations](#integrations) with code hosts, code review tools, editors, web browsers, etc.

## How do I start using Sourcegraph?

1. [Deploy and Configure Sourcegraph](/admin/deploy/) inside your organization on your internal code if nobody else has yet
1. Install and configure the [web browser code host integrations](/integration/browser_extension) (recommended)
1. Start searching and browsing code on Sourcegraph by visiting the URL of your organization's internal Sourcegraph instance
1. [Personalize Sourcegraph](/getting-started/personalization/) with themes, quick links, and badges!

You can also try [Sourcegraph.com](https://sourcegraph.com/search), a public Sourcegraph instance for use on open-source code only.

## How is Sourcegraph licensed?

Sourcegraph Enterprise is Sourcegraph's primary offering and includes all code intelligence platform features. Sourcegraph Enterprise is the best solution for enterprises who want to use Sourcegraph with their organization's code.
Sourcegraph extensions are also OSS licensed (Apache 2), such as:

- [Sourcegraph browser extension](https://github.com/sourcegraph/sourcegraph/tree/master/client/browser)
- [Sourcegraph JetBrains extension](https://github.com/sourcegraph/sourcegraph/tree/main/client/jetbrains)

## How is Sourcegraph different from the GitHub code search?

- [See how GitHub code search compares to Sourcegraph](/getting-started/github-vs-sourcegraph)

## Code Search

Sourcegraph code search is fast, works across all your repositories at any commit, and has minimal indexing delay. Code search also includes advanced features, including:

- [Powerful, flexible query syntax](/code-search/queries)
- [Commit diff search](/code-search/features#commit-diff-search)
- [Commit message search](/code-search/features#commit-message-search)
- [Saved search scopes](/code-search/features#search-scopes)
- [Search contexts to search across a set of repositories at specific revisions](/code-search/features#search-contexts)
- [Saved search monitoring](/code_monitoring/)

Read the [code search documentation](/code-search/) to learn more and discover the complete feature set. Here are some resources to help you get started:

- [How to search commits and diffs with Sourcegraph](https://youtu.be/w-RrDz9hyGI)
- [Search Examples](https://sourcegraph.github.io/sourcegraph-search-examples/)

## Code Navigation

Sourcegraph gives your development team cross-repository IDE-like features on your code:

- Hover tooltips
- Go-to-definition
- Find references
- Symbol search

Sourcegraph gives you code navigation in:

- **code files in Sourcegraph's web UI**
- **diffs in your code review tool**, via [integrations](/integration/)
- **code files on your code host**, via [integrations](/integration/)

Please read the [code navigation documentation](/code-search/code-navigation/) to learn more and to set it up.

## Cody

Cody is an AI code assistant that uses Sourcegraph code search, the code graph, and LLMs to provide context-aware answers about your codebase. Cody can explain code, refactor code, and write code, all within the context of your existing codebase. [Learn more about Cody](/cody/overview/).

## Notebooks

| Code host | GitHub | Sourcegraph |
| --------- | ------ | ----------- |
| GitHub | ✓ | ✓ |
| GitLab | ✗ | ✓ |
| Bitbucket Cloud | ✗ | ✓ |
| Bitbucket Server / Bitbucket Data Center | ✗ | ✓ |
| Perforce | ✗ | ✓ |
| Any Git-based code host | ✗ | ✓ |
Learn about the different plans available for Cody.
Cody provides three subscription plans: **Free**, **Pro**, and **Enterprise**. Each plan caters to a diverse range of users, from individual projects to large-scale enterprises. Cody Free includes basic features, while the Pro and Enterprise plans offer additional advanced features and resources to meet varying user requirements.

Learn about common reasons for errors that you might run into when using Cody and how to troubleshoot them.
If you encounter errors or bugs while using Cody, try these troubleshooting methods to better understand and resolve the issue. If the problem persists, you can report Cody bugs using the [issue tracker](https://github.com/sourcegraph/cody/issues), the [Support Forum](https://community.sourcegraph.com/), or by asking in the [Discord](https://discord.gg/s2qDtYGnAE) server.

## VS Code extension

### Cody is not responding in chat

If you're experiencing issues with Cody not responding in chat, follow these steps:

- Ensure you have the latest version of the [Cody VS Code extension](https://marketplace.visualstudio.com/items?itemName=sourcegraph.cody-ai). Use the VS Code command `Extensions: Check for Extension Updates` to verify
- Check the VS Code error console for relevant error messages. To open it, run the VS Code command `Developer: Toggle Developer Tools` and then look in the `Console` for relevant messages

### Cody responses/completions are slow

If you're experiencing issues with Cody's responses or completions being too slow:

- Ensure you have the latest version of the [Cody VS Code extension](https://marketplace.visualstudio.com/items?itemName=sourcegraph.cody-ai). Use the VS Code command `Extensions: Check for Extension Updates` to verify
- Enable verbose logging, restart the extension, and reproduce the issue again (see `Access Cody logs` below for how to do this)
- Send the information to our Support Team at support@sourcegraph.com

Some additional information that will be valuable:

- Where are you located? Are any proxies or firewalls in use?
- Does this happen with multiple providers/models? Which models have you used?

### Access Cody logs

VS Code logs can be accessed via the **Outputs** view. You will need to set Cody to verbose mode to ensure the information needed for debugging is in the logs. To do so:

- Go to the Cody Extension Settings and enable: `Cody › Debug: Verbose`
- Restart or reload your VS Code editor
- You can now see the logs in the Outputs view
- Open the view via the menu bar: `View > Output`
- Select **Cody by Sourcegraph** from the dropdown list
- You can export the logs by using the command palette (Cmd+Shift+P on Mac, Ctrl+Shift+P on Windows/Linux) and searching for the "Cody Debug: Export Logs" command

### Errors trying to install Cody on macOS

If you encounter the following errors:

```bash
Command 'Cody: Set Access Token' resulted in an error

Command 'cody.set-access-token' not found
```

Follow these steps to resolve the issue:

- Close your VS Code editor
- Open your Keychain Access app
- Search for `cody`
- Delete the `vscodesourcegraph.cody-ai` entry in the system keychain on the left
- Reopen the VS Code editor. This should resolve the error

### Sign-in fails on each VS Code restart

If you find yourself being automatically signed out of Cody every time you restart VS Code, and suspect it's due to keychain authentication issues, you can address this by following the steps provided in the official VS Code documentation on [troubleshooting keychain issues](https://code.visualstudio.com/docs/editor/settings-sync#_troubleshooting-keychain-issues). These guidelines should help you troubleshoot and resolve any keychain-related authentication issues, ensuring a seamless experience with Cody on VS Code.

### No context files were included by Cody

If Cody's answers don't seem accurate, it may be because Cody is unable to find the right relevant files to use as context. You can see which files Cody used in the **Context** row right below your message.
To troubleshoot further:

1. Enable the `cody.debug.verbose` setting in your VS Code user settings.
1. Open the **Cody by Sourcegraph** output channel in VS Code.
1. Look for log messages such as the following:

```bash
█ SimpleChatPanelProvider: getEnhancedContext > search
█ symf: using downloaded symf "/Users/beyang/Library/Application Support/Code/User/globalStorage/sourcegraph.cody-ai/symf/symf-v0.0.6-aarch64-macos"
█ SimpleChatPanelProvider: getEnhancedContext > search (end)
█ DefaultPrompter.makePrompt: Ignored 8 additional context items due to limit reached
```

### Rate limits

Cody Free provides **unlimited autocomplete suggestions** and **200 chat invocations** per user per month. On Cody Pro and Enterprise plans, usage limits are increased and controlled by **Fair Usage**. This means that some users occasionally experience a limitation placed on their account. This limitation resets within 24 hours. If this issue persists, contact us through our [community forum](https://community.sourcegraph.com), Discord, or email support@sourcegraph.com.

#### 429 errors

A 429 status code means you are on a free account and have hit your usage limit/quota for the day. It can also mean you were sending too many requests in a short period of time. If you have Cody Pro and you are seeing 429 errors, you can contact us at [support@sourcegraph.com](mailto:support@sourcegraph.com) to resolve this.

### Error logging in VS Code on Linux and Windows

If you encounter difficulties logging in to Cody on Linux using your Sourcegraph instance URL, along with a valid access token, and notice that the sign-in process in VS Code hangs, it might be related to underlying networking rules concerning SSL certificates. To address this, follow these steps:

- Close your VS Code editor
- In your terminal, type and run the following command: `echo "export NODE_TLS_REJECT_UNAUTHORIZED=0">> ~/.bashrc`
- Restart VS Code and try the sign-in process again

On Windows:

- Close Visual Studio Code
- In your Command Prompt or PowerShell window, run the following command: `setx NODE_TLS_REJECT_UNAUTHORIZED 0`
- Restart Visual Studio Code and try the sign-in process again

### Cloudflare Request Failed

If you encounter this error:

```
Request Failed: Request to https://sourcegraph.com/.api/completions/stream?api-version=1&client-name=vscode&client-version=1.34.3 failed with 403 Forbidden
```

It indicates that our web application firewall provider, Cloudflare, has flagged the source IP as suspicious. Consider disabling anonymizers, VPNs, or open proxies. If using a VPN is essential, you can try [1.1.1.1](https://one.one.one.one), which is recognized to be compatible with our services.

### Error with Cody `contextFilters` during chat or editing code

The `contextFilters` setting in Cody is used to control which files are included or excluded when Cody searches for relevant context while answering questions or providing code assistance. Sometimes, you can see the following error:

```
Edit failed to run: file is ignored (due to cody.contextFilters Enterprise configuration setting)
```

This error occurs when you're trying to work with a file that's been excluded by Cody's enterprise-level `contextFilters` configuration. At times, this can also happen with files that haven't been excluded. First, check with your organization's admin to understand which files are excluded. If the error occurs with a file that's not been excluded, the workaround is to uninstall the Cody plugin, restart your IDE, and then reinstall the latest version of the extension.
This should clear the error.

### VS Code Pro License Issues

If VS Code prompts you to upgrade to Pro despite already having a Pro license, this usually happens because you're logged into a free Cody/Sourcegraph account rather than your Pro account. To fix this:

- Check which account you're currently logged into
- If needed, log out and sign in with your Pro account credentials

### Error exceeding `localStorage` quota

When using Cody chat, you may come across this error:

```bash
Failed to execute 'setItem' on 'Storage': Setting the value of 'user-history:$user_id' exceeded the quota.
```

This error indicates that the chat history size surpasses the capacity of your browser's local storage. Cody stores comprehensive context data with each chat message, contributing to this limitation. To fix this, navigate to https://sourcegraph.your-domain.com/cody/chat and click `Clear Chat History` if your instance is on v5.2.3+. For older versions, clear your browsing data or browser history.

### Record performance traces for Cody

You can get performance traces from the Cody VS Code extension in production with full support for source maps. To do so:

- Start VS Code with a special runtime flag. On macOS, you can do so via the terminal like this:

```bash
/Applications/Visual\ Studio\ Code.app/Contents/MacOS/Electron --inspect-extensions=9333
```

Note that you may need to quit VS Code first, then run that command to re-launch it. It will open all of your windows and tabs again.

- After VS Code is started, head over to Chrome and go to `chrome://inspect`
- Configure the inspect server you started on port `9333` from there. To do so, click on **Open dedicated DevTools for Node**, then go to the **Connection** tab, and make sure to add `localhost:9333` to the list
- Now head back to the `chrome://inspect` tab, and you should see a new remote target that you can inspect
- Clicking this will open a (somewhat reduced) DevTools view. From here you can go to the **Performance** tab to record a trace. Finally, swap tabs to the VS Code window and interact with the extension. Come back later to stop the recording and export it.

### Record a CPU profile for Cody in VS Code

If you are experiencing performance issues with Cody in VS Code, recording a CPU profile can help diagnose the problem. Here’s how you can capture a CPU profile:

1. **Open Developer Tools**:
   - In VS Code, go to the Command Palette (`Ctrl+Shift+P` on Windows/Linux, `Cmd+Shift+P` on macOS).
   - Type `Developer: Show Running Extensions` and select it to open the running extensions view.
2. **Record the CPU Profile**:
   - You should see Cody in the list. Right-click on it and start a host profile.
   - This will show the running profile in the bottom right of the window.
   - Reproduce the issue you are experiencing with Cody.
   - Once you have reproduced the issue, click the message in the bottom right to stop the trace.
3. **Save the CPU Profile**:
   - After stopping the trace, go back to the **Developer: Show Running Extensions** view.
   - Right-click Cody and click `Save extension profile`.
4. **Share the CPU Profile**:
   - Attach the `.cpuprofile` file to your issue report on GitHub or share it with the support team for further analysis.

Following these steps will help the team understand and resolve the performance issues more effectively.

## JetBrains IntelliJ extension

### Access Cody logs

JetBrains logs can be accessed via the **Output** panel.
To access logs, you must first enable Cody logs from the Settings menu. To do so:

- Open the Settings panel (`⌘,` for macOS) (`Ctrl+Alt+S` for Windows/Linux)
- Go to `Sourcegraph & Cody`
- Click on `Cody`
- Check the box to **Enable debug**
- Optionally, check the box to enable **Verbose debug**
- Click **Apply**
- To access the logs, go to **Help > Show Log in Finder**
- Open the `idea.log` file

### High CPU Usage with Cody Agent

If you notice the Cody agent reaching 100% CPU utilization, try the following:

1. Disable the Cody plugin.
1. Re-enable the plugin.

This simple action of turning the plugin off and on again can often resolve the high CPU usage issue.

## Android Studio extension

### Cody cannot start and is stuck on a spinning icon

This issue occurs because JCEF isn't supported in Android Studio, which prevents Cody from starting. The suggested workaround is to:

1. Go to `Help` > `Find Action: Registry`.
1. Scroll to `ide.browser.jcef.sandbox.enable`.
1. Disable that key and close.
1. Then go to `Help` > `Find Action: Choose Boot runtime for the IDE`.
1. Select the last option.
1. Restart Android Studio.

## Regular Expressions

### Asterisks being removed

If you send Cody a prompt with a query string like `$filteredResults = preg_grep('*\.' . basename($inputPath) . '\.*', $fileList);`, the asterisks may be removed because Cody interprets the content as a literal string rather than code. When sharing code with Cody, wrap your code in triple backticks (```) to ensure it's recognized as a code block rather than plain text. For example:

````regex
```
$filteredResults = preg_grep('*\.' . basename($inputPath) . '\.*', $fileList);
```
````

## Forked Repos As Context

If you would like to add a forked repository as Cody context, you may need to add `"search.includeForks": true` to the [global settings](/admin/config/settings#editing-global-settings-for-site-admins) for your instance.

{/* ## Eclipse extension ### See a white screen the first time you open Cody chat This can happen if Eclipse prompts you to set up a password for secure storage and Cody timeouts while waiting. Simply close and re-open the Cody chat. ### "No password provided" in the error log If you see this error in the error log, it happens because the default OS password integration has been corrupted. Go to **Preferences > General > Security > Secure Storage** and ensure your OS integration is checked. Then click **Clear Passwords** at the top, and then click **Change Password**. If you see a dialog saying **An error occurred while decrypting stored values... Do you want to cancel password change?** Click **No**. This will reset the secure storage master password for OS integration. You will be asked if you want to provide additional information for password recovery, which is optional. Click **Apply and Close** and then restart Eclipse. ### General Tips You can open the Cody Log view using the same steps as above, but instead, select **Cody Log**. This will include more information about what Cody is doing, including any errors. There is a copy button at the top right of the log view that you can use to copy the log to your clipboard and send it to us. Be careful not to include any sensitive information, as the log communication is verbose and may contain tokens. Additionally, Eclipse's built-in Error Log can be used to view any uncaught exceptions and their stack traces. You can open the Error Log using the **Window > Show View > Error Log** menu. */}

## OpenAI o1

### Context Deadline Exceeded Error
failed with 500 Internal Server Error: context deadline exceeded"
- Happens even with relatively small inputs (~220 lines)

Solutions:

- Keep input context smaller; aim for less than 200 lines of code initially
- Break down larger requests into smaller chunks
- Start a new chat session if errors persist
- Add "Keep your answer brief!" to prompts when possible

Prevention:

- Import only the most relevant files
- Use file range syntax (e.g., @file:1-100) to limit context
- Focus on specific sections rather than entire codebases

### Truncated Outputs

Symptoms:

- Response cuts off mid-sentence
- Unable to get complete code examples
- "Continue" requests also result in truncation

Solutions:

- Break down complex requests into smaller steps
- Consider using Sonnet 3.5 for tasks requiring longer outputs

Limits:

- Input tokens: 45k
- Output tokens: 4k

### Model Switching Issues

Symptoms:

- Model reverts to Sonnet 3.5 unexpectedly
- "On waitlist" message appears after previously having access
- Unable to select o1 models in the command palette

Solutions:

- Restart IDE/VS Code
- Sign out and sign back in
- Check Pro subscription status
- Contact support if issues persist

### Response Format Errors

Symptoms:

- "Request Failed: Unexpected response format"
- Model stops responding
- Inconsistent output formatting

Solutions:

- Cancel and retry the request
- Start a new chat session
- Reduce context complexity
- Use one-shot prompts with clear requirements

This quickstart guide shows how to use Cody in the VS Code editor. You'll learn about the following tasks:
- Chat with Cody to ask questions about your code
- Get code completions and suggestions
- Use Cody to refactor code
- Use Cody to debug code and ask Cody to fix bugs

To get the best results from Cody, whether you're exploring a codebase, summarizing a pull request, or generating code, clear and effective prompts are key. This guide will help you write effective prompts to maximize your experience with Cody.
## Why do prompts matter?

Prompts are the foundation of how AI coding assistants like Cody interact with large language models (LLMs). They're not just chat inputs; they guide Cody to give precise, contextual, and actionable code suggestions. While Cody handles a lot of prompt engineering under the hood, the quality of your prompts still plays a key role in shaping the results. So, what defines a great prompt?

## Anatomy of a prompt

A well-crafted prompt has all the key elements to guide Cody in delivering accurate and relevant responses. You don't need to include every element in every prompt, but understanding them can help you write more effectively. Let's split these docs into three parts:

1. **Preparation**: How you'll prepare your code for Cody
2. **Prompting**: How you will create effective prompts for Cody
3. **Example prompts**: Examples of prompts for different use cases

## 1. Preparation

Before you start writing prompts, preparing your codebase for Cody is essential. Here are some key preparation steps:

## Treat Cody like a new team member

When using Cody, it's helpful to treat it like a new team member unfamiliar with your codebase. This approach ensures you provide Cody with the necessary context and information to generate accurate and contextually aware answers. You should focus on providing Cody with all the necessary information, such as the codebase structure, function names, and any relevant docs. The more context and details you provide, the better Cody can assist you in generating relevant and accurate code. For example,

❌ Instead of a vague prompt like:

```
How can I filter products in JavaScript?
```

✅ Provide a more specific prompt with details:

```
I have an array of product objects in JavaScript, each with the following properties: id, name, category, and price. How can I write a function to filter the products by category?
```

## Define a persona or role

Specify a persona or role in your prompt to provide an extra layer of context to guide Cody. For example, asking Cody to act as a **beginner Python developer** can result in simpler, more beginner-friendly code snippets.

## Choose descriptive variable names

Using clear and descriptive names for variables, functions, and classes is essential for making your code readable and understandable for both you and Cody. Avoid abbreviations or ambiguous names that may confuse your AI assistant.

✅ Good example:

```php
function calculateTotalPrice($items, $taxRate) {
    // ...
}
```

❌ Bad example:

```php
function calc($i, $t) {
    // ...
}
```

## Write clear code comments and docstrings

In addition to variable names, comments and docstrings are crucial in guiding Cody's understanding of your code. Treat them as a way to communicate with Cody, just like you would with a new engineer. Explain complex logic, algorithms, or project-specific concepts to give Cody context.

✅ Good example:

```javascript
/**
 * Calculate the shipping cost based on the order total.
 * - For orders under $50, the shipping cost is $5.
 * - For orders between $50 and $100, the shipping cost is $10.
 * - For orders above $100, shipping is free.
 *
 * @param {number} orderTotal - The total amount of the order.
 * @returns {number} The shipping cost, determined by the rules above.
 */
function calculateShippingCost(orderTotal) {
    // Cody will autocomplete here
}
```

A bonus here is that Cody can generate these docstrings for you, so you don't have to write them manually. You can use the **document-code** prompt to do this.
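To see why this matters, here is the kind of completion a docstring like the one above makes possible. This is a hypothetical example of what Cody might generate for `calculateShippingCost`; the exact output will vary:

```javascript
/**
 * A hypothetical completion Cody might produce from the docstring above.
 * The tier rules come straight from the comment:
 * under $50 -> $5, $50 to $100 -> $10, above $100 -> free.
 */
function calculateShippingCost(orderTotal) {
  if (orderTotal < 50) return 5;    // orders under $50 ship for $5
  if (orderTotal <= 100) return 10; // orders between $50 and $100 ship for $10
  return 0;                         // orders above $100 ship free
}

console.log(calculateShippingCost(42));  // 5
console.log(calculateShippingCost(75));  // 10
console.log(calculateShippingCost(150)); // 0
```

Because every rule lives in the docstring, the completion needs no further guesswork, which is exactly the effect you want your comments to have.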
## @-mention context

Cody leverages the `@-mention` syntax to source context via files, symbols, web URLs, and more. By default, Cody automatically detects context from the codebase you're working in via pre-filled context chips in the chat input field. Make sure that when you are working with any codebase, Cody picks up the default context. An empty context chip means Cody will search based on generally available context. You can learn more about context [here](/cody/core-concepts/context).

### Indexing your repositories for context

@-mentioning local and current repositories is only available if your repository is indexed. Enterprise and Enterprise Starter users can ask their admins to add their local project for indexing to get access to @-mention context. Repository indexing is only available for supported [Code Hosts](https://sourcegraph.com/docs/admin/code_hosts); please reach out to your admins if you require assistance with indexing.

## Selecting the right LLM

Cody offers a variety of LLMs for both chat and in-line edits from all the leading LLM providers. Each LLM has its strengths and weaknesses, so it is important to select the right one for your use case. For example, Claude 3.5 Sonnet and GPT-4o are powerful for code generation and provide accurate results, while Gemini 2.0 Flash is a decent choice for cost-effective searches. So, you can always optimize your choice of LLM based on your use case. Learn more about all the supported LLMs [here](/cody/capabilities/supported-models).

## 2. Prompting

Now that your code is well-prepared, let's focus on writing effective prompts for Cody via the following best practices:

## Provide specific information

When using Cody chat, provide as much detail as possible. Include comprehensive details about the problem, the expected behavior, any constraints, and project-specific requirements.

❌ Bad example:

```
How do I calculate discounts based on loyalty points in Laravel?
```

✅ Good example:

```
I need to calculate discounts based on customer loyalty points.
- If the customer has loyalty points above 500, apply a 10% discount.
- If the customer has loyalty points between 200 and 500, apply a 5% discount.
- If the customer has loyalty points below 200, no discount is applied.
Create a function that takes the total amount and loyalty points as input and returns an object with the discount percentage and discount amount.
```

## Provide specific context

While preparing your codebase for Cody, you learned about the importance of context chips. In addition to this default context, you can provide additional and more specific context to help Cody better understand your codebase. You can continue to `@-mention` files, symbols, and other context sources (as supported by your Cody tier) to make your search more specific and granular. Approach this as if explaining the situation to a new team member. You should:

- Reference important files and symbols
- Provide examples from other similar functions

## Provide examples and test cases

Include examples and test cases when applicable to clarify your expectations. Demonstrate edge cases or handling of special conditions to guide Cody in generating robust code.

❌ Bad example:

```
I need to validate email addresses in JavaScript
```

✅ Good example:

```
Create a function to validate email addresses in JavaScript.
Return true for valid email addresses and false for invalid ones.
Here are some example test cases:
Valid:
- "john@example.com"
- "jane.doe@example.co.uk"
Invalid:
- "john@example"
- "jane.doe@example."
- "invalid.email"
Please write the function
```

## Iterate and refine

Start with a general prompt and incrementally add more details based on Cody's responses. Take advantage of the fact that you can chat with Cody. Lean into the back-and-forth conversation, especially if you didn't like Cody's first response. Review the generated code or suggestions, provide feedback, and refine your prompts to get the desired results.

Initial prompt:

```
I need to calculate the total price of an order. Help me write the function
```

Refined prompt:

```
Thanks. I forgot to mention:
- The function should take an array of order items as input
- We need to apply a 10% discount if the total price exceeds $100
- Final total should be rounded to two decimal places
Please update the function
```

## Leverage Cody's capabilities

You can utilize many of [Cody's other capabilities](/cody/capabilities) for generating boilerplate code, common patterns, and repetitive tasks. Prompt it to assist with unit tests, docs, and error handling to save time and ensure code quality.

✅ Good example:

```
Help me write tests for the `calculateAverageRating` function. Here are the test cases I have in mind:
- Empty ratings array should return 0
- Array with one rating should return that rating
- Array with multiple ratings should calculate the average correctly
Make sure the tests cover any edge cases or potential issues.
```

You can also use the **generate-unit-tests** prompt to generate tests for your code.

## Miscellaneous information

Try adding any supplementary details regarding comments, debugging guidelines, error management, required dependencies, or coding styles. For instance, when directing Cody to implement a database query in SQL, specify the need for parameterized queries to prevent SQL injection vulnerabilities and provide suggestions for optimizing query performance.

## Prompts Library

To accelerate and automate your work, you can leverage Cody's Prompt Library, which helps you build customizable building blocks (prompts), share your best prompts with your teammates, and enables site administrators to promote the best prompts to the rest of the organization. The Prompt Library is a system for creating and sharing customizable prompts. It is explicitly designed for scalability, repeatability, and flexibility. Learn more about [Prompts and the Prompt Library here](/cody/capabilities/commands).

## Example Prompts

Let's examine some examples of good and reusable prompts that you can create via the Prompt Library.
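For instance, a reusable prompt saved in the Prompt Library might look like this (a hypothetical example; adapt the instructions and context to your team's conventions):

```
Review the selected code for error-handling gaps.
- Flag any function that can throw but has no try/catch or error return
- Suggest a fix for each gap, following our existing error-handling style
- Keep the summary under ten bullet points
```

A prompt like this works well as a shared building block because it encodes a repeatable review task once, and every teammate can then run it against their own selection.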
Find answers to the most common questions about Cody.

## General

### Does Cody train on my code?

For Enterprise customers, Sourcegraph will not train on your company's data. For Free and Pro tier users, Sourcegraph will not train on your data without your permission.

Our third-party Language Model (LLM) providers do not train on your specific codebase. Cody operates by following a specific process to generate answers to your queries:

- **User query**: A user asks a question
- **Code retrieval**: Sourcegraph, our underlying code intelligence platform, performs a search and code intelligence operation to retrieve code snippets relevant to the user's question. During this process, strict permissions are enforced to ensure that only code that the user has read permission for is retrieved
- **Prompt to Language Model**: Sourcegraph sends a prompt, along with the retrieved code snippets, to a Language Model (LLM). This prompt provides the context for the LLM to generate a meaningful response
- **Response to user**: The response generated by the LLM is then sent back to Cody and presented to the user

This process ensures that Cody can provide helpful answers to your questions while respecting data privacy and security by not training on or retaining your specific code.

### Does Cody work with self-hosted Sourcegraph?

Yes, Cody is compatible with self-hosted Sourcegraph instances. However, there are a few considerations:

- Cody operates by sending code snippets (up to 28 KB per request) to a third-party cloud service. By default, this service is Anthropic but can also be OpenAI
- To use Cody effectively, your self-hosted Sourcegraph instance must have internet access for these interactions with external services

### Is Cody licensed for private code, and does it allow GPL-licensed code?

There are no checks or exclusions for Cody PLG (VS Code, JetBrains) for private and GPL-licensed code. We are subject to whatever the LLMs are trained on. However, Cody can be used with [StarCoder for autocomplete](/cody/clients/enable-cody-enterprise#use-starcoder-for-autocomplete), which is trained only on permissively licensed code.

### Is there a public-facing Cody API?

Currently, there is no public-facing Cody API available.

### Does Cody require Sourcegraph to function?

Yes, Cody relies on Sourcegraph for two essential functions:

- It is used to retrieve context relevant to user queries
- Sourcegraph acts as a proxy for the LLM provider to facilitate the interaction between Cody and the LLM

### What programming languages does Cody support?

Cody supports a wide range of programming languages, including:

- JavaScript
- TypeScript
- PHP
- Python
- Java
- C/C++
- C#
- Ruby
- Go
- SQL
- Swift
- Objective-C
- Perl
- Rust
- Kotlin
- Scala
- Groovy
- R
- MATLAB
- Dart
- Lua
- Julia
- COBOL
- Shell scripting languages (like Bash, PowerShell)

Cody's response quality on a programming language depends on many factors, including the underlying LLM being used. We monitor accuracy metrics across all languages and regularly make improvements. [Let us know](https://community.sourcegraph.com/) if you're seeing poor quality on a particular programming language.

### Can Cody answer non-programming questions?

Cody Chat is optimized for coding-related use cases and can be used primarily for reviewing, analysis, testing, writing, and editing of software code. Use of Cody for any other purposes is against our [acceptable use policy](https://sourcegraph.com/terms/aup) and may result in your account being restricted.

### What happened to the Cody App?
We've deprecated the Cody App to streamline the experience for our Cody Free and Cody Pro users. The Cody App is no longer available for download.

## Embeddings

### Why were embeddings removed once my instance was upgraded to v5.3?

Cody leverages **Sourcegraph Search** as a primary context provider, which comes with the following benefits:

- **More secure**: No code is sent to a third-party embedding API
- **Easier to manage**: Less tech debt for embeddings setup and no need for refreshes
- **More repos**: Sourcegraph Search scales to larger repos and to a greater number of them. Users on Enterprise instances can now select multiple repos as context sources from within the IDE
- **Equal, or better, quality**: Sourcegraph Search provides high-quality retrieval, as tested over the last ten years. When a customer sees degradation, we will be ready to respond quickly. We leverage multiple retrieval mechanisms to give Cody the right context and will be constantly iterating to improve Cody's quality. The most important aspect is getting the files from the codebase, not the specific algorithm used to find those files.

### Why are embeddings no longer supported on Cody PLG and Enterprise?

Cody does not support embeddings on Cody PLG and Cody Enterprise because we have replaced them with Sourcegraph Search. There are two driving factors:

- The need for a retrieval system that can scale across repos and to repos of greater size
- A system that is secure and requires low maintenance on the part of users

Leveraging Sourcegraph Search allowed us to deliver these enhancements.

## LLM Data Sharing and Retention

### Is any of my data sent to DeepSeek?

Our autocomplete feature uses the open source DeepSeek-Coder-V2 model, which is hosted by Fireworks.ai in a secure single-tenant environment located in the USA. No customer chat or autocomplete data (such as chat messages, or context such as code snippets or configuration) is stored by Fireworks.ai. Sourcegraph does not use models hosted by DeepSeek (the company) and does not send any data to the same.

## Third party dependencies

### What is the default `sourcegraph` provider for completions?

The default provider for completions, specified as `"provider": "sourcegraph"`, refers to the [Sourcegraph Cody Gateway](/cody/core-concepts/cody-gateway). The Cody Gateway facilitates access to completions for Sourcegraph enterprise instances by leveraging third-party services such as Anthropic and OpenAI.

### What third-party cloud services does Cody depend on?

Cody relies on one primary third-party dependency, Anthropic's Claude API. It can also be used with the OpenAI API configuration. These dependencies remain consistent when utilizing the [default `sourcegraph` provider, Cody Gateway](/cody/core-concepts/cody-gateway), which uses the same third-party providers.

### What is the retention policy for Anthropic and OpenAI?

Please refer to the [terms and conditions](https://about.sourcegraph.com/terms/cody-notice) for details regarding the retention policy for data managed by Anthropic and OpenAI.

### Can I use my own API keys?

Yes, [you can use your own API keys](https://sourcegraph.com/docs/cody/clients/install-vscode#experimental-models). However, this is an experimental feature. Bring-your-own-API-key is fully supported in the Enterprise plan.

### Can I use Cody with my Cloud IDE?
Yes, Cody supports the following cloud development environments:

- vscode.dev and GitHub Codespaces (install from the VS Code extension marketplace)
- Any editor supporting the [Open VSX Registry](https://open-vsx.org/extension/sourcegraph/cody-ai), including [Gitpod](https://www.gitpod.io/blog/boosting-developer-productivity-unleashing-the-power-of-sourcegraph-cody-in-gitpod), Coder, and `code-server` (install from the [Open VSX Registry](https://open-vsx.org/extension/sourcegraph/cody-ai))

### Can I use my LLM of preference to chat with Cody on CLI?

Yes, you can. In the CLI, use the following command to get started. Replace `$name_of_the_model` with the LLM model of your choice:

```
cody chat --model '$name_of_the_model' -m 'Hi Cody!'
```

For example, to use Claude 3.5 Sonnet, you'd run the following command in your terminal: `cody chat --model 'claude-3.5-sonnet' -m 'Hi Cody!'`

### Sign-in dialog gets stuck with Kaspersky Antivirus

**Problem:** When attempting to sign in, users may encounter a perpetual `"Signing in to Sourcegraph..."` dialog that never completes. This issue persists across different VS Code extension versions (e.g., 1.40-1.48) and browsers (Chrome, Edge, Firefox). In the browser console at `accounts.sourcegraph.com/sign-in`, you might see an error: `"Uncaught ApolloError: value.toString is not a function"`.

**Solution:** This issue is typically caused by Kaspersky Antivirus interfering with the authentication process. To resolve:

- Locate the Kaspersky icon in your system tray
- Right-click on the Kaspersky icon
- Select "Stop Protection"
- Right-click the icon again
- Select "Exit"

In this guide, you'll learn how to use Cody to generate unit tests for your codebase.
Writing unit tests for your code is important for a robust and reliable codebase. These unit tests will help you ensure that your code works as expected and doesn't break in the future. Writing these tests can appear cumbersome, but Cody can automatically generate unit tests for your codebase. This guide will teach you to generate a unit test using Cody with real-time code examples. ## Prerequisites Before you begin, make sure you have the following: - Access to Sourcegraph and are logged into your account - Cody VS Code extension installed and connected to your Sourcegraph instance > NOTE: Cody also provides extensions for JetBrains which are not covered in this guide. ## Generate unit tests with Cody One of the prominent commands in Cody is **Generate unit tests** (`/test`). All you need to do is select a code block and ask Cody to write the tests for you. For this guide, let's use an example that converts any `given date` into a human-readable description of the time elapsed between the `given date` and the `current date`. ## Using the `/test` command Cody offers a default `/test` command that you can use to generate unit tests for your codebase. Inside the Cody chat window, type `/test` and select the code block for which you want to generate a unit test. Hit **Enter**, and Cody will start writing a unit test for you. > NOTE: The unit test command is only available in VS Code and JetBrains editor Cody extensions. Let's create a unit test for the `getHumanReadableTime()` function in the `date.js` file. ```js /** * Returns a human-readable string describing the time difference between the given date string and now. * * Calculates the difference in years, months, and days between the given date string and the current date. * Formats the time difference into a descriptive string like "2 years 3 months ago". * Handles singular/plural years/months/days. * If the given date is in the future, returns the string "today". */ export function getHumanReadableTime(dateString) { const startDate = new Date(dateString); const currentDate = new Date(); const years = currentDate.getFullYear() - startDate.getFullYear(); const months = currentDate.getMonth() - startDate.getMonth(); const days = currentDate.getDate() - startDate.getDate(); let timeAgoDescription = ''; if (years > 0) { timeAgoDescription += `${years} ${years === 1 ? 'year' : 'years'}`; } if (months > 0) { if (timeAgoDescription !== '') { timeAgoDescription += ' '; } timeAgoDescription += `${months} ${months === 1 ? 'month' : 'months'}`; } if (days > 0) { if (timeAgoDescription !== '') { timeAgoDescription += ' '; } timeAgoDescription += `${days} ${days === 1 ? 'day' : 'days'}`; } if (timeAgoDescription === '') { timeAgoDescription = 'today'; } else { timeAgoDescription += ' ago'; } return timeAgoDescription; } ``` For such an example, a unit test that checks the basic validation of expected output for recent dates, older dates, current date, and invalid inputs is required. As shown in the demo above, Cody intelligently generated a unit test for you with just a single command. Also, note that with the available context, Cody analyzed the codebase and automatically learned that you are using Vite. So, it tailored the test to be compatible with Vite's testing framework. This shows how Cody leverages context to provide relevant, framework-specific unit tests. If the generated test does not meet your requirements, you can continue the chat thread with follow-up questions, and Cody will help you create a unit test of your choice. 
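The exact tests Cody produces depend on your project setup; for the Vite-based example above, the generated suite might look something like this sketch (hypothetical output, shown with Vitest):

```js
import { describe, expect, it } from 'vitest';
import { getHumanReadableTime } from './date';

describe('getHumanReadableTime', () => {
  it("returns 'today' for the current date", () => {
    expect(getHumanReadableTime(new Date().toISOString())).toBe('today');
  });

  it('describes a date exactly one year in the past', () => {
    // Note: this sketch ignores rare calendar edge cases such as leap days
    const oneYearAgo = new Date();
    oneYearAgo.setFullYear(oneYearAgo.getFullYear() - 1);
    expect(getHumanReadableTime(oneYearAgo.toISOString())).toBe('1 year ago');
  });

  it('pluralizes years for older dates', () => {
    const threeYearsAgo = new Date();
    threeYearsAgo.setFullYear(threeYearsAgo.getFullYear() - 3);
    expect(getHumanReadableTime(threeYearsAgo.toISOString())).toBe('3 years ago');
  });
});
```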
## Trying out the generated test

Let's run the generated test to see whether it works. Inside your terminal, run the following command:

```bash
npm run test
```

All the tests should pass. That's it! You've successfully generated a unit test for your codebase using Cody.

In this guide, you'll learn how to use Cody to build a user interface for your front-end apps.
## WIP

That's it! You've successfully built a front-end using Cody.

Learn how to configure Cody via `modelConfiguration` on a Sourcegraph Enterprise instance.
This section includes examples of how to configure Cody to use Sourcegraph-provided models with `modelConfiguration`. These examples will use the following:
- [Minimal configuration](/cody/enterprise/model-configuration#configure-sourcegraph-provided-models)
- [Using model filters](/cody/enterprise/model-configuration#model-filters)
- [Change default models](/cody/enterprise/model-configuration#default-models)

## Sourcegraph-provided models and BYOK (Bring Your Own Key)

By default, Sourcegraph is fully aware of several models from the following providers:

- "anthropic"
- "google"
- "fireworks"
- "mistral"
- "openai"

### Override configuration of a model provider

Instead of Sourcegraph using its own servers to make LLM requests, it is possible to bring your own API keys for a given model provider. For example, if you wish for all Anthropic API requests to go directly to your own Anthropic account and use your own API keys instead of going via Sourcegraph's servers, you could override the `anthropic` provider's configuration:

```json
{
  "cody.enabled": true,
  "modelConfiguration": {
    "sourcegraph": {},
    "providerOverrides": [
      {
        "id": "anthropic",
        "displayName": "Anthropic BYOK",
        "serverSideConfig": {
          "type": "anthropic",
          "accessToken": "token",
          "endpoint": "https://api.anthropic.com/v1/messages"
        }
      }
    ],
    "defaultModels": {
      "chat": "anthropic::2024-10-22::claude-3.5-sonnet",
      "fastChat": "anthropic::2023-06-01::claude-3-haiku",
      "codeCompletion": "fireworks::v1::deepseek-coder-v2-lite-base"
    }
  }
}
```

In the configuration above, we:

- Enable Sourcegraph-provided models and do not set any overrides (note that `"modelConfiguration.modelOverrides"` is not specified)
- Route requests for Anthropic models directly to the Anthropic API (via the provider override specified for "anthropic")
- Route requests for other models (such as the Fireworks model for "autocomplete") through Cody Gateway

### Partially override provider config in the namespace

If you want to override the provider config for some models in the namespace and use the Sourcegraph-configured provider config for the rest, you can route requests directly to the LLM provider (bypassing the Cody Gateway) for some models while using the Sourcegraph-configured provider config for the others.

Example configuration:

```json
{
  "cody.enabled": true,
  "modelConfiguration": {
    "sourcegraph": {},
    "providerOverrides": [
      {
        "id": "anthropic-byok",
        "displayName": "Anthropic BYOK",
        "serverSideConfig": {
          "type": "anthropic",
          "accessToken": "token",
          "endpoint": "https://api.anthropic.com/v1/messages"
        }
      }
    ],
    "modelOverrides": [
      {
        "modelRef": "anthropic-byok::2023-06-01::claude-3.5-sonnet",
        "displayName": "Claude 3.5 Sonnet",
        "modelName": "claude-3-5-sonnet-latest",
        "capabilities": ["edit", "chat"],
        "category": "accuracy",
        "status": "stable",
        "contextWindow": {
          "maxInputTokens": 45000,
          "maxOutputTokens": 4000
        }
      }
    ],
    "defaultModels": {
      "chat": "anthropic-byok::2023-06-01::claude-3.5-sonnet",
      "fastChat": "anthropic::2023-06-01::claude-3-haiku",
      "codeCompletion": "fireworks::v1::deepseek-coder-v2-lite-base"
    }
  }
}
```

In the configuration above, we:

- Enable Sourcegraph-supplied models (the `sourcegraph` field is not empty or `null`)
- Define a new provider with the ID `"anthropic-byok"` and configure it to use the Anthropic API
- Since this provider is unknown to Sourcegraph, no Sourcegraph-supplied models are available for it. Therefore, we add a custom model in the `"modelOverrides"` section
- Use the custom model configured in the previous step (`"anthropic-byok::2023-06-01::claude-3.5-sonnet"`) for `"chat"`.
Requests are sent directly to the Anthropic API as set in the provider override
- For `"fastChat"` and `"autocomplete"`, we use Sourcegraph-provided models via Cody Gateway

## Config examples for various LLM providers

Below are configuration examples for setting up various LLM providers using BYOK. These examples are applicable whether or not you are using Sourcegraph-supported models.

- In this section, all configuration examples have Sourcegraph-provided models disabled. Please refer to the previous section to use a combination of Sourcegraph-provided models and BYOK.
- Ensure that at least one model is available for each Cody feature ("chat" and "autocomplete"), regardless of the provider and model overrides configured. To verify this, [view the configuration](/cody/enterprise/model-configuration#view-configuration) and confirm that appropriate models are listed in the `"defaultModels"` section.

Along with the core features, Cody Enterprise offers additional features to enhance your coding experience.
## IDE token expiry

Site administrators can set the duration of access tokens for users connecting Cody from their IDEs (VS Code, JetBrains, etc.). This can be configured from the **Site admin** page of the Sourcegraph Enterprise instance. Available options include **7, 14, 30, 60, and 90 days**.

## Guardrails

Learn how to configure Cody via `completions` on a Sourcegraph Enterprise instance.
Learn about Cody's token limits and how to manage them.
For all models, Cody allows up to **4,000 tokens of output**, which is approximately **500-600** lines of code. For Claude 3 Sonnet models, Cody tracks two separate token limits:

- The @-mention context is limited to **30,000 tokens** (~4,000 lines of code) and can be specified using the @-filename syntax. The user explicitly defines this context, which provides specific information to Cody.
- Conversation context is limited to **15,000 tokens**, including user questions, system responses, and automatically retrieved context items. Apart from user questions, Cody generates this context automatically.

All other models are currently capped at **7,000 tokens** of shared context between the `@-mention` context and chat history. Here's a detailed breakdown of the token limits by model:

Learn how Cody makes use of Keyword Search to gather context.
Keyword search is the traditional approach to text search. It splits content into terms and builds a mapping from terms to documents. At query time, it extracts terms from the query and uses the mapping to retrieve matching documents. Both Cody chat and completions use Keyword Search, and it works out of the box without any additional setup. Cody with Keyword Search searches your local VS Code workspace, making it a cost-effective and time-saving solution. Developers whose enterprise admin has set up Cody with a Code Search instance can seamlessly access it from their local machines.
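To make the term-to-document mapping concrete, here is a minimal sketch of a keyword (inverted) index. This is an illustrative toy, not Cody's actual implementation:

```js
// Toy inverted index: maps each term to the set of documents containing it.
function buildIndex(docs) {
  const index = new Map();
  docs.forEach((text, docId) => {
    for (const term of text.toLowerCase().split(/\W+/).filter(Boolean)) {
      if (!index.has(term)) index.set(term, new Set());
      index.get(term).add(docId);
    }
  });
  return index;
}

// At query time, extract terms from the query and return documents matching any of them.
function search(index, query) {
  const hits = new Set();
  for (const term of query.toLowerCase().split(/\W+/).filter(Boolean)) {
    for (const docId of index.get(term) ?? []) hits.add(docId);
  }
  return [...hits];
}

const docs = ['function parses the config file', 'config values are validated here'];
console.log(search(buildIndex(docs), 'config file')); // [0, 1]
```

Real keyword search adds ranking, stemming, and query rewriting on top of this structure, but the core idea of looking up query terms in a prebuilt mapping is the same.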
Learn about all the core concepts and fundamentals that help Cody provide codebase-aware answers.

[Cody Enterprise](/cody/clients/enable-cody-enterprise) can be deployed via Sourcegraph Cloud or on your self-hosted infrastructure. This page describes the architecture diagrams for Cody deployed in different Sourcegraph environments.
{/* Figma source: https://www.figma.com/file/lAPHpdhtEmOJ22IQXVZ0vs/Cody-architecture-diagrams-SQS-draft-2024-04?type=whiteboard&node-id=0-1&t=blg78H2YXXbdGSPc-0 */}

## Sourcegraph Cloud deployment

This is a recommended deployment for Cody Enterprise. It uses the Sourcegraph Cloud infrastructure and Cody Gateway.

Learn how you can use embeddings with Cody for better code understanding.
Understand how context helps Cody write more accurate code.
Context refers to any additional information provided to help Cody understand and write code relevant to your codebase. While LLMs have extensive knowledge, they lack context about an individual or organization's codebase. Cody's ability to provide context-aware code responses is what sets it apart.

## Why is context important?

Context and [methods of retrieving context](#context-sources) are crucial to the quality and accuracy of AI. Cody relies on its ability to retrieve context from user codebases to provide reliable and accurate answers to developers' questions. When Cody has access to the most relevant context about your codebase, it can:

- Answer questions about your codebase
- Produce unit tests and docs
- Generate code that aligns with the libraries and style of your codebase
- Significantly reduce the work required to translate LLM-provided answers into actionable value for your users

## Context sources

Cody uses a variety of sources to retrieve context relevant to the user input. These sources include:

- **Keyword Search**: A traditional text-based search method that finds keywords matching the user input. When needed, queries are automatically rewritten to include more relevant terms.
- **Sourcegraph Search**: The powerful native Sourcegraph Search API. Queries are sent to the SG instance (managed or self-hosted), and search is done using the SG search stack. Relevant documents are returned to the user IDE for use by the LLM.
- **Code Graph**: Analyzing the structure of the code, Cody examines how components are interconnected and used, finding context based on code elements' relationships.

All these methods collectively ensure Cody's ability to provide relevant and high-quality context to enhance your coding experience.

## Cody context fetching features

Cody uses @-mentions to retrieve context from your codebase. Inside the chat window, there is an `@` icon that you can click to select a context source. Alternatively, you can press `@` to open the context picker. Based on your Cody tier, you can @-mention the following:

| **Tier**       | **Client**    | **Files** | **Symbols** | **Web URLs** | **Remote Files/Directories** | **OpenCtx** |
| -------------- | ------------- | --------- | ----------- | ------------ | ---------------------------- | ----------- |
| **Free/Pro**   | VS Code       | ✅        | ✅          | ✅           | ❌                           | ✅          |
|                | JetBrains     | ✅        | ❌          | ✅           | ❌                           | ❌          |
|                | Visual Studio | ✅        | ✅          | ✅           | ❌                           | ❌          |
|                | Cody Web      | ✅        | ✅          | ✅           | ❌                           | ❌          |
| **Enterprise** | VS Code       | ✅        | ✅          | ✅           | ✅                           | ✅          |
|                | JetBrains     | ✅        | ❌          | ✅           | ✅                           | ❌          |
|                | Visual Studio | ✅        | ✅          | ✅           | ✅                           | ✅          |
|                | Cody Web      | ✅        | ✅          | ✅           | ✅                           | ❌          |

## Repo-based context

Cody supports repo-based context. You can link single or multiple repositories based on your tier. Here's a detailed breakdown of the number of repositories supported by each client for Cody Free, Pro, and Enterprise users:

| **Tier**       | **Client**    | **Repositories** |
| -------------- | ------------- | ---------------- |
| **Free/Pro**   | VS Code       | 1                |
|                | JetBrains     | 1                |
|                | Visual Studio | 1                |
| **Enterprise** | Cody Web      | Multi            |
|                | VS Code       | Multi            |
|                | JetBrains     | Multi            |
|                | Visual Studio | Multi            |

## How does context work with Cody prompts?

Cody works in conjunction with an LLM to provide codebase-aware answers. The LLM is a machine learning model that generates text in response to natural language prompts. However, the LLM doesn't inherently understand your codebase or specific coding requirements. Cody bridges this gap by generating context-aware prompts.
A typical prompt has three parts:

- **Prefix**: An optional description of the desired output, often derived from predefined [Prompts](/cody/capabilities/commands) that specify tasks the LLM can perform
- **User input**: The information provided, including your code query or request
- **Context**: Additional information that helps the LLM provide a relevant answer based on your specific codebase

## Impact of context: LLM vs Cody

When the same prompt is sent to a standard LLM, the response may lack specifics about your codebase. In contrast, Cody augments the prompt with context from relevant code snippets, making the answer far more specific to your codebase. This difference underscores the importance of context in Cody's functionality.

## Manage Cody context window size

While Cody aims to provide maximum context for each prompt, there are limits to ensure efficiency. For more details, see our [docs on token limits](/cody/core-concepts/token-limits). Site administrators can update the maximum context window size to meet their specific requirements. While using fewer tokens is a cost-saving solution, it can also cause errors. For example, using an `edit` or `describe` type prompt with too small a context window might produce errors like `You've selected too much code`. Using more tokens usually produces higher-quality responses but also increases response times. In general, it's recommended not to modify the token limit. However, if needed, you can set it to a value that should not compromise quality or generate errors.

Learn how Cody Gateway powers the default Sourcegraph provider for completions, enabling Cody features for Sourcegraph Enterprise customers.
Understand what Code Graph is and how Cody uses it to gather context.
Code Graph is a key component of Cody's capacity to generate contextual responses based on your codebase. It involves analyzing the structure of the code rather than treating it as plain text. Cody examines how different components of the codebase are interconnected and how they are used. This method is dependent on the code's structure and inheritance relationships. It can help Cody find context related to your input based on how code elements are linked and utilized.

## Code Graph data

Code Graph data refers to the information that describes various semantic elements within your source code, like definitions, references, symbols, and doc comments. This dataset is produced by an indexer and subsequently transferred to a Sourcegraph instance. The process of generating this data can vary based on factors such as the programming language and build system in use. In some cases, Sourcegraph can automatically create this data through auto-indexing within the platform itself. Alternatively, you may need to set up a periodic CI job specifically designed to produce and upload this index to your Sourcegraph instance.

Learn how to use Cody and its features with the VS Code editor.
The Cody extension by Sourcegraph enhances your coding experience in VS Code by providing intelligent code suggestions, context-aware autocomplete, and advanced code analysis. This guide will walk you through the steps to install and set up Cody within your VS Code environment.

Learn how to use Cody and its features with the Visual Studio editor.
Learn how to use Cody and its features with the Neovim editor.
Learn how to use Cody and its features with JetBrains editors.
The Cody plugin by Sourcegraph enhances your coding experience in your IDE by providing intelligent code suggestions, context-aware completions, and advanced code analysis. This guide will walk you through the steps to install and set up Cody within your JetBrains environment.

Learn how to use Cody and its features with the Eclipse editor.
Learn how to install the `cody` command-line tool and use the `cody chat` subcommand.
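As a quick illustration, once the CLI is installed and you're authenticated, you can chat straight from the terminal (a minimal sketch; the `--model` and `-m` flags follow the example shown in the FAQ above):

```bash
# Ask Cody a question from the terminal
cody chat -m 'What does the getHumanReadableTime function in date.js do?'

# Pin a specific model for the reply
cody chat --model 'claude-3.5-sonnet' -m 'Hi Cody!'
```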
There are multiple ways to use Cody: you can install its extension in your favorite IDEs, access it via the Sourcegraph web app, or use it through the Cody CLI.
This document compares features and capabilities across different clients and platforms like VS Code, JetBrains, and Sourcegraph.com (Web UI).
## Chat

| **Feature**                              | **VS Code** | **JetBrains** | **Visual Studio** | **Web**               | **CLI** |
| ---------------------------------------- | ----------- | ------------- | ----------------- | --------------------- | ------- |
| Chat                                     | ✅          | ✅            | ✅                | ✅                    | ✅      |
| Chat history                             | ✅          | ✅            | ✅                | ✅                    | ❌      |
| Clear chat history                       | ✅          | ✅            | ✅                | ✅                    | ❌      |
| Edit sent messages                       | ✅          | ✅            | ✅                | ✅                    | ❌      |
| SmartApply/Execute                       | ✅          | ❌            | ❌                | ❌                    | ❌      |
| Show context files                       | ✅          | ✅            | ✅                | ✅                    | ❌      |
| @-file                                   | ✅          | ✅            | ✅                | ✅                    | ❌      |
| @-symbol                                 | ✅          | ❌            | ✅                | ✅                    | ❌      |
| LLM Selection                            | ✅          | ✅            | ✅                | ✅                    | ❌      |
| **Context Selection**                    |             |               |                   |                       |         |
| Single-repo context                      | ✅          | ✅            | ✅                | ✅                    | ❌      |
| Multi-repo context                       | ❌          | ❌            | ❌                | ✅ (public code only) | ❌      |
| Local context                            | ✅          | ✅            | ✅                | ❌                    | ✅      |
| OpenCtx context providers (experimental) | ✅          | ❌            | ❌                | ❌                    | ❌      |
| **Prompts**                              |             |               |                   |                       |         |
| Access to prompts and Prompt library     | ✅          | ✅            | ✅                | ✅                    | ❌      |
| Promoted Prompts                         | ✅          | ❌            | ❌                | ✅                    | ❌      |

## Code Autocomplete

| **Feature**                                    | **VS Code** | **JetBrains** | **Visual Studio** |
| ---------------------------------------------- | ----------- | ------------- | ----------------- |
| Single and multi-line autocompletion           | ✅          | ✅            | ✅                |
| Cycle through multiple completion suggestions  | ✅          | ✅            | ✅                |
| Accept suggestions word-by-word                | ✅          | ❌            | ❌                |

A few exceptions apply to Cody Pro and Cody Enterprise users:

Cody enhances your coding experience by providing intelligent code suggestions, context-aware completions, and advanced code analysis. These docs will help you use Cody on your Sourcegraph Enterprise instance.
Learn how to use Cody in the web interface with your Sourcegraph.com instance.
In addition to the Cody extensions for [VS Code](/cody/clients/install-vscode), [JetBrains](/cody/clients/install-jetbrains), and [Visual Studio](/cody/clients/install-visual-studio) IDEs, Cody is also available in the Sourcegraph web app. Community users can use Cody for free by logging into their accounts on Sourcegraph.com, and enterprise users can use Cody within their Sourcegraph instance.

This page lists all the query types that will return search results with the Sourcegraph chat.
This documentation helps you set up HTTP, HTTPS, and SOCKS proxies in VS Code and JetBrains IDEs. It also includes instructions for handling self-signed certificates on macOS and Windows.
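As a quick taste of what that setup involves, here is a minimal sketch of the relevant VS Code user settings (the proxy URL is a placeholder; adjust the host, port, and certificate handling to your environment):

```json
{
  // Route extension traffic, including Cody, through your proxy (placeholder URL)
  "http.proxy": "http://proxy.example.com:3128",
  // Only disable strict SSL if your proxy re-signs traffic with a self-signed certificate
  "http.proxyStrictSSL": false
}
```

VS Code's `settings.json` accepts comments (JSONC), so the snippet can be pasted as-is and then adapted.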
Learn how prompts can automate and accelerate your workflow with Cody.
Cody offers quick, ready-to-use **Prompts** to automate key tasks in your workflow. Prompts are created and saved in the **Prompt Library** and can be accessed from the top navigation bar in the Sourcegraph.com instance.

To run Prompts and access the Prompt Library, you must have the following:

- A free account on Sourcegraph.com or a Sourcegraph Enterprise instance with Cody enabled
- The Cody extension installed in your IDE (VS Code, JetBrains, Visual Studio)

## Prompt Library

The **Prompt Library** allows you to create, edit, share, and save prompts you've created or that have been shared within your organization. You can also search for prompts, filter the list to find a specific prompt by owner, and sort by name or by recently updated.

Go to **Tools > Prompt Library** from the top navigation bar in the Sourcegraph.com instance. Alternatively, you can access the **Prompt Library** from the **Cody** extension in your IDE, which directs you to the Prompt Library page. Here, you can view all prompts (shared with you in an organization or created by you) and some core (built-in) prompts to help you get started.

### Core (built-in) prompts

The core (built-in) prompts are available to all users. These prompts are designed to help you start with Cody and provide a starting point for your own prompts. You get the following core (built-in) prompts:

- document-code
- explain-code
- find-code-smells
- generate-unit-tests

You can run these prompts by clicking the **play** icon next to the prompt name, **copy the prompt permalink**, or **duplicate the prompt** to make a copy and edit it according to your needs.

## Create prompts

Click the **New prompt** button from the **Prompt Library** page.

- Select the **Owner** and **Prompt Name**
- Write a prompt description
- Next, fill out the **Prompt template** box with all your prompt instructions
- You can also add dynamic context that will allow your prompt to use content from different sources like the current selection and current file
- Select the visibility of the prompt, either **Public** or **Private**
- You can mark your prompt as a **draft**. Draft prompts are not visible to other users until you publish them
- Choose the mode of the prompt, whether it will be **Chat only** or can **Edit code**
- Once done, click the **Create prompt** button

There are also a few advanced options that you can configure.

### Draft prompts

You can mark your prompt as a draft. A draft prompt is not visible to everyone. Only you can view, run, and edit your draft prompts until you publish them.

## View prompts

The new prompt is added to the Prompt Library page and appears in the Prompts list in the Cody chat panel (both in the editor and on the web). If your prompt does not appear in the Prompts list, make sure you've logged in with the same account you used to create it. Once the prompt is visible, it's ready to be used by:

- **The prompt's owner** if it is a user
- **All members of the organization** if the prompt's owner is an organization
- **Everyone** if the prompt is marked **Public** (which only site admins can do)

## Edit prompts

To edit a prompt, click the **Edit** button next to the prompt in the Prompt Library and make the necessary changes. You can also use this interface to **Transfer Ownership** of the prompt or delete it from this view.

## Prompt tags

Use additional context sources from outside of your codebase by leveraging OpenCtx providers.
Cody offers a rich set of capabilities and features that help you write better code faster. Learn and understand more about Cody's features and core AI functionality in this section.
You can control and manage what context from your codebase is used by Cody. You can do this by using Cody Context Filters.
Learn how Cody helps you identify errors in your code and provides code fixes.
Cody is optimized to identify and fix errors in your code. Its debugging capability and autocomplete suggestions can significantly accelerate your debugging process, increasing developer productivity. All Cody IDE extensions (VS Code, JetBrains) support code debugging and fixes capabilities.

## Use chat for code fixes

When you encounter a code error, you can use the chat interface and ask Cody about it. You can paste the faulty code snippets directly in the chat window, and Cody will provide a fix. The suggestion can be a corrected code snippet that you copy and paste into your code, or you can ask a follow-up question for additional context to help debug the code. Moreover, you can paste an error message in the chat and ask Cody to provide a list of possible solutions.

Let's look at a simple example to understand how Cody can help you debug your code. The following code snippet should print the sum of two numbers:

```js
function sum(a, b) {
  var result = a + b;
  console.log('The sum is: ' + $result);
}

sum(3, 4);
```

When you run this code, it does not print the summed value because `$result` is not a defined variable (the declared variable is `result`). Cody can help you both identify the error and provide a solution to fix it. Let's debug the code snippet: paste it inside the Cody chat window and ask Cody to fix it.

In addition, Cody helps you reduce the chances of getting syntax and typo errors. The Cody IDE extensions provide context-aware suggestions based on your codebase, helping you avoid common mistakes and reduce debugging time.

## Detecting code smell

Cody can detect code smells early to uphold coding best practices and quality, and it provides suggestions to improve your code. By detecting such potential errors early on, you can avoid scalability and code maintainability issues that might arise in the future. You can detect code smells with the **find-code-smells** prompt from the Prompts drop-down menu in the chat panel. If you want to refine your debugging process, you can create a new prompt from the Prompt Library and use it to debug your code.

## Code Actions

Chat with the AI assistant in your code editor or via the Sourcegraph web app to get intelligent suggestions, code autocompletions, and contextually aware answers.
Learn how Cody helps you get contextually-aware autocompletion for your codebase.
Cody predicts what you're trying to write before you even type it. It offers single-line and multi-line suggestions based on the provided code context, ensuring accurate autocomplete suggestions. Cody autocomplete supports a [wide range of programming languages](/cody/faq#what-programming-languages-does-cody-support) because it uses LLMs trained on broad data. Code autocompletions are optimized for both server-side and client-side performance, ensuring seamless integration into your coding workflow.

The **default** autocomplete model for Cody Free, Pro, and Enterprise users is **[DeepSeek V2](https://huggingface.co/deepseek-ai/DeepSeek-V2)**, which significantly helps boost both the responsiveness and accuracy of autocomplete.

## Cody's autocomplete capabilities

The autocompletion model is designed to enhance speed, accuracy, and the overall user experience. It offers:

- **Increased speed and reduced latency**: The P75 latency is reduced by 350 ms, making the autocomplete function faster
- **Improved accuracy for multi-line completions**: Completions across multiple lines are more relevant and accurately aligned with the surrounding code context
- **Higher completion acceptance rates**: The average completion acceptance rate (CAR) is improved by more than 4%, providing a more intuitive user interaction

## How does autocomplete work?

First, you'll need the following setup:

- A Free or Pro account via Sourcegraph.com or a Sourcegraph Enterprise instance
- A supported editor extension (VS Code, JetBrains, Visual Studio)

The autocomplete feature is enabled by default on all IDE extensions, i.e., VS Code and JetBrains. Generally, there's a checkbox in the extension settings that confirms whether the autocomplete feature is enabled. In addition, some IDEs explicitly support optional autocomplete settings. For example, JetBrains IDEs have settings that allow you to customize the colors and styles of the autocomplete suggestions.

When you start typing, Cody will automatically provide suggestions and context-aware completions based on your coding patterns and the code context. These autocomplete suggestions appear as grayed-out text. Press `Enter` or `Tab` to accept the suggestion.

## Configure autocomplete on an Enterprise Sourcegraph instance

Auto-edit suggests code changes by analyzing cursor movements and typing. After you've made at least one character edit in your codebase, it begins proposing contextual modifications based on your cursor position and recent changes.
Learn about the agentic chat experience, an exclusive chat-based AI agent with enhanced capabilities.
Keep on top of events in your codebase
Watch your code with code monitors and trigger actions to run automatically in response to events.

## Getting started

Anything you can search, you can track and analyze
Learn how to search code across all your repositories and code hosts.
**Code Search** allows you to find, fix, and navigate code with any code host or language across multiple repositories with real-time updates. It deeply understands your code, prioritizing the most relevant results for an enhanced search experience.

Sourcegraph's Code Search empowers you to:

- Use regular expressions, boolean operations, and keyboard shortcuts to unleash the full potential of your searches
- Identify code vulnerabilities in milliseconds with symbol, commit, and diff search capabilities, helping you quickly resolve issues and incidents
- Navigate code seamlessly in an innovative code view for a comprehensive coding experience

## Getting started

Learn and understand more about Sourcegraph's Code Search features and core functionality.
Learn about some of the most commonly asked questions about Code Search.
## Code Search

### Does Code Search work with my repositories?

Code Search works with all your repositories. Likewise, you can also [search our public code](https://sourcegraph.com/search), a codebase of 2 million+ open source repositories.

### Who can search my code?

Public code is searchable by anyone, but your private code can be searched only by users who have access to it.

### Do I need to enable Code Navigation?

No, the default search-based code navigation works out of the box without any configuration. However, for an advanced and customized navigation experience, your site admin can set up precise code navigation.

### What is the max file size limit for Code Search?

By default, files larger than **1 MB** are excluded from search results.

### What programming languages are supported?

Code Search supports almost all programming languages: Java, Python, Go, JavaScript, TypeScript, C#/C/C++, Swift, Objective-C, Kotlin, Ruby, Scala, Rust, Perl, Dart, Erlang, COBOL, Clojure, Lisp, Shell, Terraform, Lua, GraphQL, Thrift, Protobuf, YAML, JSON, Jsonnet, R, PHP, Elixir, Haskell, PowerShell, OCaml, CUDA, Pascal, Verilog, VHDL, Groovy, and Tcl.

### What deployment options are available with Code Search?

Code Search supports the following deployment options: Kubernetes cluster, Amazon EKS or EC2, Google GKE, Microsoft Azure AKS, Docker Compose, and Docker Compose in GCP. Read more in our [deployment docs](/admin/deploy).

## Code Navigation

### Why are my results sometimes incorrect?

If an index is not found for a particular file in a repository, Sourcegraph will fall back to search-based code navigation. You may occasionally see results from search-based code navigation even when you have uploaded an index. This can happen in the following scenarios:

- The line containing the symbol was created or edited between the nearest indexed commit and the commit being browsed.
- The Find references panel may include search-based results, but only after all of the precise results have been displayed. This ensures every symbol has useful code navigation.

### What languages are supported?

Search-based code navigation supports 40 programming languages, including all of the most popular ones: Apex, Clojure, Cobol, C++, C#, CSS, Cuda, Dart, Elixir, Erlang, Go, GraphQL, Groovy, Haskell, Java, JavaScript, Jsonnet, Kotlin, Lisp, Lua, OCaml, Pascal, Perl, PHP, PowerShell, Protobuf, Python, R, Ruby, Rust, Scala, Shell, Starlark, Strato, Swift, Tcl, Thrift, TypeScript, Verilog, and VHDL.

### Why does it sometimes time out?

The [symbol search](/code-search/types/symbol) performance section describes query paths and performance. Consider using [Rockskip](/code-search/code-navigation/rockskip) if you're experiencing frequent timeouts.

This page provides docs about how Search Snippets work with Sourcegraph.
Every project and team has a different set of repositories they commonly work with and queries they perform regularly. Custom search snippets enable users and organizations to quickly filter existing search results with search fragments matching those use cases. A search snippet is any valid query. For example, a search snippet that defines all repositories in the "example" organization would be `repo:^github\.com/example/`. After adding this snippet to your settings, it would appear in the search snippet panel in the search sidebar under a label of your choosing (as of v3.29).
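For illustration, a snippet like that is defined in your settings; a minimal sketch might look like the following (assuming the `search.scopes` settings key, which is how snippets have historically been configured; check your instance's settings schema for the exact key):

```json
{
  "search.scopes": [
    {
      "name": "example org repos",
      "value": "repo:^github\\.com/example/"
    }
  ]
}
```

The `name` becomes the label shown in the search snippet panel, and the `value` is the query fragment appended to your search.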
This page provides docs about using Search Subexpressions in Sourcegraph's Code Search.

Search subexpressions combine groups of [filters](/code-search/queries#filters-all-searches) like `repo:` and [operators](/code-search/queries#boolean-operators) like `or`. Compared to [basic examples](/code-search/queries/examples), search subexpressions allow more sophisticated queries. Here are examples of how they can help you:

1. [Noncompliant spelling where case-sensitivity differs depending on the word](https://sourcegraph.com/search?q=repo:%5Egithub%5C.com/sourcegraph/sourcegraph%24+%28%28Github+case:yes%29+or+%28organisation+case:no%29%29&patternType=keyword).

```sgquery
repo:sourcegraph ((Github case:yes) or (organisation case:no))
```

The above query finds places to change the spelling of `Github` to `GitHub` (case-sensitivity matters) or change the spelling of `organisation` to `organization` (case-sensitivity does not matter).

2. [Search for either a file name or file contents scoped to the same repository](https://sourcegraph.com/search?q=repo:%5Egithub%5C.com/sourcegraph/sourcegraph%24+-file:html+%28file:router+or+newRouter%29&patternType=keyword).

```sgquery
repo:sourcegraph -file:html (file:router or newRouter)
```

The above query finds files whose names match `router` as well as file contents matching `newRouter` in the same repository, while excluding `html` files. This is useful when exploring files or code that interact with a general term like `router`.

3. [Scope file content searches to particular files or repositories](https://sourcegraph.com/search?q=+repo:sourcegraph+%28%28file:schema%5C.graphql+hover%28...%29%29+or+%28file:codeintel%5C.go+%28Line+or+Character%29%29%29&patternType=structural)

```sgquery
repo:sourcegraph ((file:schema.graphql hover(...)) or (file:codeintel.go (Line or Character))) patterntype:structural
```

This combines matches of `hover(...)` in the `schema.graphql` file with matches of `Line` or `Character` in the `codeintel.go` file in the same repository. It is useful for crafting queries that precisely match related fragments of a codebase, e.g., to capture context and share with coworkers.

This page provides docs about Search Contexts in Sourcegraph's Code Search.
Search Contexts help you search the code you care about on Sourcegraph. A search context represents a set of repositories at specific revisions on a Sourcegraph instance that will be targeted by search queries by default.

Every search on Sourcegraph uses a search context. Search contexts can be defined with the contexts selector shown in the search input, or entered directly in a search query.

## Available contexts

**Sourcegraph.com** supports a [set of predefined search contexts](https://sourcegraph.com/contexts?order=spec-asc&visible=17&owner=all) that include:

- The global context, `context:global`, which includes all repositories on Sourcegraph.com.
- Search contexts for various software communities like [CNCF](https://sourcegraph.com/search?q=context:CNCF), [crates.io](https://sourcegraph.com/search?q=context:crates.io), [JVM](https://sourcegraph.com/search?q=context:JVM), and more.

If no search context is specified, `context:global` is used.

**Private Sourcegraph instances** support custom search contexts:

- Contexts owned by a user, such as `context:@username/context-name`, which can be private to the user or public to all users on the Sourcegraph instance.
- Contexts at the global level, such as `context:example-context`, which can be private to site admins or public to all users on the Sourcegraph instance.
- The global context, `context:global`, which includes all repositories on the Sourcegraph instance.

## Using search contexts

The search contexts selector is shown in the search input. All search queries will target the currently selected search context.

To change the current search context, press the contexts selector. All of your search contexts will be shown in the search contexts dropdown. Select or use the filter to narrow down to a specific search context. Selecting a different context will immediately re-run your current search query using the currently selected search context.

Search contexts can also be used in the search query itself. Type `context:` to begin defining the context as part of the search query. When a context is defined in the search query itself, it overrides the context shown in the context selector.

You can also search across multiple contexts at once using the `OR` [boolean operator](/code-search/queries#boolean-operators). For example:

`(context:release1 OR context:release2 OR context:release3) someTerribleBug`

## Organizing search contexts

To organize your search contexts better, you can use a specific context as your default and star any number of contexts. This affects what context is selected when loading Sourcegraph and how the list of contexts is sorted.

### Default context

Any authenticated user can use a search context as their default. To set a default, go to the search context management page, open the "..." menu for a context, and click on "Use as default". If the user doesn't have a default, `global` will be used.

If a user ever loses access to their default search context (e.g., the search context is made private), they will see a warning at the top of the search contexts dropdown menu list and `global` will be used. If a user's default search context is deleted, `global` will immediately be set as their default.

The default search context is always selected when loading the Sourcegraph webapp. The one exception is when opening a link to a search query that does not contain a `context:` filter, in which case the `global` context will be used.

### Starred contexts

Any authenticated user can star a search context.
To star a context, click on the star icon in the search context management page. This will cause the context to appear near the top of their search contexts list. The `global` context cannot be starred.

### Sort order

The order of search contexts in the search results dropdown menu list and in the search context management page is always the following:

- The `global` context first
- The user's default context, if set
- All of the user's starred contexts
- Any remaining contexts available

## Creating search contexts

When search contexts are [enabled on your private Sourcegraph instance](/code-search/features#search-contexts), you can create your own search contexts.

A search context consists of a name, description, and a set of repositories at one or many revisions.

Contexts can be owned by a user, and can be private to the user or public to all users on the Sourcegraph instance. Contexts can also be at the global instance level, and can be private to site admins or public to all users on the Sourcegraph instance.

### Creating search contexts from header navigation

- Go to **User menu > Search contexts** in the top navigation bar.
- Press the **+ Create search context** button.
- In the **Owner** field, choose whether you will own the context or if it will be global to the Sourcegraph instance. **Note**: At present, the owner of a search context cannot be changed after being created.
- In the **Context name** field, type in a short, semantic name for the context. The name can be 32 characters max, and contain alphanumeric and `.` `_` `/` `-` characters.
- Optionally, enter a **Description** for the context. Markdown is supported.
- Choose the **Visibility** of this context.
  - Public contexts are available to everyone on the Sourcegraph instance. Note that private repositories will only be visible to users that have permission to view the repository via the code host.
  - Private contexts can only be viewed by their owner, or in the case of globally owned contexts, by site admins.
- In the **Repositories and revisions** configuration, define which repositories and revisions should be included in the search context. Press **Add repository** to quickly add a template to the configuration.
  - Define repositories with valid URLs.
  - Define revisions as strings in an array. To specify a default branch, use `"HEAD"`.

For example:

```json
[
  {
    "repository": "github.com/sourcegraph/sourcegraph",
    "revisions": [
      "3.15"
    ]
  },
  {
    "repository": "github.com/sourcegraph/src-cli",
    "revisions": [
      "3.11.2"
    ]
  }
]
```

- Press **Test configuration** to validate the repositories and revisions.
- Press **Create search context** to finish creating your search context.

You will be returned to the list of search contexts. Your new search context will appear in the search contexts selector in the search input, and can be [used immediately](#using-search-contexts).

## Query-based search contexts

As of release 3.36, search contexts can be defined with a restricted search query as an alternative to a specific list of repositories and revisions. Allowed filters are: `repo`, `rev`, `file`, `lang`, `case`, `fork`, and `visibility`. `OR` and `AND` expressions are also allowed.
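For example — an illustrative sketch using only the allowed filters, with a placeholder organization name — a query-based context covering all non-fork Go code in an organization could be defined as:

```sgquery
repo:^github\.com/example/ lang:go fork:no
```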
This page provides docs about how Saved Searches work with Sourcegraph.

Saved Searches lets you reuse and share search queries. You can create a saved search for anything, including diffs and commits across all branches of your repositories. Saved Searches functionality is available to both Free and Enterprise Code Search users.

To access or create new Saved Searches in the Sourcegraph web app, click **Tools > Saved Searches** in the top navigation bar.

## Creating saved searches

To create a new saved search:

- Go to the Saved Searches section and click the **New saved search** button
- Fill out the description field and enter the search query
- While writing the query, make sure to include the `patternType:` filter

A `patternType:` filter is required in the query for all saved searches. `patternType` can be `keyword`, `standard`, `literal`, or `regexp`. You cannot create a saved search without defining the `patternType:` field.

Enable the checkbox for **Draft** if you don't want other users to use your saved search. This is useful for testing the query before sharing it with others.

Once done, click the **Create saved search** button to be redirected to the Saved Searches page. Your saved search will appear with a `Secret` label, which means that only you can view and use it. To let others use your saved search, you need to transfer it to an organization and ask the site admin to make it public.

In addition, you can also search within your saved searches and sort your saved searches by name, recently updated, and description.

### Transfer ownership

To transfer ownership of a saved search, click the **Edit** button next to it, click the **Transfer ownership** button, and select the organization to which you want to transfer the saved search.

## Example saved searches

See the [search examples page](/code-search/queries/examples) for a useful list of searches to save.
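For instance — the repository and terms here are illustrative — a saved search that tracks `TODO`s in an organization's Go code, with the required `patternType:` filter included, could be:

```sgquery
repo:^github\.com/example/ lang:go TODO patternType:keyword
```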
Learn and understand about Sourcegraph's Structural Search and core functionality.

Learn and understand about Sourcegraph's Fuzzy Search and core functionality.
Use the fuzzy finder to quickly navigate to a repository, symbol, or file.

To open the fuzzy finder, press `Cmd+K` (macOS) or `Ctrl+K` (Linux/Windows) from any page. Use the dedicated Repos, Symbols, and Files tabs to search only for a repository, symbol, or file. Each tab has a dedicated shortcut:

- **Repos**: Cmd+I (macOS), Ctrl+K (Linux/Windows)
- **Symbols**: Cmd+O (macOS), Cmd+Shift+O (macOS Safari), Ctrl+O (Linux/Windows)
- **Files**: Cmd+P (macOS), Ctrl+P (Linux/Windows)

This page provides a visual breakdown of the Search Query Language with some examples to get you started.
It is complementary to our [Search Query Syntax](/code-search/queries) and illustrates syntax using railroad diagrams instead of tables.

## How to read railroad diagrams?

Follow the lines in these railroad diagrams to see how pieces of syntax combine. When a line splits, it means there are multiple options available. When it is possible to repeat a previous syntax, the split line will loop back on itself like this:

## Basic query

At a basic level, a query consists of [search patterns](#search-pattern) and [parameters](#parameter). Typical queries contain one or more space-separated search patterns that describe what to search, and parameters refine searches by filtering results or changing search behavior. For example,

- [`repo:github.com/sourcegraph/sourcegraph file:schema.graphql The result`](https://sourcegraph.com/search?q=context:global+repo:%5Egithub%5C.com/sourcegraph/sourcegraph%24+file:schema.graphql+%22The+result%22&patternType=keyword)

## Expression

Build query expressions by combining [basic queries](#basic-query) and operators like `AND` or `OR`. Group expressions with parentheses to build more complex expressions. In the absence of parentheses, `AND` operators bind tighter, so `foo or bar and baz` means `foo or (bar and baz)`. You may also use lowercase `and` or `or`. For example,

- [`repo:github.com/sourcegraph/sourcegraph rtr AND newRouter`](https://sourcegraph.com/search?q=repo:%5Egithub%5C.com/sourcegraph/sourcegraph%24+rtr+AND+newRouter&patternType=keyword)

## Search pattern

A pattern to search. By default, the pattern is searched literally. The kind of search may be toggled to change how a pattern matches:

- Perform a [regular expression search](/code-search/queries#regular-expression-search). We support [RE2 syntax](https://golang.org/s/re2syntax).
- Quoting a pattern will perform a literal search.

For example,

- [`foo.*bar.*baz`](https://sourcegraph.com/search?q=foo.*bar.*baz&patternType=regexp)
- [`"foo bar"`](https://sourcegraph.com/search?q=%22foo+bar%22&patternType=regexp)

## Parameter

Search parameters allow you to filter search results or modify search behavior.

### Repo

Search repositories that match the regular expression. A `-` before `repo` excludes the repository. By default, the repository will be searched at the `HEAD` commit of the default branch. You can optionally change the [revision](#revision). For example,

- [`repo:gorilla/mux testroute`](https://sourcegraph.com/search?q=repo:gorilla/mux+testroute&patternType=regexp)
- [`-repo:gorilla/mux testroute`](https://sourcegraph.com/search?q=-repo:gorilla/mux+testroute&patternType=regexp)

### Revision

Search a repository at a given revision, for example, a branch name, commit hash, or Git tag. For example,

- [`repo:^github\.com/sourcegraph/sourcegraph$@75ba004 get_embeddings`](https://sourcegraph.com/search?q=repo:%5Egithub%5C.com/sourcegraph/sourcegraph%24%4075ba004+get_embeddings+&patternType=keyword)
- [`repo:^github\.com/sourcegraph/sourcegraph$ rev:v5.0.0 get_embeddings`](https://sourcegraph.com/search?q=repo:%5Egithub%5C.com/sourcegraph/sourcegraph%24+rev:v5.0.0+get_embeddings&patternType=keyword)

You can search multiple revisions by separating the revisions with `:`. Specify `HEAD` for the default branch.
For example,

- [`repo:^github\.com/sourcegraph/sourcegraph$ rev:v4.5.0:v5.0.0 disableNonCriticalTelemetry`](https://sourcegraph.com/search?q=repo:%5Egithub%5C.com/sourcegraph/sourcegraph%24+rev:v4.5.0:v5.0.0+disableNonCriticalTelemetry&patternType=keyword)
- [`repo:^github\.com/sourcegraph/sourcegraph$@v4.5.0:v5.0.0 disableNonCriticalTelemetry`](https://sourcegraph.com/search?q=context%3Aglobal+repo%3A%5Egithub%5C.com%2Fsourcegraph%2Fsourcegraph%24%40v4.5.0%3Av5.0.0+disableNonCriticalTelemetry&patternType=keyword)

### Revision at time

This page describes the query syntax for Code Search.
A typical search consists of two parts:

* A [search pattern](#search-patterns) containing the terms you want to search, for example `println`
* [Search filters](#filters-all-searches) that scope the search to certain repositories, languages, etc., for example `lang:java`

For a graphical view of Sourcegraph's query syntax, see the [search language reference](/code-search/queries/language).

## Search patterns

This section documents the search pattern syntax in Sourcegraph. To match file content, you need to specify a search pattern. Search patterns are optional when searching [commits](#filters-diff-and-commit-searches-only), [filenames](#filename-search), or [repository names](#repository-name-search).

### Keyword search (default)

Keyword search matches individual terms anywhere in the document or the filename. Use `"..."` to match phrases exactly. Specify regular expressions inside `/.../`.

| Search pattern syntax | Description |
| --- | --- |
| [`foo bar`](https://sourcegraph.com/search?q=foo+bar&patternType=keyword) | Match documents containing both `foo` and `bar` anywhere in the document. |
| [`"foo bar"`](https://sourcegraph.com/search?q=%22foo+bar%22&patternType=keyword) | Match the string `foo bar` exactly. The space between the terms is interpreted literally. The quotes are not matched. |
| [`"foo \"bar\""`](https://sourcegraph.com/search?q=context:global+%22foo+%5C%22bar%5C%22%22&patternType=keyword) | Match the string `foo "bar"` exactly. |
| [`/foo.*bar/`](https://sourcegraph.com/search?q=context:global+repo:%5Egithub%5C.com/sourcegraph/sourcegraph%24+/foo.*bar/&patternType=keyword) | Match the **regular expression** `foo.*bar`. We support [RE2 syntax](https://golang.org/s/re2syntax). |
| [`foo OR bar`](https://sourcegraph.com/search?q=context:global+foo+OR+bar&patternType=keyword) | Match documents containing `foo` _or_ `bar` anywhere in the document. |

| **Operator** | **Example** |
| ------------ | ----------- |
| `or`, `OR` | [`conf.Get( or log15.Error( or after`](https://sourcegraph.com/search?q=repo:%5Egithub%5C.com/sourcegraph/sourcegraph%24+conf.Get%28+or+log15.Error%28+or+after&patternType=regexp) |

Returns file content matching either on the left or right side, or both (set union). The number of results reports the number of matches of both strings. Note that the regex `or` operator `|` may not work as expected with certain filters, for example `file:(internal/repos)|(internal/gitserver)`; to get the expected results, use [subexpressions](/code-search/working/search_subexpressions): `(file:internal/repos or file:internal/gitserver)`
| **Operator** | **Example** |
| ------------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `not`, `NOT` | [`lang:go not file:main.go panic`](https://sourcegraph.com/search?q=lang:go+not+file:main.go+panic&patternType=keyword), [`panic NOT ever`](https://sourcegraph.com/search?q=repo:%5Egithub%5C.com/sourcegraph/sourcegraph%24+panic+not+ever&patternType=keyword) |
`NOT` can be used in place of `-` to negate filters, such as `file`, `content`, `lang`, `repohasfile`, and `repo`. For
search patterns, `NOT` excludes documents that contain the term after `NOT`. For readability, you can also include the
`AND` operator before a `NOT` (i.e. `panic NOT ever` is equivalent to `panic AND NOT ever`).
Learn and walk through some handy Code Search examples.
## Sourcegraph Search Examples on GitHub Pages

Check out the [Sourcegraph Search Examples](https://sourcegraph.github.io/sourcegraph-search-examples/) site for filterable search examples with links to results on sourcegraph.com.

Below are some additional examples that search repositories on [Sourcegraph.com](https://sourcegraph.com/search), our open source code search solution for GitHub and GitLab. You can copy and adapt the following search queries for use on your company’s private instance.

This page describes the process of writing an indexer and details all the recommended indexers that Sourcegraph currently supports.
## Writing an Indexer

The following documentation describes the [SCIP Code Intelligence Protocol](https://github.com/sourcegraph/scip) and explains steps to write an indexer to emit SCIP.

1. Familiarize yourself with the [SCIP protobuf schema](https://github.com/sourcegraph/scip/blob/main/scip.proto)
2. Import or generate SCIP bindings
3. Generate minimal index with occurrence information
4. Test your indexer using [scip CLI](https://github.com/sourcegraph/scip/blob/main/docs/CLI.md)'s `snapshot` subcommand (see the sketch below)
5. Progressively add support for more features with tests
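As a minimal sketch of step 4 — `my-indexer` and the paths here are hypothetical, and the exact flags are documented in the [scip CLI docs](https://github.com/sourcegraph/scip/blob/main/docs/CLI.md) — a snapshot test run might look like:

```bash
# Index a small test project with your (hypothetical) indexer.
my-indexer index ./testdata/project --output index.scip

# Render what the index claims about each source file as snapshot files.
scip snapshot --from index.scip

# Compare the generated snapshots against committed golden files.
git diff --exit-code
```

This guide gives specific instructions for troubleshooting code navigation in your Sourcegraph instance.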
## When are issues related to code intelligence?

Issues are related to Sourcegraph code navigation when the [indexer](/code-search/code-navigation/writing_an_indexer) is one that we build and maintain.

A customer issue should **definitely** be routed to code navigation if any of the following are true.

- Precise code navigation queries are slow
- Precise code navigation queries yield unexpected results

A customer issue should **possibly** be routed to code navigation if any of the following are true.

- Search-based code navigation queries are slow
- Search-based code navigation queries yield unexpected results

A customer issue should **not** be routed to code navigation if any of the following are true.

- The indexer is listed in [LSIF.dev](https://lsif.dev/) and _it is not_ one that we maintain. Instead, flag the status and maintainer of the relevant indexer with the customer, and suggest they reach out directly

## Gathering evidence

Before bringing a code navigation issue to the engineering team, the site-admin or customer engineer should gather the following details. Not all of these details will be necessary for all classes of errors.

### Sourcegraph instance details

The following details should always be supplied.

- The Sourcegraph instance version
- The Sourcegraph instance deployment type (e.g. server, pure-docker, docker-compose, k8s)
- The memory, CPU, and disk resources allocated to the following containers:
  - frontend
  - precise-code-intel-worker
  - codeintel-db
  - blobstore (or `minio` in versions of Sourcegraph before v3.4.2)

If the customer is running a custom patch or an insiders version, we need the docker image tags and SHAs of the following containers:

- frontend
- precise-code-intel-worker

### Sourcegraph CLI details

The following details should be supplied if there is an issue with _uploading_ LSIF indexes to their instance.

- The Sourcegraph CLI version

```bash
$ src version
Current version: 3.26.0
Recommended Version: 3.26.1
```

### Settings

The following user settings should be supplied if there is an issue with _displaying_ code navigation results. Only these settings should be necessary, but additional settings can be supplied after private settings such as passwords or secret keys have been removed.

- `codeIntel.lsif`
- `codeIntel.traceExtension`
- `codeIntel.disableRangeQueries`
- `basicCodeIntel.includeForks`
- `basicCodeIntel.includeArchives`
- `basicCodeIntel.indexOnly`
- `basicCodeIntel.unindexedSearchTimeout`

You can get your effective user settings (site-config + user settings override) with the following Sourcegraph CLI command.

```bash
$ src api -query 'query ViewerSettings { viewerSettings { final } }'
```

If you have [jq](https://stedolan.github.io/jq/) installed, you can unwrap the data more easily.

```bash
src api -query 'query ViewerSettings { viewerSettings { final } }' | jq -r '.data.viewerSettings.final' | jq
```

### Traces

[Jaeger](/admin/observability/tracing) traces should be supplied if there is a noticeable performance issue in receiving code navigation results in the SPA. Depending on the type of user operation that is slow, we will need traces for different request types.
| **Send traces for requests** | **When latency is high** |
| --------------------------------- | ------------------------------------------------------------------- |
| `?DefinitionAndHover`, `?Ranges` | between hovering over an identifier and receiving hover text |
| `?References` | between querying references and receiving the first result |
| `?Ranges` | between hovering over an identifier and getting document highlights |

To gather a trace from the SPA, open your browser's developer tools, open the network tab, then add `?trace=1` to the URL and refresh the page. Note that if the URL contains a query fragment (such as `#L24:27`), the query string must go **before** the leading hash.

Hovering over identifiers in the source file should fire off requests to the API. Find a request matching the target type (given in the table above). If there are multiple matching requests, prefer the ones with higher latencies. The `x-trace` header should have a URL value that takes you to a detailed view of that specific request. This trace is exportable from the Jaeger UI.

Learn and understand about Search-based Code Navigation.
Sourcegraph comes with a default built-in code navigation provided by search-based heuristics. It works out of the box with all of the most popular programming languages.

## How does it work?

[Search-based Code Navigation](https://github.com/sourcegraph/sourcegraph-basic-code-intel) provides three core navigation features:

- **Jump to definition**: Performs a [symbol search](/code-search/features#symbol-search).
- **Hover documentation**: First, finds the definition. Then, extracts documentation from comments near the definition.
- **Find references**: Performs a case-sensitive word-boundary cross-repository [plain text search](/code-search/features#powerful-flexible-queries) for the given symbol

Search-based Code Navigation also filters results by file extension and by imports at the top of the file for some languages.

## What configuration settings can I apply?

The symbols container recognizes these environment variables:

| **Env Vars** | **Default** | **Details** |
| --- | --- | --- |
| `CTAGS_COMMAND` | `universal-ctags` | Ctags command (should point to universal-ctags executable compiled with JSON and seccomp support) |
| `CTAGS_PATTERN_LENGTH_LIMIT` | `250` | The maximum length of the patterns output by ctags |
| `LOG_CTAGS_ERRORS` | `false` | Log ctags errors |
| `SANITY_CHECK` | `false` | Check that go-sqlite3 works then exit 0 if it's ok or 1 if not |
| `SYMBOLS_CACHE_DIR` | `/tmp/symbols-cache` | Directory in which to store cached symbols |
| `SYMBOLS_CACHE_SIZE_MB` | `100000` | Maximum size of the disk cache (in megabytes) |
| `CTAGS_PROCESSES` | `strconv.Itoa(runtime.GOMAXPROCS(0))` | Number of concurrent parser processes to run |
| `REQUEST_BUFFER_SIZE` | `8192` | Maximum size of buffered parser request channel |
| `PROCESSING_TIMEOUT` | `2 hrs` | Maximum time to spend processing a repository |
| `MAX_TOTAL_PATHS_LENGTH` | `100000` | Maximum sum of lengths of all paths in a single call to git archive |
| `USE_ROCKSKIP` | `false` | Enables [Rockskip](/code-search/code-navigation/rockskip) for fast symbol searches and search-based code navigation on repositories specified in `ROCKSKIP_REPOS`, or repositories over `ROCKSKIP_MIN_REPO_SIZE_MB` in size |
| `ROCKSKIP_REPOS` | N/A | In combination with `USE_ROCKSKIP=true`, this specifies a comma-separated list of repositories to index using [Rockskip](/code-search/code-navigation/rockskip) |
| `ROCKSKIP_MIN_REPO_SIZE_MB` | N/A | In combination with `USE_ROCKSKIP=true`, all repos that are at least this big will be indexed using Rockskip |
| `MAX_CONCURRENTLY_INDEXING` | 4 | Maximum number of repositories being indexed at a time by [Rockskip](/code-search/code-navigation/rockskip) (also limits ctags processes) |

The default values for these environment variables come from [`config.go`](https://github.com/sourcegraph/sourcegraph-public-snapshot/blob/eea895ae1a8acef08370a5cc6f24bdc7c66cb4ed/cmd/symbols/config.go#L42-L59).
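For instance — a hedged sketch, with the repository name and size threshold purely illustrative — enabling Rockskip via environment variables on the symbols container could look like:

```bash
# Environment for the symbols container (service names vary by deployment method):
USE_ROCKSKIP=true                                  # turn Rockskip on
ROCKSKIP_REPOS=github.com/example/huge-monorepo    # always index this repo with Rockskip
ROCKSKIP_MIN_REPO_SIZE_MB=1000                     # also index any repo of at least this size
```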
Learn and understand about Precise Code Navigation.

Precise Code Navigation is an opt-in feature that is enabled from your admin settings and requires you to upload indexes for each repository to your Sourcegraph instance. Once setup is complete on Sourcegraph, precise code navigation is available for use across popular development tools:

- On the Sourcegraph web UI
- On code files on your code host, via [integrations](/integration/)
- On diffs in your code review tool, via integrations
- Via the [Sourcegraph API](/api/graphql)

Sourcegraph automatically uses Precise Code Navigation whenever available, and Search-based Code Navigation is used as a fallback when precise navigation is not available.

Precise code navigation relies on the open source [SCIP Code Intelligence Protocol](https://github.com/sourcegraph/scip), which is a language-agnostic protocol for indexing source code.

## Setting up code navigation for your codebase

Learn how to navigate your code and understand its dependencies with high precision.
Code Navigation helps you quickly understand your code, its dependencies, and symbols within the Sourcegraph file view while making it easier to move through your codebase via:

- Onboarding to codebases faster with cross-repository code navigation features like [Go to definition](/code-search/code-navigation/features#go-to-definition) and [Find references](/code-search/code-navigation/features#find-references)
- Providing complete, precise reviews, getting up to speed on unfamiliar code, and determining the impact of code changes with the confidence of compiler-accurate code navigation
- Determining the root causes quickly with precise code navigation that tracks references across repositories and package dependencies

## Quicklinks

Learn and understand more about Sourcegraph's Code Navigation features and core functionality.
Using our [integrations](/integration/), all code navigation features are available everywhere you read code. This includes browsers and GitHub pull requests.

## Popover

Popovers allow you to quickly glance at the type signature and accompanying documentation of a symbol definition without having to context switch to another source file (which may or may not be available while browsing code).

## Go to definition

When you click on the **Go to definition** button in the popover or click on a symbol's name (in the sidebar or code view), you will be navigated directly to the definition of the symbol.

## Find references

When you select **Find references** from the popover, a panel at the bottom of the page lists all references, definitions, and implementations found for both precise and search-based results (from search heuristics).

These docs list supported environment variables for Code Navigation.
## frontend

The following variables are read from the `frontend` service to control code navigation behavior exposed via the GraphQL API.

| **Name** | **Default** | **Description** |
| --- | --- | --- |
| `PRECISE_CODE_INTEL_DIAGNOSTICS_COUNT_MIGRATION_BATCH_SIZE` | `1000` | The maximum number of document records to migrate at a time. |
| `PRECISE_CODE_INTEL_DIAGNOSTICS_COUNT_MIGRATION_BATCH_INTERVAL` | `1s` | The timeout between processing migration batches. |
| `PRECISE_CODE_INTEL_DEFINITIONS_COUNT_MIGRATION_BATCH_SIZE` | `1000` | The maximum number of definition records to migrate at once. |
| `PRECISE_CODE_INTEL_DEFINITIONS_COUNT_MIGRATION_BATCH_INTERVAL` | `1s` | The timeout between processing migration batches. |
| `PRECISE_CODE_INTEL_REFERENCES_COUNT_MIGRATION_BATCH_SIZE` | `1000` | The maximum number of reference records to migrate at a time. |
| `PRECISE_CODE_INTEL_REFERENCES_COUNT_MIGRATION_BATCH_INTERVAL` | `1s` | The timeout between processing migration batches. |
| `PRECISE_CODE_INTEL_DOCUMENT_COLUMN_SPLIT_MIGRATION_BATCH_SIZE` | `100` | The maximum number of document records to migrate at a time. |
| `PRECISE_CODE_INTEL_DOCUMENT_COLUMN_SPLIT_MIGRATION_BATCH_INTERVAL` | `1s` | The timeout between processing migration batches. |
| `PRECISE_CODE_INTEL_API_DOCS_SEARCH_MIGRATION_BATCH_SIZE` | `1` | The maximum number of bundles to migrate at a time. |
| `PRECISE_CODE_INTEL_API_DOCS_SEARCH_MIGRATION_BATCH_INTERVAL` | `1s` | The timeout between processing migration batches. |
| `PRECISE_CODE_INTEL_COMMITTED_AT_MIGRATION_BATCH_SIZE` | `100` | The maximum number of upload records to migrate at a time. |
| `PRECISE_CODE_INTEL_COMMITTED_AT_MIGRATION_BATCH_INTERVAL` | `1s` | The timeout between processing migration batches. |
| `PRECISE_CODE_INTEL_REFERENCE_COUNT_MIGRATION_BATCH_SIZE` | `100` | The maximum number of upload records to migrate at a time. |
| `PRECISE_CODE_INTEL_REFERENCE_COUNT_MIGRATION_BATCH_INTERVAL` | `1s` | The timeout between processing migration batches. |

The following settings should be the same for the [`precise-code-intel-worker`](#precise-code-intel-worker) service as well.

| **Name** | **Default** | **Description** |
| --- | --- | --- |
| `PRECISE_CODE_INTEL_UPLOAD_BACKEND` | `Blobstore` | The target file service for code graph uploads. S3, GCS, and Blobstore are supported. In older versions of Sourcegraph (before v3.4.2) `Minio` was also a valid value. |
| `PRECISE_CODE_INTEL_UPLOAD_MANAGE_BUCKET` | `false` | Whether or not the client should manage the target bucket configuration |
| `PRECISE_CODE_INTEL_UPLOAD_BUCKET` | `lsif-uploads` | The name of the bucket to store LSIF uploads in |
| `PRECISE_CODE_INTEL_UPLOAD_TTL` | `168h` | The maximum age of an upload before deletion |

The following settings should be the same for the [`codeintel-auto-indexing`](#codeintel-auto-indexing) worker task as well.
| **Name** | **Default** | **Description** |
| --- | --- | --- |
| `PRECISE_CODE_INTEL_AUTO_INDEX_MAXIMUM_REPOSITORIES_INSPECTED_PER_SECOND` | `0` | The maximum number of repositories inspected for auto-indexing per second. Set to zero to disable limit. |
| `PRECISE_CODE_INTEL_AUTO_INDEX_MAXIMUM_REPOSITORIES_UPDATED_PER_SECOND` | `0` | The maximum number of repositories cloned or fetched for auto-indexing per second. Set to zero to disable limit. |
| `PRECISE_CODE_INTEL_AUTO_INDEX_MAXIMUM_INDEX_JOBS_PER_INFERRED_CONFIGURATION` | `25` | Repositories with a number of inferred auto-index jobs exceeding this threshold will be auto-indexed |

## worker

The following variables are read from the `worker` service to control code graph data behavior in asynchronous background tasks.

### `codeintel-commitgraph`

The following variables influence the behavior of the [`codeintel-commitgraph` worker task](/admin/workers#codeintel-commitgraph).

| **Name** | **Default** | **Description** |
| --- | --- | --- |
| `PRECISE_CODE_INTEL_MAX_AGE_FOR_NON_STALE_BRANCHES` | `2160h` (about 3 months) | The age after which a branch should be considered stale. Code graph indexes will be evicted from stale branches. |
| `PRECISE_CODE_INTEL_MAX_AGE_FOR_NON_STALE_TAGS` | `8760h` (about 1 year) | The age after which a tagged commit should be considered stale. Code graph indexes will be evicted from stale tagged commits. |
| `PRECISE_CODE_INTEL_COMMIT_GRAPH_UPDATE_TASK_INTERVAL` | `10s` | The frequency with which to run periodic codeintel commit graph update tasks. |

### `codeintel-auto-indexing`

The following variables influence the behavior of the [`codeintel-auto-indexing` worker task](/admin/workers#codeintel-auto-indexing).

| **Name** | **Default** | **Description** |
| --- | --- | --- |
| `PRECISE_CODE_INTEL_AUTO_INDEXING_TASK_INTERVAL` | `10m` | The frequency with which to run periodic codeintel auto-indexing tasks. |
| `PRECISE_CODE_INTEL_AUTO_INDEXING_REPOSITORY_PROCESS_DELAY` | `24h` | The minimum frequency that the same repository can be considered for auto-index scheduling. |
| `PRECISE_CODE_INTEL_AUTO_INDEXING_REPOSITORY_BATCH_SIZE` | `100` | The number of repositories to consider for auto-indexing scheduling at a time. |
| `PRECISE_CODE_INTEL_AUTO_INDEXING_POLICY_BATCH_SIZE` | `100` | The number of policies to consider for auto-indexing scheduling at a time. |
| `PRECISE_CODE_INTEL_DEPENDENCY_INDEXER_SCHEDULER_POLL_INTERVAL` | `1s` | Interval between queries to the dependency indexing job queue. |
| `PRECISE_CODE_INTEL_DEPENDENCY_INDEXER_SCHEDULER_CONCURRENCY` | `1` | The maximum number of dependency graphs that can be processed concurrently. |

The following settings should be the same for the [`frontend`](#frontend) service as well.
| **Name** | **Default** | **Description** |
| --- | --- | --- |
| `PRECISE_CODE_INTEL_AUTO_INDEX_MAXIMUM_REPOSITORIES_INSPECTED_PER_SECOND` | `0` | The maximum number of repositories inspected for auto-indexing per second. Set to zero to disable limit. |
| `PRECISE_CODE_INTEL_AUTO_INDEX_MAXIMUM_REPOSITORIES_UPDATED_PER_SECOND` | `0` | The maximum number of repositories cloned or fetched for auto-indexing per second. Set to zero to disable limit. |
| `PRECISE_CODE_INTEL_AUTO_INDEX_MAXIMUM_INDEX_JOBS_PER_INFERRED_CONFIGURATION` | `25` | Repositories with a number of inferred auto-index jobs exceeding this threshold will be auto-indexed |

### `codeintel-janitor`

The following variables influence the behavior of the [`codeintel-janitor` worker task](/admin/workers#codeintel-janitor).

| **Name** | **Default** | **Description** |
| --- | --- | --- |
| `PRECISE_CODE_INTEL_UPLOAD_TIMEOUT` | `24h` | The maximum time an upload can be in the 'uploading' state. |
| `PRECISE_CODE_INTEL_CLEANUP_TASK_INTERVAL` | `1m` | The frequency with which to run periodic codeintel cleanup tasks. |
| `PRECISE_CODE_INTEL_COMMIT_RESOLVER_TASK_INTERVAL` | `10s` | The frequency with which to run the periodic commit resolver task. |
| `PRECISE_CODE_INTEL_COMMIT_RESOLVER_MINIMUM_TIME_SINCE_LAST_CHECK` | `24h` | The minimum time the commit resolver will re-check an upload or index record. |
| `PRECISE_CODE_INTEL_COMMIT_RESOLVER_BATCH_SIZE` | `100` | The maximum number of unique commits to resolve at a time. |
| `PRECISE_CODE_INTEL_COMMIT_RESOLVER_MAXIMUM_COMMIT_LAG` | `0s` | The maximum acceptable delay between accepting an upload and its commit becoming resolvable. Be cautious about setting this to a large value, as uploads for unresolvable commits will be retried periodically during this interval. |
| `PRECISE_CODE_INTEL_RETENTION_REPOSITORY_PROCESS_DELAY` | `24h` | The minimum frequency that the same repository's uploads can be considered for expiration. |
| `PRECISE_CODE_INTEL_RETENTION_REPOSITORY_BATCH_SIZE` | `100` | The number of repositories to consider for expiration at a time. |
| `PRECISE_CODE_INTEL_RETENTION_UPLOAD_PROCESS_DELAY` | `24h` | The minimum frequency that the same upload record can be considered for expiration. |
| `PRECISE_CODE_INTEL_RETENTION_UPLOAD_BATCH_SIZE` | `100` | The number of uploads to consider for expiration at a time. |
| `PRECISE_CODE_INTEL_RETENTION_POLICY_BATCH_SIZE` | `100` | The number of policies to consider for expiration at a time. |
| `PRECISE_CODE_INTEL_RETENTION_COMMIT_BATCH_SIZE` | `100` | The number of commits to process per upload at a time. |
| `PRECISE_CODE_INTEL_RETENTION_BRANCHES_CACHE_MAX_KEYS` | `10000` | The number of maximum keys used to cache the set of branches visible from a commit. |
| `PRECISE_CODE_INTEL_CONFIGURATION_POLICY_MEMBERSHIP_BATCH_SIZE` | `100` | The maximum number of policy configurations to update repository membership for at a time. |
| `PRECISE_CODE_INTEL_DOCUMENTATION_SEARCH_CURRENT_MINIMUM_TIME_SINCE_LAST_CHECK` | `24h` | The minimum time the documentation search current janitor will re-check records for a unique search key. |
| `PRECISE_CODE_INTEL_DOCUMENTATION_SEARCH_CURRENT_BATCH_SIZE` | `100` | The maximum number of unique search keys to clean up at a time. |

## precise-code-intel-worker

The following variables are read from the `precise-code-intel-worker` service to control code graph data upload processing behavior.

| **Name** | **Default** | **Description** |
| --- | --- | --- |
| `PRECISE_CODE_INTEL_WORKER_POLL_INTERVAL` | `1s` | Interval between queries to the upload queue. |
| `PRECISE_CODE_INTEL_WORKER_CONCURRENCY` | `1` | The maximum number of indexes that can be processed concurrently. |
| `PRECISE_CODE_INTEL_WORKER_BUDGET` | `0` | The amount of compressed input data (in bytes) a worker can process concurrently. Zero acts as an infinite budget. |

The following settings should be the same for the [`frontend`](#frontend) service as well.

| **Name** | **Default** | **Description** |
| --- | --- | --- |
| `PRECISE_CODE_INTEL_UPLOAD_BACKEND` | `Blobstore` | The target file service for code graph data uploads. S3, GCS, and Blobstore are supported. In older versions of Sourcegraph (before v3.4.2) `Minio` was also a valid value. |
| `PRECISE_CODE_INTEL_UPLOAD_MANAGE_BUCKET` | `false` | Whether or not the client should manage the target bucket configuration |
| `PRECISE_CODE_INTEL_UPLOAD_BUCKET` | `lsif-uploads` | The name of the bucket to store LSIF uploads in |
| `PRECISE_CODE_INTEL_UPLOAD_TTL` | `168h` | The maximum age of an upload before deletion |

Learn and understand how auto-indexing works.
With Sourcegraph deployments supporting [executors](/admin/executors/), your repository contents can be automatically analyzed to produce a code graph index file. Once [auto-indexing is enabled](/code-search/code-navigation/auto_indexing#enable-auto-indexing) and [auto-indexing policies are configured](/code-search/code-navigation/auto_indexing#configure-auto-indexing), repositories will be periodically cloned into an executor sandbox, analyzed, and the resulting index file will be uploaded back to the Sourcegraph instance.

Auto-indexing is currently available for Go, TypeScript, JavaScript, Python, Ruby, and JVM repositories. See also [dependency navigation](/code-search/code-navigation/features#dependency-navigation) for instructions on how to set up cross-dependency navigation depending on what language ecosystem you use.

## Enable auto-indexing

The following docs explain how to turn on [auto-indexing](/code-search/code-navigation/auto_indexing) on your Sourcegraph instance to enable [precise code navigation](/code-search/code-navigation/precise_code_navigation).

### Deploy executors

This guide is meant to provide instructions for how to enable precise code navigation for any programming language.
The general steps for enabling precise code navigation are as follows:

## Install Sourcegraph CLI

The [Sourcegraph CLI](/cli/) is used for uploading SCIP data to your Sourcegraph instance (replace `linux` with `darwin` for macOS):

```sh
curl -L https://sourcegraph.com/.api/src-cli/src_linux_amd64 -o /usr/local/bin/src
chmod +x /usr/local/bin/src
```

## Install SCIP indexer

An SCIP indexer is a command line tool that performs code analysis of source code and outputs a file with metadata containing all the definitions, references, and hover documentation in a project in SCIP. The SCIP file is then uploaded to Sourcegraph to power code navigation features.

Install the [indexer](/code-search/code-navigation/writing_an_indexer) for the required programming language of your repository by following the instructions in the indexer's README.

## Generate SCIP data

To generate the SCIP data for your repository, run the command in the _generate SCIP data_ step found in the README of the installed indexer.

## Upload SCIP data

The upload step is the same for all languages. Make sure the current working directory is a path inside your repository, then use the Sourcegraph CLI to upload the SCIP file:

### To a private Sourcegraph instance (on prem)

```sh
SRC_ENDPOINT=https://sourcegraph.mycompany.com src code-intel upload -file=index.scip
```

### To cloud based Sourcegraph.com

```sh
src code-intel upload -github-token=YourGitHubToken -file=index.scip
```

The `src-cli` upload command will try to infer the repository and git commit by invoking git commands on your local clone. If git is not installed, is older than version 2.7.0, or you are running on code outside of a git clone, you will need to also specify the `-repo` and `-commit` flags explicitly.
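For example — the repository name and commit hash here are placeholders — an explicit invocation might look like:

```sh
src code-intel upload -file=index.scip \
  -repo=github.com/example/repo \
  -commit=0123456789abcdef0123456789abcdef01234567
```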
This page describes how to create an index for JavaScript and TypeScript projects and upload it to Sourcegraph.

We will use [`scip-typescript`](https://github.com/sourcegraph/scip-typescript) to create the index and the [Sourcegraph CLI](https://github.com/sourcegraph/src-cli) to upload it to Sourcegraph.

## Indexing in CI using scip-typescript directly

In this approach, you will directly install `scip-typescript` and `src-cli` in CI. This is particularly useful if you are already using some other Docker image for your build. Here is an example using GitHub Actions to create and upload an index for a TypeScript project.

```yaml
jobs:
  create-index-and-upload:
    # prevent forks of this repo from uploading lsif indexes
    if: github.repository == '
```
## Automated indexing

Sourcegraph provides the Docker images `sourcegraph/scip-go` and `sourcegraph/src-cli` so that you can easily automate indexing in your favorite CI framework. Note that the `scip-go` image bundles `src-cli`, so the second image may not be necessary.

The following examples show you how to set up automated indexing in a few popular frameworks. You'll need to substitute the indexer and upload commands with what works for your project locally.

If you implement automated indexing in a different framework, feel free to edit this page with instructions!

### GitHub Actions

```yaml
on:
  - push
jobs:
  scip-go:
    # this line will prevent forks of this repo from uploading lsif indexes
    if: github.repository == '
```

How Code Graph indexers analyze code and generate an index file.
[Code graph indexers](/code-search/code-navigation/writing_an_indexer) analyze source code and generate an index file, which is subsequently [uploaded to a Sourcegraph instance](/code-search/code-navigation/how-to/index_other_languages#upload-scip-data) using [Sourcegraph CLI](/cli/) for processing. Once processed, this data becomes available for [precise code navigation queries](/code-search/code-navigation/precise_code_navigation).

## Lifecycle of an upload

Uploaded index files are processed asynchronously from a queue. Each upload has an attached `state` that can change as work associated with that data is performed. The following diagram shows the possible transition paths from one `state` of an upload to another.

The typical sequence for a successful upload is: `UPLOADING_INDEX`, `QUEUED_FOR_PROCESSING`, `PROCESSING`, and `COMPLETED`.

In some cases, the processing of an index file may fail due to issues such as malformed input or transient network errors. When this happens, an upload enters the `PROCESSING_ERRORED` state. Such errored uploads may undergo multiple retry attempts before moving into a permanent error state.

At any point, an uploaded record may be deleted. This can happen due to various reasons, such as being replaced by a newer upload, due to the age of the upload record, or by explicit deletion initiated by the user. When deleting a record that could be used for code navigation queries, it transitions first into the `DELETING` state. This temporary state allows Sourcegraph to manage the set of Code Graph uploads smoothly.

Changing the state of an upload to or from `COMPLETED` requires updating the [repository commit graph](#repository-commit-graph). This process can be computationally expensive for the worker service or Postgres database.

## Lifecycle of an upload (via UI)

After successfully uploading an index file, the Sourcegraph CLI will provide a URL on the target instance to track the progress of that upload.

Interact with Sourcegraph and Cody from the command line interface.
`src` is a command line interface to Sourcegraph that allows you to search code from your terminal, create and apply batch changes, and manage and administrate repositories, users, and more.

[Cody CLI](/cody/clients/install-cli) is the same technology that powers the Cody IDE plugins but available from the command line. Use Cody CLI for ad-hoc exploration in your terminal or as part of scripts to automate your workflows.
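As a quick, illustrative sketch of the `src` CLI — the endpoint and query here are placeholders — a typical session might look like:

```bash
# Authenticate against your instance (prompts for an access token)
src login https://sourcegraph.example.com

# Search code from the terminal
src search 'repo:^github\.com/example/ lang:go panic'
```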
## Quick Links

This guide explains how to use Comby to update Go import statements in a batch change.

This batch change example rewrites Go import paths for the `log15` package from `gopkg.in/inconshreveable/log15.v2` to `github.com/inconshreveable/log15` using [Comby](https://comby.dev/).

It can handle single-package import statements like the following:

```go
import "gopkg.in/inconshreveable/log15.v2"
```

Single-package imports with an alias:

```go
import log15 "gopkg.in/inconshreveable/log15.v2"
```

And multi-package import statements with or without an alias:

```go
import (
	"io"

	"github.com/pkg/errors"
	"gopkg.in/inconshreveable/log15.v2"
)
```
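As a rough sketch of the core rewrite — a real batch spec wraps this in `steps`, and this template only covers the unaliased single-package case above — the Comby invocation looks something like:

```bash
# Rewrite the plain single-package import form across all Go files, in place.
comby 'import "gopkg.in/inconshreveable/log15.v2"' 'import "github.com/inconshreveable/log15"' .go -i
```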
## Prerequisites

Create a batch change to update Dockerfiles in each repository.

In this example, Batch Changes will allow you to update the base images used in your Dockerfiles across many repositories in just a few commands.

First you will create [a batch spec](/batch-changes/create-a-batch-change) that:

- Finds `Dockerfiles` that make use of `google/dart:2.x` base images, and
- Changes those `Dockerfiles` to use `google/dart:2.10`

The batch spec and instructions here can [easily be adapted to update other base images](#updating-other-base-images).

## Prerequisites

Learn in detail about how to update your Batch Changes.
Updating a batch change works by applying a batch spec to an **existing** batch change in the same namespace.

Since batch changes are uniquely identified by their [`name`](/batch-changes/batch-spec-yaml-reference#name) and the namespace in which they were created, you can edit any other part of a batch spec and apply it again. When a new batch spec is applied to an existing batch change, the existing batch change is updated, and its changesets are updated to match the new desired state.

## Requirements

To update a changeset, you need:

- [Admin permissions for the batch change](/batch-changes/permissions-in-batch-changes#permission-levels-for-batch-changes)
- Write access to the changeset's repository (on the code host)
- A [personal access token](/batch-changes/configuring-credentials#personal-access-tokens) or a [global service account token](/batch-changes/configuring-credentials#global-service-account-tokens) configured for the code host

For more information, see [Code host interactions in Batch Changes](/batch-changes/permissions-in-batch-changes#code-host-interactions-in-batch-changes).

## Preview and apply a new batch spec

To update a batch change after previewing the changes, do the following:

- Edit the [batch spec](/batch-changes/batch-spec-yaml-reference) with which you created the batch change to include the changes you want to make to the batch change. For example, change [the commit message in the `changesetTemplate`](/batch-changes/batch-spec-yaml-reference#changesettemplate-commit-message), or add a new changeset id [to the importedChangesets](/batch-changes/references/batch_spec_yaml_reference#importchangesets), or [modify the repositoriesMatchingQuery](/batch-changes/references/batch_spec_yaml_reference#on-repositoriesmatchingquery) to return different search results
- Use the [Sourcegraph CLI (`src`)](https://github.com/sourcegraph/src-cli) to execute and preview the batch spec

```bash
src batch preview -f YOUR_BATCH_SPEC.yaml
```

- Open the URL to preview the changes that will be made by applying the new batch spec
- Click **Apply** to update the batch change. All changesets on your code host will be updated to the desired state shown in the preview.

## Apply a new batch spec directly

To update a batch change directly, without preview, do the following:

- Edit the [batch spec](/batch-changes/batch-spec-yaml-reference) with which you created the batch change to include the changes you want to make to the batch change
- Use the [Sourcegraph CLI (`src`)](https://github.com/sourcegraph/src-cli) to execute, upload, and apply the batch spec

```bash
src batch apply -f YOUR_BATCH_SPEC.yaml
```

The new batch spec will be applied directly, and the batch change and its changesets will be updated.

## How batch change updates are processed

Changes in the batch spec that affect the batch change, such as the [`description`](/batch-changes/batch-spec-yaml-reference#description), are applied directly when you apply the new batch spec.

Changes affecting the changesets are processed asynchronously to update the desired state. Different fields are processed differently.
Here are some examples:

- When the diff or attributes that directly affect the resulting commit of a changeset (such as the [`changesetTemplate.commit.message`](/batch-changes/batch-spec-yaml-reference##changesettemplatecommitmessage) or the [`changesetTemplate.commit.author`](/batch-changes/batch-spec-yaml-reference#changesettemplatecommitauthor)) are changed and the changeset has been published, the commit on the code host will be overwritten by a new commit that includes the updated diff
- When the [`changesetTemplate.title`](/batch-changes/batch-spec-yaml-reference#changesettemplatetitle) or the [`changesetTemplate.body`](/batch-changes/batch-spec-yaml-reference#changesettemplatecommitauthor) are changed and the changeset has been published, the changeset on the code host will be updated accordingly
- When the [`changesetTemplate.branch`](/batch-changes/batch-spec-yaml-reference#changesettemplatetitle) is changed after the changeset has been published on the code host, the existing changeset will be closed on the code host and a new one, with the new branch, will be created
- When the batch spec is changed in such a way that no diff is produced in a repository in which the batch change has already created and published a changeset, the existing changeset will be closed on the code host and archived in the batch change
- When the changeset has been published and the batch spec is changed in such a way that a commit on the code host will be overwritten, any commits that have been manually added to the changeset on the code host will be deleted

These docs help you troubleshoot and eliminate problems when trying to execute a batch spec with `src batch [apply|preview]` or when managing an already created batch change and its changesets.
## Executing batch change steps

Since `src batch [apply|preview]` executes a batch spec on the host machine on which it is executed (and not on the Sourcegraph instance), there are a lot of different possibilities that can cause it to fail, from missing dependencies to missing credentials when trying to connect to the Sourcegraph instance. The following questions can be used to determine what's causing the problem.

Learn how to track your existing changesets.
Batch Changes allow you not only to [publish changesets](/batch-changes/publishing-changesets) but also to **import and track changesets** that already exist on different code hosts. That allows you to get an overview of the status of multiple changesets, with the ability to filter and drill down into the details of a specific changeset.

## Requirements

- Sourcegraph instance with repositories in it. See the [Quickstart](/#quick-install) guide on how to set up a Sourcegraph instance
- A [global service account token](/batch-changes/configuring-credentials#global-service-account-tokens) for Batch Changes (**a personal access token cannot currently be used for importing changesets**)

## Importing changesets into a batch change

To track existing changesets in a batch change, you add them to the [batch spec](/batch-changes/create-a-batch-change#writing-a-batch-spec) under the `importChangesets` property and apply the batch spec.

The following example batch spec tracks multiple existing changesets in different repositories on different code hosts:

```yaml
name: track-important-milestone
description: Track all changesets related to our important milestone

importChangesets:
  - repository: github.com/sourcegraph/sourcegraph
    externalIDs: [15397, 15590, 15597, 15583, 15806, 15798]
  - repository: github.com/sourcegraph/src-cli
    externalIDs: [378, 373, 374, 369, 368, 361, 380]
  - repository: bitbucket.sgdev.org/SOUR/vegeta
    externalIDs: [8]
  - repository: gitlab.sgdev.org/sourcegraph/src-cli
    externalIDs: [113, 119]
```

You can learn more about how to create a batch spec from our [creating a batch change](/batch-changes/create-a-batch-change) docs.

Learn what configuration permissions site admins have with Batch Changes.
Using Batch Changes requires a [code host connection](/admin/external_services/) to a supported code host (currently GitHub, Bitbucket Server/Bitbucket Data Center, GitLab, and Bitbucket Cloud).

Site admins can configure the following when setting up and disabling Batch Changes:

## Set up Batch Changes

- [Configure which users have access to Batch Changes](/admin/access_control/batch_changes) (Beta): By default, all users can create and view batch changes, but only the batch change's author or a site admin can administer a given batch change
  - Additionally, you can also [customize org settings](/admin/config/batch_changes#enable-organization-members-to-administer) to allow members of an organization to share administration privileges over batch changes created in that organization
- (Optional) [Configure repository permissions](/admin/permissions/), which Batch Changes will follow
- [Configure credentials](/batch-changes/configuring-credentials)
- [Set up incoming webhooks](/admin/config/webhooks/incoming) to make sure changesets sync fast. Learn more about [Batch Changes effect on code host rate limits](/batch-changes/requirements#batch-changes-effect-on-code-host-rate-limits)
- Configure any desired optional features, such as:
  - [Rollout windows](/admin/config/batch_changes#rollout-windows), which control the rate at which Batch Changes will publish changesets on code hosts
  - [Forks](/admin/config/batch_changes#forks), which push branches created by Batch Changes onto forks of the upstream repository instead of the repository itself
  - [Outgoing webhooks](/admin/config/webhooks/outgoing), which publish events related to batch changes and changesets to enable deeper integrations with your other tools and systems
  - [Auto-delete branch on merge/close](/admin/config/batch_changes#automatically-delete-branches-on-merge-close), which automatically deletes branches created by Batch Changes when changesets are merged or closed
  - [Commit signing for GitHub](/admin/config/batch_changes#commit-signing-for-github), which signs commits created by Batch Changes via a GitHub App (Beta)
  - [Batch spec library](/admin/config/batch_changes#batch-spec-library), which helps your users write batch specs and follow best practices

## Disable Batch Changes

- [Disable Batch Changes](/batch-changes/permissions-in-batch-changes#disabling-batch-changes)
- [Disable Batch Changes for non-site-admin users](/batch-changes/permissions-in-batch-changes#disabling-batch-changes-for-non-site-admin-users)

Learn how to run Batch Changes server-side using executors and file mounts.
Create a batch change that changes specific words in every repository.
This example shows you how to create [a batch spec](/batch-changes/create-a-batch-change) that replaces the words `whitelist` and `blacklist` with `allowlist` and `denylist` in every Markdown file across your entire code base. The batch spec can be easily changed to search and replace other terms in other file types.
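A rough sketch of such a batch spec could look like the following. The spec name, search query, and `sed` loop are illustrative assumptions rather than the exact published example:

```yaml
name: replace-non-inclusive-terms
description: Replace `whitelist` and `blacklist` with `allowlist` and `denylist` in Markdown files

on:
  # Illustrative query; adjust it to match the files you want to target
  - repositoriesMatchingQuery: whitelist OR blacklist lang:markdown

steps:
  # Rewrite both terms in every Markdown file of the repository
  - run: |
      find . -type f -name '*.md' | while IFS= read -r file; do
        sed -i 's/whitelist/allowlist/g; s/blacklist/denylist/g' "$file"
      done
    container: alpine:3

changesetTemplate:
  title: Replace whitelist/blacklist with allowlist/denylist
  body: Replaces non-inclusive terminology in Markdown files
  branch: batch-changes/replace-terms
  commit:
    message: Replace whitelist/blacklist with allowlist/denylist
  published: false
```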
## Prerequisites

This document describes the requirements for running Batch Changes.

Batch Changes has specific requirements for running on the Sourcegraph server version, its connected code hosts, and developer environments.

## Sourcegraph Server

While the latest version of the Sourcegraph server is always recommended, **version 3.22** is the minimum version required to run Batch Changes.

## Code hosts

Batch Changes is compatible with the following code hosts:

* GitHub.com
* GitHub Enterprise 2.20 and later
* GitLab 12.7 and later (burndown charts are only supported with 13.2 and later)
* Bitbucket Server 5.7 and later, Bitbucket Data Center 7.6 and later
* Bitbucket Cloud (bitbucket.org)
* Azure DevOps Services
* Gerrit 3.1.7 and later
* Perforce (Beta)

For Sourcegraph to interface with these, admins and users must first [configure credentials](/batch-changes/configuring-credentials) for each relevant code host.

This guide explains how to use Comby to refactor Go code in a batch change.
To refactor Go code using Comby in a batch change, use Sourcegraph's [structural search](/code-search/types/structural) and [Comby](https://comby.dev/) to rewrite Go statements. From:

```go
fmt.Sprintf("%d", number)
```

To:

```go
strconv.Itoa(number)
```

The statements are semantically equivalent, but the latter is more precise. Since the replacements could require importing the `strconv` package, it uses [`goimports`](https://godoc.org/golang.org/x/tools/cmd/goimports) to update the list of imported packages in all `*.go` files.
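As a sketch of how the two tools combine in a batch spec, the `steps` could look like this (the two steps below mirror the templating example shown later in these docs, including the container images and templating variables used there):

```yaml
steps:
  # Rewrite fmt.Sprintf("%d", ...) to strconv.Itoa(...) in the files
  # matched by the structural search query
  - run: comby -in-place 'fmt.Sprintf("%d", :[v])' 'strconv.Itoa(:[v])' ${{ join repository.search_result_paths " " }}
    container: comby/comby
  # Fix up the import list in every file the previous step modified
  - run: goimports -w ${{ join previous_step.modified_files " " }}
    container: unibeautify/goimports
```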
## Prerequisites

Learn what happens when you need to re-execute batch specs multiple times via the Sourcegraph CLI `src` command.

## Idempotency as goal

One goal behind the [design of Batch Changes](/batch-changes/design) and the `src batch [apply|preview]` commands in the [Sourcegraph CLI](/cli/) is **idempotency**: re-applying the same batch spec produces the same batch change and changesets. In practical terms, a user should be able to run `src batch apply -f my.batch.yaml` multiple times, and if `my.batch.yaml` didn't change, the batch change shouldn't change.

## Exceptions

That can only work if the inputs to the batch spec stay the same, too. Since a batch spec's [`on.repositoriesMatchingQuery`](/batch-changes/batch-spec-yaml-reference#on-repositoriesmatchingquery) uses Sourcegraph's search functionality to **dynamically** produce a list of repositories in which to execute the [`steps`](/batch-changes/batch-spec-yaml-reference#steps), the list of repositories might change between executions of `src batch apply`. If that's the case, the `steps` need to be re-executed in any newly found repositories. Changesets in repositories that are no longer found will be closed and archived in the batch change (see [Updating a batch change](/batch-changes/update-a-batch-change) for more details). In unchanged repositories, the [Sourcegraph CLI](/cli/) tries to use cached results to avoid re-executing the `steps`.

## Local caching

Whenever the [Sourcegraph CLI](/cli/) re-executes the same batch spec, it checks a **local cache** to see if it already executed the same [`steps`](/batch-changes/batch-spec-yaml-reference#steps) in a given repository. Whether a cached result can be used depends on multiple things:

- The repository's default branch's revision didn't change (because if new commits have been pushed to the repository, re-executing the `steps` might lead to different results)
- The `steps` themselves didn't change, including all their inputs, such as [`steps.env`](/batch-changes/batch-spec-yaml-reference#environment-array), and the `steps.run` field (which can change between executions if it uses [templating](/batch-changes/batch-spec-templating) and is dynamically built from search results)

That also means that the [Sourcegraph CLI](/cli/) can use cached results when re-executing a **changed batch spec**, as long as the changes didn't affect the `steps` and the results they produce. For example, if only the [`changesetTemplate.title`](/batch-changes/batch-spec-yaml-reference#changesettemplate-title) field has been changed, cached results can be used, since that field doesn't have any influence on the `steps` and their results.
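To make that concrete, here is a hypothetical `changesetTemplate` excerpt illustrating the cache behavior described above:

```yaml
changesetTemplate:
  # Changing only this field between two executions does not invalidate
  # the local cache, because it has no influence on the `steps` or their
  # results; `src` reuses the cached diffs instead of re-executing.
  title: Update dependency
  body: Updates the dependency to the latest version
  branch: update-dependency
  commit:
    message: Update dependency
  published: false
```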
In this quickstart guide, you'll learn how to get started and create your first Batch Change in a few minutes. Here, you'll create a Sourcegraph batch change that appends text to `README` files in all of your repositories.
## Requirements

- A Sourcegraph instance with some repositories in it
- A local environment matching the [requirements](/batch-changes/requirements) to create batch changes with the Sourcegraph CLI

## Install the Sourcegraph CLI

To create Batch Changes, first [install the Sourcegraph CLI](/cli/) (`src`), using the version of `src` that is compatible with your Sourcegraph instance. To do so, run the following commands in your terminal:

### For macOS

```bash
curl -L https://
```

Learn how to publish changesets to the code host.
After you've [created a batch change](/batch-changes/create-a-batch-change) with the `published` field set to `false` or omitted in its batch spec, you can see a preview of the changesets (e.g., GitHub pull requests) that will be created on the code host once they're published.

To create these changesets on the code hosts, you need to publish them.

## Requirements

To publish a changeset, you need:

- [Admin permissions for the batch change](/batch-changes/permissions-in-batch-changes#permission-levels-for-batch-changes)
- Write access to the changeset's repository (on the code host)
- A [personal access token](/batch-changes/configuring-credentials#personal-access-tokens) or a [global service account token](/batch-changes/configuring-credentials#global-service-account-tokens) configured for the code host

There are two publishing workflows, compared below:

| Workflow | Pros | Cons |
|---|---|---|
| Setting `published` in the batch spec | | |
| Publishing from the UI | | |
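For the first workflow, publishing is controlled directly in the batch spec. A minimal sketch:

```yaml
changesetTemplate:
  # [...]
  # Publish all changesets on the code hosts when the spec is applied.
  # Setting `false` (or omitting the field) keeps them as previews instead.
  published: true
```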
Learn how to control permission levels among your team members.
You can customize access to a batch change and propose changes to repositories with varying permission levels. Other users can view the batch change's proposed changes to a repository if they can view that repository; otherwise, they can see only limited, non-identifying information about the change.

## Permission levels for Batch Changes

The permission levels for a batch change are:

- **Read:** For people who need to view the batch change
- **Admin:** For people who need full access to the batch change, including editing, closing, and deleting it

To see the batch change's proposed changes on a repository, a user needs **read** access to that repository. Read or admin access to the batch change does not entitle a user to view all of the batch change's changesets.

This section is about Batch Changes, which helps you automate and ship large-scale code changes across many repositories and code hosts.
Batch Changes helps you ship large-scale code changes across many repositories and code hosts. You can create pull requests on all affected repositories, and it tracks their progress until they're all merged. You can also preview the changes and update them at any time.

## Get Started

This document helps you debug and troubleshoot the writing and execution of Batch Specs with the [Sourcegraph CLI `src`](/cli/) command.
Here, you will learn what happens when a user applies or previews a batch spec by running the `src batch apply` or `src batch preview` commands.

## Overview

`src batch apply` and `src batch preview` execute a batch spec the same way by following these steps:

- [How src executes a Batch Spec](#how-src-executes-a-batch-spec)
- [Overview](#overview)
- [Parse the batch spec](#parse-the-batch-spec)
- [Resolving namespace](#resolving-namespace)
- [Preparing container images](#preparing-container-images)
- [Resolving repositories](#resolving-repositories)
- [Executing steps](#executing-steps)
  - [Download archive and prepare](#download-archive-and-prepare)
  - [Run the steps](#run-the-steps)
  - [Create final diff](#create-final-diff)
  - [Saving a changeset spec](#saving-a-changeset-spec)
- [Importing changesets](#importing-changesets)
- [Sending changeset specs](#sending-changeset-specs)
- [Sending the batch spec](#sending-the-batch-spec)
- [Preview or apply the batch spec](#preview-or-apply-the-batch-spec)

The only difference is the last step, previewing or applying the batch spec: the `src batch apply` command applies the batch spec, while `src batch preview` prints a URL that gives you a preview of what would change if you applied it. Let's learn about each step in more detail.

## Parse the batch spec

`src` reads in, parses, and validates the batch spec YAML specified with the `-f` flag. It validates the batch spec against its [schema](https://github.com/sourcegraph/src-cli/blob/main/schema/batch_spec.schema.json). It then performs some semantic checks to make sure that, for example, `changesetTemplate` is specified if `steps` are specified, or that no feature is used that's not supported by the Sourcegraph instance.

## Resolving namespace

`src` resolves the given namespace in which to apply/preview the batch spec by sending a GraphQL request to the Sourcegraph instance to fetch the ID for the given namespace name. If no namespace is specified with `-namespace` (or `-n`), then the currently authenticated user is used as the namespace. See [Connect to Sourcegraph](/cli/quickstart#connect-to-sourcegraph) in the CLI docs for details on how to authenticate.

## Preparing container images

If the batch spec contains `steps`, then for each step, `src` checks its `container` image to see whether it's locally available. To do so, it runs `docker image inspect --format {{.Id}} -- <image>`.

Learn how to fix errors when you run a changeset.
Publishing a changeset can result in an error for different reasons. Sometimes the error can be fixed by automatically retrying the publishing of the changeset, but other errors require the user to take some action.

Errored changesets that are marked as **Retrying** are automatically retried. Changesets that are marked as **Failed** can be [retried manually](#manual-retrying-of-errored-changesets).

## Types of errors

Two types of errors can occur when running a changeset:

- [Automatic retrying](#automatic-retrying-of-errored-changesets)
- [Manual retrying](#manual-retrying-of-errored-changesets)

## Automatic retrying of errored changesets

If an operation on a changeset results in an error that looks like it could be transient or resolvable if retried, Sourcegraph will automatically retry that operation. Typically, only internal errors and errors from the code host with HTTP status codes in the `500` range are retried. This is indicated by the changeset entering a **Retrying** state. Sourcegraph will automatically retry the operation up to ten times.

Examples of errors that can be fixed by automatic retrying:

- Connecting to the code host failed
- The code host responds with an error when trying to open a pull request
- Internal network errors

## Manual retrying of errored changesets

Changesets that are marked as **Failed** won't be retried automatically. That's either because the number of automatic retries has been exhausted or because retrying won't fix the error without user intervention. When a changeset fails to publish, the user can click **Retry** on the error message; no re-applying is needed.

Additionally, to retry all **Failed** (or even **Retrying**) changesets manually, you can re-apply the batch spec in one of two ways:

1. Preview and re-apply the batch spec in the UI by running the following command, then click the printed URL to apply the uploaded batch spec:

```bash
src batch preview -f YOUR_BATCH_SPEC.batch.yaml
```

2. Re-apply directly by running the following command:

```bash
src batch apply -f YOUR_BATCH_SPEC.batch.yaml
```

Find answers to the most common questions about Batch Changes.
## What are the requirements for running Batch Changes?

Batch Changes has specific requirements for the Sourcegraph server version, its connected code hosts, and developer environments. Read the [requirements documentation](/batch-changes/requirements) for more details.

## What happens when a user is deleted?

When a user is deleted, their Batch Changes become inaccessible in the UI, but the data is not permanently deleted. This allows the batch changes to be recovered if the user is restored. You can change the ownership of a batch change before or after soft deletion by using the instructions [here](https://help.sourcegraph.com/hc/en-us/articles/28471221973133-Changing-Ownership-Of-A-Batch-Change-Before-User-Deletion). However, if the user deletion is permanent (using the "Delete forever" option), deleting both account and data, then the associated Batch Changes are also permanently deleted from the database. This frees storage space and removes dangling references.

## Are there any limitations with the Batch Changes feature?

- **Code hosts**: Batch Changes currently supports **GitHub**, **GitLab**, and **Bitbucket Server and Bitbucket Data Center** repositories. If you want to use Batch Changes on other code hosts, [let us know](https://about.sourcegraph.com/contact).
- **Server-side execution**: Batch change steps are run locally (in the [Sourcegraph CLI](https://github.com/sourcegraph/src-cli)) or [server-side](/batch-changes/server-side) `Beta`. For this reason, the APIs for creating and updating a batch change require you to upload all of the changeset specs (which are produced by executing the batch spec locally). Also, see [how scalable Batch Changes are](/batch-changes/faq#how-scalable-are-batch-changes-how-many-changesets-can-i-create).
- **Multi-user access**: It is not yet possible for multiple users to edit the same batch change that was created under an organization.
- It is not yet possible to reuse a branch in a repository across multiple batch changes.
- The only type of user credential supported by Sourcegraph right now is a [personal access token](/batch-changes/configuring-credentials), either per user or via a global service account. Further credential types may be supported in the future.

## What happens if my batch change creation breaks down at 900 changesets out of 1,000? Do I have to re-run it?

The default behavior of Batch Changes is to stop creating the diff on repo errors. You can ignore errors by adding the [`-skip-errors`](/cli/references/batch/preview) flag to the `src batch preview` command.

## Can I close a batch change and leave the changesets open?

Yes. A confirmation page shows you all the actions that will occur on the various changesets in the batch change after you close it. Open changesets will be marked "Kept open", meaning the batch change won't alter them. See [closing a batch change](/batch-changes/delete-a-batch-change#close-a-batch-change).

## How scalable are Batch Changes? How many changesets can I create?

Batch Changes can create tens of thousands of changesets; this is something we test internally. Some known limitations include:

- Since diffs are created locally by running a Docker container, performance depends on the capacity of your machine. See [How `src` executes a batch spec](/batch-changes/how-src-executes-a-batch-spec)
- Batch Changes creates changesets in parallel locally. You can set the maximum number of parallel jobs with [`-j`](/cli/references/batch/apply)
- Manipulating (commenting, notifying users, etc.) changesets at that scale can be clumsy

## How long does it take to create a batch change?

A rule of thumb:

- Measure the time it takes to run your change container on a typical repository
- Multiply by the number of repositories
- Divide by the number of changeset creation jobs running in parallel, set by the [`-j`](/cli/references/batch/apply) CLI flag. It defaults to `GOMAXPROCS`, [roughly](https://golang.org/pkg/runtime/#NumCPU) the number of available cores
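As a purely illustrative worked example (the numbers below are assumptions, not measurements): if each repository takes about 30 seconds, you target 1,000 repositories, and 8 jobs run in parallel, then

$$
\frac{30\,\text{s} \times 1{,}000\ \text{repositories}}{8\ \text{jobs}} = 3{,}750\,\text{s} \approx 62\ \text{minutes}
$$

Find examples of common use cases for Batch Changes.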
- [Refactoring Go code using Comby](/batch-changes/refactor-go-comby)
- [Updating Go import statements using Comby](/batch-changes/updating-go-import-statements)
- [Update base images in Dockerfiles](/batch-changes/update-base-images-in-dockerfiles)
- [Search and replace specific terms](/batch-changes/search-and-replace-specific-terms)

Take a look at our [examples repository](https://github.com/sourcegraph/batch-change-examples) for a collection of ready-to-be-executed batch specs.

Learn in detail about how to close, delete, and opt out of a Batch Change.
You can close a batch change when you no longer need it, when all changes have been merged, or when you decide not to make changes. A closed batch change still appears in the [batch changes list](/batch-changes/create-a-batch-change#viewing-batch-changes). To completely remove it, you can delete the batch change.

Any user with [admin access to the batch change](/batch-changes/permissions-in-batch-changes#permission-levels-for-batch-changes) can close or delete it.

## Close a batch change

To close a batch change:

- Click the **Batch Changes** icon in the top navigation bar
- From the list, click the batch change that you'd like to close or delete
- In the top right, click the **Close** button
- Select whether you want to close all of the batch change's open changesets (e.g., closing all associated GitHub pull requests on the code host)
- Click **Close batch change**

Once a batch change is closed, it can't be updated or reopened anymore.

## Delete a batch change

To delete a batch change:

- First, close the batch change
- Instead of a **Close batch change** button, you'll now see a **Delete** button
- Click **Delete**

The batch change is now deleted from the Sourcegraph instance. The changesets it created (and possibly closed) will still exist on the code hosts, since most code hosts don't support deleting changesets.

## Opt out of batch changes

Repository owners who are not interested in batch change changesets can opt out so that their repository will be skipped when a batch spec is executed. To opt out:

- Create a file called `.batchignore` at the root of the repository you wish to be skipped
- Now, `src batch [apply|preview]` will skip that repository if it's yielded by the `on` part of the batch spec

Learn how to create multiple changesets in large-sized repos.
Learn how to create changesets per project in monorepos.
Large repositories often contain multiple projects; such repositories are commonly called **monorepos**. It can make sense to run the batch spec [`steps`](/batch-changes/batch-spec-yaml-reference#steps) separately in each project and create one changeset per project. This can be done by using [`workspaces`](/batch-changes/batch-spec-yaml-reference#workspaces) in the batch spec via two steps:

1. Define the project locations with the `workspaces` property
2. Produce unique `changesetTemplate.branch` names

## Define project locations with `workspaces`

Let's say you have a repository containing multiple TypeScript projects in which you want to update TypeScript by running the following command:

```shell
npm update typescript
```

The repository has the following directory and file structure:

```bash
README
project1/package.json
project1/src/...
project2/package.json
project2/src/...
examples/project3/package.json
examples/project3/src/...
```

The location of the `package.json` files tells us that the TypeScript projects are in `project1`, `project2`, and `examples/project3`. You want to run the `npm update` command in each of these and produce an individual changeset per project. The [`workspaces`](/batch-changes/batch-spec-yaml-reference#workspaces) property in batch specs allows you to do that:

```yaml
name: update-typescript-monorepo
description: This batch change updates the TypeScript dependency to the latest version

on:
  - repositoriesMatchingQuery: our-large-monorepo

workspaces:
  - rootAtLocationOf: package.json
    in: github.com/our-org/our-large-monorepo

steps:
  - run: npm update typescript
    container: node:14

# [...]
```

The `workspaces` property here defines that in `github.com/our-org/our-large-monorepo`, different `workspaces` exist and contain a `package.json` at their root. When executed with `src batch [apply|preview]`, this would produce up to 3 changesets in `github.com/our-org/our-large-monorepo`, one for each project.

## Produce unique `changesetTemplate.branch` names

Since changesets are uniquely identified by their repository and branch, you **must** ensure that multiple changesets in the same repository have different branches. To do that, we make use of [templating](/batch-changes/batch-spec-templating) in the [`changesetTemplate.branch`](/batch-changes/batch-spec-templating#changesettemplate-context) field:

```yaml
# [...]
changesetTemplate:
  title: Update TypeScript
  body: This updates TypeScript to the latest version
  published: false
  commit:
    message: Update TypeScript
  # Templating and helper functions allow us to get the `path` in which
  # the `steps` executed and turn that into a branch name:
  branch: batch-changes/update-typescript-${{ replace steps.path "/" "-" }}
```

The `steps.path` [templating variable](/batch-changes/batch-spec-templating) contains the path in which the `steps` were executed, relative to the root of the repository.
With the file and directory structure above, that means you'd end up with the following branch names:

- `batch-changes/update-typescript-project1`
- `batch-changes/update-typescript-project2`
- `batch-changes/update-typescript-examples-project3`

And with that, you're done and ready to produce multiple changesets in a single repository, with the full batch spec looking like this:

```yaml
name: update-typescript-monorepo
description: This batch change updates the TypeScript dependency to the latest version

on:
  - repository: github.com/sourcegraph/automation-testing

workspaces:
  - rootAtLocationOf: package.json
    in: github.com/sourcegraph/automation-testing

steps:
  - run: npm update typescript
    container: node:14

changesetTemplate:
  title: Update TypeScript
  body: This updates TypeScript to the latest version
  branch: batch-changes/update-typescript-${{ replace steps.path "/" "-" }}
  commit:
    message: Update TypeScript
  published: false
```

You only need to run `src batch [apply|preview]` to execute your batch spec.

## Dynamic discovery of workspaces

The `workspaces` property leverages Sourcegraph search to find the locations of the defined workspaces in the repositories yielded by the [`on`](/batch-changes/batch-spec-yaml-reference#on) property of the batch spec. That has the advantage of being dynamic: whenever `src batch [apply|preview]` is re-executed, Sourcegraph search is used again to find workspaces, automatically picking up new ones and removing workspaces that no longer exist.

## Only downloading workspace data in large repositories

If the repository containing the workspaces is really large and it's not feasible to download it in full to make it available for the `steps` execution, the [`workspaces.onlyFetchWorkspaces`](/batch-changes/batch-spec-yaml-reference#workspacesonlyfetchworkspace) field can be set to `true` to only download the workspaces, without the rest of the repository.

Learn in detail about how to create, view, and filter your Batch Changes.
Batch Changes are created by writing a [batch spec](/batch-changes/batch-spec-yaml-reference) and executing that batch spec with the [Sourcegraph CLI](https://github.com/sourcegraph/src-cli) `src`. Batch Changes can also be used on [multiple projects within a monorepo](/batch-changes/creating-changesets-per-project-in-monorepos) by using the `workspaces` key in your batch spec.

There are two ways of creating a batch change:

1. On your local machine, with the [Sourcegraph CLI](#create-a-batch-change-with-the-sourcegraph-cli)
2. Remotely, with [server-side execution](/batch-changes/server-side)

## Create a batch change with the Sourcegraph CLI

This part of the guide will walk you through creating a batch change on your local machine with the Sourcegraph CLI.

### Requirements

- Sourcegraph instance with repositories in it. See the [Quickstart](/batch-changes/quickstart) guide on how to set up a Sourcegraph instance
- Installed and configured [Sourcegraph CLI](https://github.com/sourcegraph/src-cli). Read about the detailed process in the [quickstart guide](/batch-changes/quickstart#install-the-sourcegraph-cli)
- Configured credentials for the code host(s) on which you'll create changesets. Read the [configuring credentials](/batch-changes/configuring-credentials) docs on how to add and manage credentials

### Writing a batch spec

To create a batch change, you need a **batch spec** describing the change. Here is an example batch spec that describes a batch change to add **Hello World** to all `README` files:

```yaml
version: 2
name: hello-world
description: Add Hello World to READMEs

# Find all repositories that contain a README file.
on:
  - repositoriesMatchingQuery: file:README

# In each repository, run this command. Each repository's resulting diff is captured.
steps:
  - run: IFS=$'\n'; echo Hello World | tee -a $(find -name README)
    container: alpine:3

# Describe the changeset (e.g., GitHub pull request) you want for each repository.
changesetTemplate:
  title: Hello World
  body: My first batch change!
  branch: hello-world # Push the commit to this branch.
  commit:
    message: Append Hello World to all README files
  published: false # Do not publish any changes to the code hosts yet
```

The commits created from your spec will use the `git config` values for `user.name` and `user.email` from your local environment, or `batch-changes@sourcegraph.com` if no user is set. Alternatively, you can also [specify an `author`](/batch-changes/batch-spec-yaml-reference#changesettemplate-commit-author) in this spec.

Learn how to configure access tokens for code hosts when creating changesets.
Interacting with a code host (such as creating, updating, or syncing changesets) is made possible by configuring credentials for that code host. Sourcegraph uses these credentials to manage changesets on your behalf and with your specific permissions.

## Requirements

- Sourcegraph instance with repositories in it. Read the [Quickstart](/batch-changes/quickstart) guide on how to set up a Sourcegraph instance
- Account on the code host with access to the repositories you wish to target with your batch changes

## Types of credentials used by Batch Changes

Batch Changes can use access tokens for all code hosts, and [GitHub apps (experimental)](#github-apps) for GitHub code hosts. Two types of credentials can be configured for use with Batch Changes:

1. **User credential**: A credential set by an individual Batch Changes user for their personal code host user account.
2. **Global service credential** (configurable by admins only): A credential that can be used by any Batch Changes user who does not have a personal credential configured. A global credential is also required for [importing changesets](/batch-changes/tracking-existing-changesets) and for syncing changeset state from the code host when webhooks are not configured. Note, however, that if you're concerned about individual user permissions, relying on a global credential is not recommended, even though importing changesets is currently not supported without one.

Different credentials are used for different types of operations, as shown in the table below:

- 🟢 **Preferred**: Sourcegraph will prefer to use this credential for this operation if it is configured.
- 🟡 **Fallback**: Sourcegraph will fall back to use this credential for this operation if it is configured.
- 🔴 **Unsupported**: Sourcegraph cannot use this credential for this operation.

| **Operation** | **User Credential** | **Global Service Credential** |
| --- | :---: | :---: |
| Pushing a branch with the changes | 🟢 | 🟡 |
| Publishing a changeset | 🟢 | 🟡 |
| Updating a changeset | 🟢 | 🟡 |
| Closing a changeset | 🟢 | 🟡 |
| Importing a changeset | 🔴 | 🟢 |
| Syncing a changeset | 🔴 | 🟢 |

When creating a changeset on a code host, the author of the changeset will reflect the credential used (e.g., on GitHub, the user will be the pull request author). This is why a user credential is preferred for most operations.

## Personal access tokens

Personal access tokens are not strictly required if a global access token has also been configured, but users should add one if they want Sourcegraph to create changesets under their name.

These docs explain how to perform bulk operations on changesets.
Bulk operations allow a single action to be performed across many changesets in a batch change.

## Selecting changesets for a bulk operation

To perform a bulk operation on changesets:

- Click the checkbox next to a changeset in the list view. You can select all changesets you have permission to view
- If you like, select all changesets in the list by using the checkbox in the list header. To select **all** changesets that meet the filters and search currently set, click the **(Select XX changesets)** link in the header toolbar
- Next, from the top right, select the action to perform on all the changesets
- Once changesets are selected, a query is made to determine the bulk operations that can be applied to the selected changesets

## Supported types of bulk operations

Depending on the changesets selected, different types of bulk operations can be applied. For a bulk operation to be available, it has to be applicable to all the selected changesets. Below is a list of supported bulk operations for changesets and the conditions under which they're applicable:

| **Types** | **Description** |
| --- | --- |
| **Commenting** | Post a comment on all selected changesets. Useful for pinging people, reminding them to take a look at the changeset, or posting your favorite emoji |
| **Detach** | Detach a selection of changesets from the batch change to remove them from the archived tab |
| **Re-enqueue** | Re-enqueues the pending changes for all selected changesets that failed |
| **Merge (experimental)** | Merge the selected changesets on code hosts. Some changesets may be unmergeable due to their states, which does not impact the overall bulk operation. Failed merges are listed under the bulk operations tab. In the confirmation modal, you can opt for a squash merge strategy, available on GitHub, GitLab, and Bitbucket Cloud. For Bitbucket Server/Data Center, only regular merges are performed |
| **Close** | Close the selected changesets on the code hosts |
| **Publish** | Publishes the selected changesets, provided they don't have a [`published` field](/batch-changes/batch-spec-yaml-reference#changesettemplate-published) in the batch spec. You can choose between draft and normal changesets in the confirmation modal |
| **Export** | Export selected changesets for later use |

## Monitoring bulk operations

On the **Bulk operations** tab, you can view all bulk operations that have been run over the batch change. Since some bulk operations involve a large number of individual operations, you can track their progress and see what operations have been performed in the past.

Sourcegraph Batch Changes use [batch specs](/batch-changes/create-a-batch-change#writing-a-batch-spec) to define batch changes. This page is a reference guide to the batch spec YAML format in which batch specs are defined.
## `version`

The `version` of the batch spec. Defaults to 1 if not specified. New batch specs should use version 2, and we recommend always specifying the version explicitly.

### Version 1

The schema version before Sourcegraph version 5.5. For now, if no version is specified, batch changes will use this schema version. It is recommended to switch to version 2.

### Version 2 (recommended)

Introduced in Sourcegraph version 5.5. Queries defined under [`on.repositoriesMatchingQuery`](#onrepositoriesmatchingquery) default to keyword search instead of standard search. Authors can override the default by specifying the pattern type explicitly. Refer to the [search syntax docs](/code-search/queries) for more information about pattern types.

### Example

```yaml
version: 2
```

## `name`

The `name` of the batch change, which is unique among all batch changes in the namespace. A batch change's name is case-preserving.

### Examples

```yaml
name: update-go-import-statements
```

```yaml
name: update-node.js
```

## `description`

The `description` of the batch change. It's rendered as Markdown.

### Examples

```yaml
description: This batch change changes all `fmt.Sprintf` calls to `strconv.Itoa`.
```

```yaml
description: |
  This batch change changes all imports from `gopkg.in/sourcegraph/sourcegraph-in-x86-asm`
  to `github.com/sourcegraph/sourcegraph-in-x86-asm`
```

## `on`

The set of repositories (and branches) on which the batch change should run. It's specified as a list of search queries (that match repositories) and specific repositories.

### Examples

```yaml
on:
  - repositoriesMatchingQuery: lang:go fmt.Sprintf\("%d", \w+\) patterntype:regexp
  - repository: github.com/sourcegraph/sourcegraph
```

## `on.repositoriesMatchingQuery`

A Sourcegraph search query that matches a set of repositories (and branches). Each matched repository branch is added to the list of repositories on which the batch change will run.

Your search query should answer the question, "Where do I want to run this batch change?". Search result matches for things like commits, symbols, or file owners will be ignored.

It's good practice to explicitly specify the pattern type. If you don't specify a pattern type, the default is determined by the [batch spec version](#version).
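For example, reusing the regexp query from the `on` example above, a query with an explicit pattern type could look like this:

```yaml
on:
  # `patterntype:regexp` makes the intended pattern type explicit,
  # so the spec behaves the same under both schema versions
  - repositoriesMatchingQuery: lang:go fmt.Sprintf\("%d", \w+\) patterntype:regexp
```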
Understand how to use templating to make your batch changes more powerful.

[Certain fields](#fields-with-template-support) in a [batch spec YAML](/batch-changes/batch-spec-yaml-reference) support templating to create even more powerful and performant batch changes. Templating in a batch spec uses the delimiters `${{` and `}}`. Inside the delimiters, [template variables](#template-variables) and [template helper functions](#template-helper-functions) may be used to produce a text value.

## Example batch spec with templating

Here is an excerpt of a batch spec that uses templating:

```yaml
on:
  - repositoriesMatchingQuery: lang:go fmt.Sprintf("%d", :[v]) patterntype:structural -file:vendor

steps:
  - run: comby -in-place 'fmt.Sprintf("%d", :[v])' 'strconv.Itoa(:[v])' ${{ join repository.search_result_paths " " }}
    # ^ templating starts here
    container: comby/comby
  - run: goimports -w ${{ join previous_step.modified_files " " }}
    # ^ templating starts here
    container: unibeautify/goimports
```

Before executing the first `run` command, `repository.search_result_paths` will be replaced with the relative-to-root-dir file paths of each search result yielded by `repositoriesMatchingQuery`. Using the [template helper function](#template-helper-functions) `join`, an argument list of whitespace-separated values is constructed. The final `run` value that will be executed will look similar to this:

```yaml
run: comby -in-place 'fmt.Sprintf("%d", :[v])' 'strconv.Itoa(:[v])' cmd/src/main.go internal/fmt/fmt.go
```

The result is that `comby` only searches and replaces in those files instead of having to search through the complete repository.

Before the second step is executed, `previous_step.modified_files` will be replaced with the list of files that the previous `comby` step modified. It will look similar to this:

```yaml
run: goimports -w cmd/src/main.go internal/fmt/fmt.go
```

This cheatsheet is a quick reference for common batch change use cases.
Some common patterns come up again and again when writing [batch specs](/batch-changes/batch-spec-yaml-reference). This page collects these patterns to make it easy for you to copy and reuse them. You can also refer to the [batch change examples repo](https://github.com/sourcegraph/batch-change-examples) for more consolidated examples of batch specs. It's also recommended to read about [batch spec templating](/batch-changes/batch-spec-templating), since most of these examples use templating.

## Loop over search result paths in shell script

```yaml
on:
  - repositoriesMatchingQuery: OLD-VALUE
steps:
  - run: |
      IFS=$'\n'
      files="${{ join repository.search_result_paths "\n" }}"
      for file in $files; do
        sed -i 's/OLD-VALUE/NEW-VALUE/g;' "${file}"
      done
    container: alpine:3
```

## Put search result paths in file and loop over them

```yaml
on:
  - repositoriesMatchingQuery: OLD-VALUE
steps:
  - run: |
      while IFS= read -r file || [ -n "$file" ]
      do
        sed -i 's/OLD-VALUE/NEW-VALUE/g;' "${file}"
      done < /tmp/search-results
    container: alpine:3
    files:
      /tmp/search-results: ${{ join repository.search_result_paths "\n" }}
```

## Use search result paths as arguments for single command

```yaml
on:
  - repositoriesMatchingQuery: lang:go fmt.Sprintf("%d", :[v]) patterntype:structural -file:vendor count:10
steps:
  - run: comby -in-place 'fmt.Sprintf("%d", :[v])' 'strconv.Itoa(:[v])' ${{ join repository.search_result_paths " " }}
    container: comby/comby
```

## Format files modified by previous step

```yaml
steps:
  - run: comby -in-place 'fmt.Sprintf("%d", :[v])' 'strconv.Itoa(:[v])' ${{ join repository.search_result_paths " " }}
    container: comby/comby
  - run: goimports -w ${{ join previous_step.modified_files " " }}
    container: unibeautify/goimports
```

## Dynamically set branch name based on workspace

```yaml
workspaces:
  - rootAtLocationOf: package.json
    in: github.com/sourcegraph/*
steps:
  # [... other steps ... ]
  - run: if [[ -f "package.json" ]]; then cat package.json | jq -j .name; fi
    container: jiapantw/jq-alpine:latest
    outputs:
      projectName:
        value: ${{ step.stdout }}
changesetTemplate:
  # [...]
  # If we have an `outputs.projectName` we use it, otherwise we append the path
  # of the workspace. If the path is empty (as is the case in the root folder),
  # we ignore it.
  branch: |
    ${{ if eq outputs.projectName "" }}
    ${{ join_if "-" "thorsten/workspace-discovery" (replace steps.path "/" "-") }}
    ${{ else }}
    thorsten/workspace-discovery-${{ outputs.projectName }}
    ${{ end }}
```

## Process search result paths with script

```yaml
steps:
  - run: |
      for result in "${{ join repository.search_result_paths " " }}"; do
        ruby /tmp/script "${result}" > "${result}.new"
        mv ${result}.new "${result}"
      done;
    container: ruby
    files:
      /tmp/script: |
        #! /usr/bin/env ruby
        require 'yaml';
        content = YAML.load(ARGF.read)
        content['batchchanges'] = 'say hello'
        puts YAML.dump(content)
```

## Use separate file as config file for command

```yaml
steps:
  - run: comby -in-place -matcher .go -config /tmp/comby-conf.toml -f ${{ join repository.search_result_paths "," }}
    container: comby/comby
    files:
      /tmp/comby-conf.toml: |
        [log_to_log15]
        match='log.Printf(":[format]", :[args])'
        rewrite='log15.Warn(":[format]", :[args])'
        rule='where rewrite :[format] { "%:[[_]] " -> "" }, rewrite :[format] { " %:[[_]]" -> "" }, rewrite :[args] { ":[arg~[a-zA-Z0-9.()]+]" -> "\":[arg]\", :[arg]" }'
```

## Publish only changesets on specific branches

```yaml
changesetTemplate:
  # [...]
  published:
    - github.com/my-org/my-repo@my-branch-name: draft
```

## Create new files in repository

```yaml
steps:
  - run: cat /tmp/global-gitignore >> .gitignore
    container: alpine:3
    files:
      /tmp/global-gitignore: |
        # Vim
        *.swp
        # JetBrains/IntelliJ
        .idea
        # Emacs
        *~
        \#*\#
        /.emacs.desktop
        /.emacs.desktop.lock
        .\#*
        .dir-locals.el
```

## Execute steps only in repositories matching name

```yaml
steps:
  # [... other steps ...]
  - run: echo "name contains sourcegraph-testing" >> message.txt
    if: ${{ matches repository.name "*sourcegraph-testing*" }}
    container: alpine:3
```

## Execute steps based on output of previous command

```yaml
steps:
  - run: if [[ -f "go.mod" ]]; then echo "true"; else echo "false"; fi
    container: alpine:3
    outputs:
      goModExists:
        value: ${{ step.stdout }}
  - run: go fmt ./...
    container: golang
    if: ${{ outputs.goModExists }}
```

## Write a GitHub Actions workflow that includes [GitHub expression syntax](https://docs.github.com/en/actions/reference/context-and-expression-syntax-for-github-actions)

```yaml
steps:
  - container: alpine:3
    run: |
      #!/usr/bin/env bash
      mkdir -p .github/workflows
      cat <
```

Learn how to use and configure command recordings from your Sourcegraph instance.
Command recording allows site admins to view all git operations executed on a repository. When enabled, Sourcegraph will record metadata about all git commands run on a repository in Redis, including:

- Command executed (with sensitive information redacted)
- Execution time
- Duration of execution
- Success state
- Output

This provides visibility into git operations performed by Sourcegraph on a repository, which can be useful for debugging and monitoring.

To enable command recording:

- Go to [**Site Admin > Site Configuration**](/admin/config/site_config)
- Add a `gitRecorder` object to the configuration object

```json
{
  // [...]
  "gitRecorder": {
    // the number of commands to record per repo
    "size": 30,
    // repositories to record commands for. This can either be a wildcard '*'
    // to record commands for all repositories or a list of repositories
    "repos": ["*"],
    // git commands to exclude from recording. We exclude the
    // commands below by default.
    "ignoredGitCommands": [
      "show",
      "rev-parse",
      "log",
      "diff",
      "ls-tree"
    ]
  }
}
```

Once enabled, site admins can view recorded commands for a repository via the repository's settings page in the Site Admin UI. Recorded commands include **start time**, **duration**, **exit status**, **command executed**, **directory**, and **output**. Sensitive information like usernames, passwords, and tokens is automatically redacted from the command and output.

Command recording provides visibility into Sourcegraph's interactions with repositories without requiring modifications to Sourcegraph's core Git operations.

### Potential risks

Enabling command recording will increase disk usage in Redis, depending on the number of repositories and the size of the recording set. Since recorded commands are stored in Redis, setting the `size` to a very large number or enabling recording on many repositories could cause the Redis database to fill up quickly. Depending on your configuration, Redis might evict data from the database when it is full, impacting other parts of Sourcegraph that rely on Redis. This could cause Sourcegraph to experience degraded performance or instability.

To avoid issues, proceed cautiously and start with a smaller `size` and fewer repositories. Monitor your Redis memory usage over time and slowly increase the recording `size` and the number of repositories, tuning the configuration based on your instance size and available memory.

Serves all end-user browser and API requests.
To see this dashboard, visit `/-/debug/grafana/d/frontend/frontend` on your Sourcegraph instance.

### Frontend: Search at a glance

#### frontend: 99th_percentile_search_request_duration

99th percentile successful search request duration over 5m
Refer to the [alerts reference](alerts#frontend-99th-percentile-search-request-duration) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100000` on your Sourcegraph instance.

*Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*

**90th percentile successful search request duration over 5m**

Refer to the [alerts reference](alerts#frontend-90th-percentile-search-request-duration) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100001` on your Sourcegraph instance.

*Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*

**Hard timeout search responses every 5m**

Refer to the [alerts reference](alerts#frontend-hard-timeout-search-responses) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100010` on your Sourcegraph instance.

*Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*

**Hard error search responses every 5m**

Refer to the [alerts reference](alerts#frontend-hard-error-search-responses) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100011` on your Sourcegraph instance.

*Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*

**Partial timeout search responses every 5m**

Refer to the [alerts reference](alerts#frontend-partial-timeout-search-responses) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100012` on your Sourcegraph instance.

*Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*

**Search alert user suggestions shown every 5m**

Refer to the [alerts reference](alerts#frontend-search-alert-user-suggestions) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100013` on your Sourcegraph instance.

*Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*

**90th percentile page load latency over all routes over 10m**

Refer to the [alerts reference](alerts#frontend-page-load-latency) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100020` on your Sourcegraph instance.

*Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*

**99th percentile code-intel successful search request duration over 5m**

Refer to the [alerts reference](alerts#frontend-99th-percentile-search-codeintel-request-duration) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100100` on your Sourcegraph instance.

*Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*

**90th percentile code-intel successful search request duration over 5m**

Refer to the [alerts reference](alerts#frontend-90th-percentile-search-codeintel-request-duration) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100101` on your Sourcegraph instance.

*Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*

**Hard timeout search code-intel responses every 5m**

Refer to the [alerts reference](alerts#frontend-hard-timeout-search-codeintel-responses) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100110` on your Sourcegraph instance.

*Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*

**Hard error search code-intel responses every 5m**

Refer to the [alerts reference](alerts#frontend-hard-error-search-codeintel-responses) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100111` on your Sourcegraph instance.

*Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*

**Partial timeout search code-intel responses every 5m**

Refer to the [alerts reference](alerts#frontend-partial-timeout-search-codeintel-responses) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100112` on your Sourcegraph instance.

*Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*

**Search code-intel alert user suggestions shown every 5m**

Refer to the [alerts reference](alerts#frontend-search-codeintel-alert-user-suggestions) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100113` on your Sourcegraph instance.

*Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*

**99th percentile successful search API request duration over 5m**

Refer to the [alerts reference](alerts#frontend-99th-percentile-search-api-request-duration) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100200` on your Sourcegraph instance.

*Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*

**90th percentile successful search API request duration over 5m**

Refer to the [alerts reference](alerts#frontend-90th-percentile-search-api-request-duration) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100201` on your Sourcegraph instance.

*Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*

**Hard error search API responses every 5m**

Refer to the [alerts reference](alerts#frontend-hard-error-search-api-responses) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100210` on your Sourcegraph instance.

*Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*

**Partial timeout search API responses every 5m**

Refer to the [alerts reference](alerts#frontend-partial-timeout-search-api-responses) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100211` on your Sourcegraph instance.

*Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*

**Search API alert user suggestions shown every 5m**

Refer to the [alerts reference](alerts#frontend-search-api-alert-user-suggestions) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100212` on your Sourcegraph instance.

*Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*

**Duration since last successful site configuration update (by instance)**

The duration since the configuration client used by the "frontend" service last successfully updated its site configuration. Long durations could indicate issues updating the site configuration.

This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100300` on your Sourcegraph instance.

*Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*

**Maximum duration since last successful site configuration update (all "frontend" instances)**

Refer to the [alerts reference](alerts#frontend-frontend-site-configuration-duration-since-last-successful-update-by-instance) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100301` on your Sourcegraph instance.

*Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*

**Aggregate graphql operations every 5m**
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100400` on your Sourcegraph instance.

*Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*

**Aggregate successful graphql operation duration distribution over 5m**

This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100401` on your Sourcegraph instance.

*Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*

**Aggregate graphql operation errors every 5m**

This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100402` on your Sourcegraph instance.

*Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*

**Aggregate graphql operation error rate over 5m**

This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100403` on your Sourcegraph instance.

*Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*

**Graphql operations every 5m**

This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100410` on your Sourcegraph instance.

*Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*

**99th percentile successful graphql operation duration over 5m**

This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100411` on your Sourcegraph instance.

*Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*

**Graphql operation errors every 5m**

This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100412` on your Sourcegraph instance.

*Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*

**Graphql operation error rate over 5m**

This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100413` on your Sourcegraph instance.

*Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*

**Aggregate enqueuer operations every 5m**

This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100500` on your Sourcegraph instance.

*Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*

**Aggregate successful enqueuer operation duration distribution over 5m**

This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100501` on your Sourcegraph instance.

*Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*

**Aggregate enqueuer operation errors every 5m**

This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100502` on your Sourcegraph instance.

*Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*

**Aggregate enqueuer operation error rate over 5m**

This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100503` on your Sourcegraph instance.

*Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*

**Enqueuer operations every 5m**

This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100510` on your Sourcegraph instance.

*Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*

**99th percentile successful enqueuer operation duration over 5m**

This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100511` on your Sourcegraph instance.

*Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*

**Enqueuer operation errors every 5m**

This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100512` on your Sourcegraph instance.

*Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*

**Enqueuer operation error rate over 5m**

This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100513` on your Sourcegraph instance.

*Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*

**Aggregate store operations every 5m**

This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100600` on your Sourcegraph instance.

*Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*

**Aggregate successful store operation duration distribution over 5m**

This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100601` on your Sourcegraph instance.

*Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*

**Aggregate store operation errors every 5m**

This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100602` on your Sourcegraph instance.

*Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*

**Aggregate store operation error rate over 5m**

This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100603` on your Sourcegraph instance.

*Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*

**Store operations every 5m**

This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100610` on your Sourcegraph instance.

*Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*

**99th percentile successful store operation duration over 5m**

This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100611` on your Sourcegraph instance.

*Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*

**Store operation errors every 5m**

This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100612` on your Sourcegraph instance.

*Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*

**Store operation error rate over 5m**

This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100613` on your Sourcegraph instance.

*Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*

**Store operations every 5m**

This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100700` on your Sourcegraph instance.

*Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*

**Aggregate successful store operation duration distribution over 5m**
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100701` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Store operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100702` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Store operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100703` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate store operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100800` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate successful store operation duration distribution over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100801` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate store operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100802` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate store operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100803` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Store operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100810` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*99th percentile successful store operation duration over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100811` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Store operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100812` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Store operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100813` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate client operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100900` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate successful client operation duration distribution over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100901` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate client operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100902` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate client operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100903` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Client operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100910` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*99th percentile successful client operation duration over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100911` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Client operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100912` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Client operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100913` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate store operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101000` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate successful store operation duration distribution over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101001` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate store operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101002` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate store operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101003` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Store operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101010` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*99th percentile successful store operation duration over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101011` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Store operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101012` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Store operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101013` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate service operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101100` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate successful service operation duration distribution over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101101` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate service operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101102` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate service operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101103` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Service operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101110` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*99th percentile successful service operation duration over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101111` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Service operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101112` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Service operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101113` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate service operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101200` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate successful service operation duration distribution over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101201` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate service operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101202` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate service operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101203` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Service operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101210` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*99th percentile successful service operation duration over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101211` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Service operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101212` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Service operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101213` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate service operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101300` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate successful service operation duration distribution over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101301` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate service operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101302` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate service operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101303` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Service operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101310` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*99th percentile successful service operation duration over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101311` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Service operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101312` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Service operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101313` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate service operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101400` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate successful service operation duration distribution over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101401` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate service operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101402` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate service operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101403` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Service operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101410` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*99th percentile successful service operation duration over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101411` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Service operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101412` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Service operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101413` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate graphql operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101500` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Aggregate successful graphql operation duration distribution over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101501` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Aggregate graphql operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101502` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Aggregate graphql operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101503` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Graphql operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101510` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*99th percentile successful graphql operation duration over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101511` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Graphql operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101512` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Graphql operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101513` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Aggregate store operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101600` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Aggregate successful store operation duration distribution over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101601` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Aggregate store operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101602` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Aggregate store operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101603` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Store operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101610` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*99th percentile successful store operation duration over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101611` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Store operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101612` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Store operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101613` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Aggregate service operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101700` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Aggregate successful service operation duration distribution over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101701` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Aggregate service operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101702` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Aggregate service operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101703` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Service operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101710` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*99th percentile successful service operation duration over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101711` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Service operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101712` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Service operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101713` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Store operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101800` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*99th percentile successful store operation duration over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101801` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Store operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101802` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Store operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101803` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Aggregate http handler operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101900` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Aggregate successful http handler operation duration distribution over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101901` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Aggregate http handler operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101902` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Aggregate http handler operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101903` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Http handler operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101910` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*99th percentile successful http handler operation duration over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101911` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Http handler operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101912` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Http handler operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101913` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Migration handler operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102000` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate successful migration handler operation duration distribution over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102001` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Migration handler operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102002` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Migration handler operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102003` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Migration handler operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102100` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate successful migration handler operation duration distribution over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102101` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Migration handler operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102102` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Migration handler operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102103` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Request rate across all methods over 2m
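Every group in the table above follows the same observation pattern: an operations counter, an errors counter, and a duration histogram, rendered as "operations", "errors", "error rate", and duration-percentile panels. As a rough sketch of how such panels are typically expressed in PromQL (the metric names `src_example_operation_total`, `src_example_operation_errors_total`, and `src_example_operation_duration_seconds_bucket` are illustrative placeholders, not names taken from this page):

```promql
# Operations every 5m: total operations over the window, summed across instances
sum(increase(src_example_operation_total[5m]))

# Operation error rate over 5m: errors as a percentage of all operations
sum(increase(src_example_operation_errors_total[5m]))
  / sum(increase(src_example_operation_total[5m])) * 100

# 99th percentile successful operation duration over 5m,
# computed from the histogram buckets
histogram_quantile(
  0.99,
  sum by (le) (rate(src_example_operation_duration_seconds_bucket[5m]))
)
```

The "aggregate" variants sum away any per-operation labels before plotting, while the non-aggregate variants typically keep a per-operation label (for example `op`) in the `by` clause so each operation gets its own series.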
All of the gRPC server metrics panels below are managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core) and have no related alerts. To see a panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=<panel ID>` on your Sourcegraph instance.

| **Panel** | **Panel ID** | **Description** |
|-----------|--------------|-----------------|
| Request rate across all methods over 2m | 102200 | The number of gRPC requests received per second across all methods, aggregated across all instances. |
| Request rate per-method over 2m | 102201 | The number of gRPC requests received per second, broken out per method and aggregated across all instances. |
| Error percentage across all methods over 2m | 102210 | The percentage of gRPC requests that fail across all methods, aggregated across all instances. |
| Error percentage per-method over 2m | 102211 | The percentage of gRPC requests that fail per method, aggregated across all instances. |
| 99th percentile response time per method over 2m | 102220 | The 99th percentile response time per method, aggregated across all instances. |
| 90th percentile response time per method over 2m | 102221 | The 90th percentile response time per method, aggregated across all instances. |
| 75th percentile response time per method over 2m | 102222 | The 75th percentile response time per method, aggregated across all instances. |
| 99.9th percentile total response size per method over 2m | 102230 | The 99.9th percentile total per-RPC response size per method, aggregated across all instances. |
| 90th percentile total response size per method over 2m | 102231 | The 90th percentile total per-RPC response size per method, aggregated across all instances. |
| 75th percentile total response size per method over 2m | 102232 | The 75th percentile total per-RPC response size per method, aggregated across all instances. |
| 99.9th percentile individual sent message size per method over 2m | 102240 | The 99.9th percentile size of each individual protocol buffer sent by the service per method, aggregated across all instances. |
| 90th percentile individual sent message size per method over 2m | 102241 | The 90th percentile size of each individual protocol buffer sent by the service per method, aggregated across all instances. |
| 75th percentile individual sent message size per method over 2m | 102242 | The 75th percentile size of each individual protocol buffer sent by the service per method, aggregated across all instances. |
| Average streaming response message count per-method over 2m | 102250 | The average number of response messages sent during a streaming RPC method, broken out per method and aggregated across all instances. |
| Response codes rate per-method over 2m | 102260 | The rate of all generated gRPC response codes per method, aggregated across all instances. |
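The request-rate, error-percentage, and percentile panels above map onto standard gRPC server instrumentation. A minimal sketch, assuming the stock go-grpc-prometheus metric names `grpc_server_handled_total` (with a `grpc_code` label) and `grpc_server_handling_seconds_bucket`; the exact metric names and label filters on a given Sourcegraph instance may differ:

```promql
# Request rate across all methods over 2m, aggregated across all instances
sum(rate(grpc_server_handled_total[2m]))

# Error percentage across all methods over 2m: non-OK responses over all responses
100 * sum(rate(grpc_server_handled_total{grpc_code!="OK"}[2m]))
    / sum(rate(grpc_server_handled_total[2m]))

# 99th percentile response time per method over 2m
histogram_quantile(
  0.99,
  sum by (le, grpc_method) (rate(grpc_server_handling_seconds_bucket[2m]))
)
```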
The client-side panels below are aggregated across all "zoekt_configuration" clients. All are managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core) and have no related alerts; to see a panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=<panel ID>` on your Sourcegraph instance.

| **Panel** | **Panel ID** | **Description** |
|-----------|--------------|-----------------|
| Client baseline error percentage across all methods over 2m | 102300 | The percentage of gRPC requests that fail across all methods (regardless of whether or not there was an internal error). |
| Client baseline error percentage per-method over 2m | 102301 | The percentage of gRPC requests that fail per method (regardless of whether or not there was an internal error). |
| Client baseline response codes rate per-method over 2m | 102302 | The rate of all generated gRPC response codes per method (regardless of whether or not there was an internal error). |
| Client-observed gRPC internal error percentage across all methods over 2m | 102310 | The percentage of gRPC requests that appear to fail due to gRPC internal errors across all methods (see the note on internal errors below). |
| Client-observed gRPC internal error percentage per-method over 2m | 102311 | The percentage of gRPC requests that appear to fail due to gRPC internal errors per method (see the note below). |
| Client-observed gRPC internal error response code rate per-method over 2m | 102312 | The rate of gRPC internal-error response codes per method (see the note below). |
| Client retry percentage across all methods over 2m | 102400 | The percentage of gRPC requests that were retried across all methods. |
| Client retry percentage per-method over 2m | 102401 | The percentage of gRPC requests that were retried, broken out per method. |
| Client retry count per-method over 2m | 102402 | The count of gRPC requests that were retried, broken out per method. |

**Note on internal errors:** internal errors are ones that appear to originate from the https://github.com/grpc/grpc-go library itself, rather than from any user-written application code. These errors can be caused by a variety of issues, and can originate from either the code-generated "zoekt_configuration" gRPC client or the gRPC server. They might be solvable by adjusting the gRPC configuration, or they might indicate a bug in Sourcegraph's use of gRPC. When debugging, knowing that a particular error comes from the grpc-go library itself (an "internal error") as opposed to "normal" application code can be helpful when trying to fix it. Internal errors are detected via a very coarse heuristic (seeing if the error starts with `grpc:`, etc.), so it's possible that some gRPC-specific issues are not categorized as internal errors.
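The split between the "baseline" and "client-observed internal error" panels can be expressed directly in a query once failures are labelled by the internal-error heuristic described in the note above. A hedged sketch, assuming the stock go-grpc-prometheus client metric `grpc_client_handled_total` and a hypothetical `is_internal_error` label (the label name used by Sourcegraph's instrumentation is an assumption here):

```promql
# Client baseline error percentage across all methods over 2m:
# every non-OK response counts, whether or not it was an internal error
100 * sum(rate(grpc_client_handled_total{grpc_code!="OK"}[2m]))
    / sum(rate(grpc_client_handled_total[2m]))

# Client-observed internal error percentage across all methods over 2m:
# only failures flagged by the coarse heuristic (error message starts with "grpc:")
100 * sum(rate(grpc_client_handled_total{grpc_code!="OK",is_internal_error="true"}[2m]))
    / sum(rate(grpc_client_handled_total[2m]))
```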
A second set of gRPC server metrics panels follows the same definitions as the set above (the query sketch above applies here as well). All are managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core) and have no related alerts.

| **Panel** | **Panel ID** | **Description** |
|-----------|--------------|-----------------|
| Request rate across all methods over 2m | 102500 | The number of gRPC requests received per second across all methods, aggregated across all instances. |
| Request rate per-method over 2m | 102501 | The number of gRPC requests received per second, broken out per method and aggregated across all instances. |
| Error percentage across all methods over 2m | 102510 | The percentage of gRPC requests that fail across all methods, aggregated across all instances. |
| Error percentage per-method over 2m | 102511 | The percentage of gRPC requests that fail per method, aggregated across all instances. |
| 99th percentile response time per method over 2m | 102520 | The 99th percentile response time per method, aggregated across all instances. |
| 90th percentile response time per method over 2m | 102521 | The 90th percentile response time per method, aggregated across all instances. |
| 75th percentile response time per method over 2m | 102522 | The 75th percentile response time per method, aggregated across all instances. |
| 99.9th percentile total response size per method over 2m | 102530 | The 99.9th percentile total per-RPC response size per method, aggregated across all instances. |
| 90th percentile total response size per method over 2m | 102531 | The 90th percentile total per-RPC response size per method, aggregated across all instances. |
| 75th percentile total response size per method over 2m | 102532 | The 75th percentile total per-RPC response size per method, aggregated across all instances. |
| 99.9th percentile individual sent message size per method over 2m | 102540 | The 99.9th percentile size of each individual protocol buffer sent by the service per method, aggregated across all instances. |
| 90th percentile individual sent message size per method over 2m | 102541 | The 90th percentile size of each individual protocol buffer sent by the service per method, aggregated across all instances. |
| 75th percentile individual sent message size per method over 2m | 102542 | The 75th percentile size of each individual protocol buffer sent by the service per method, aggregated across all instances. |
| Average streaming response message count per-method over 2m | 102550 | The average number of response messages sent during a streaming RPC method, broken out per method and aggregated across all instances. |
| Response codes rate per-method over 2m | 102560 | The rate of all generated gRPC response codes per method, aggregated across all instances. |
The percentage of gRPC requests that fail across all methods (regardless of whether or not there was an internal error), aggregated across all "internal_api" clients. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102600` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*Client baseline error percentage per-method over 2m
The percentage of gRPC requests that fail per method (regardless of whether or not there was an internal error), aggregated across all "internal_api" clients. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102601` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*Client baseline response codes rate per-method over 2m
The rate of all generated gRPC response codes per method (regardless of whether or not there was an internal error), aggregated across all "internal_api" clients. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102602` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*Client-observed gRPC internal error percentage across all methods over 2m
The percentage of gRPC requests that appear to fail due to gRPC internal errors across all methods, aggregated across all "internal_api" clients. **Note**: Internal errors are ones that appear to originate from the https://github.com/grpc/grpc-go library itself, rather than from any user-written application code. These errors can be caused by a variety of issues, and can originate from either the code-generated "internal_api" gRPC client or gRPC server. These errors might be solvable by adjusting the gRPC configuration, or they might indicate a bug in Sourcegraph's use of gRPC. When debugging, knowing that a particular error comes from the grpc-go library itself (an `internal error`) as opposed to `normal` application code can be helpful when trying to fix it. **Note**: Internal errors are detected via a very coarse heuristic (seeing if the error starts with `grpc:`, etc.). Because of this, it's possible that some gRPC-specific issues might not be categorized as internal errors. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102610` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*Client-observed gRPC internal error percentage per-method over 2m
The percentage of gRPC requests that appear to fail due to gRPC internal errors per method, aggregated across all "internal_api" clients. **Note**: Internal errors are ones that appear to originate from the https://github.com/grpc/grpc-go library itself, rather than from any user-written application code. These errors can be caused by a variety of issues, and can originate from either the code-generated "internal_api" gRPC client or gRPC server. These errors might be solvable by adjusting the gRPC configuration, or they might indicate a bug in Sourcegraph's use of gRPC. When debugging, knowing that a particular error comes from the grpc-go library itself (an `internal error`) as opposed to `normal` application code can be helpful when trying to fix it. **Note**: Internal errors are detected via a very coarse heuristic (seeing if the error starts with `grpc:`, etc.). Because of this, it's possible that some gRPC-specific issues might not be categorized as internal errors. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102611` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*Client-observed gRPC internal error response code rate per-method over 2m
The rate of gRPC internal-error response codes per method, aggregated across all "internal_api" clients. **Note**: Internal errors are ones that appear to originate from the https://github.com/grpc/grpc-go library itself, rather than from any user-written application code. These errors can be caused by a variety of issues, and can originate from either the code-generated "internal_api" gRPC client or gRPC server. These errors might be solvable by adjusting the gRPC configuration, or they might indicate a bug in Sourcegraph's use of gRPC. When debugging, knowing that a particular error comes from the grpc-go library itself (an `internal error`) as opposed to `normal` application code can be helpful when trying to fix it. **Note**: Internal errors are detected via a very coarse heuristic (seeing if the error starts with `grpc:`, etc.). Because of this, it's possible that some gRPC-specific issues might not be categorized as internal errors. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102612` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*
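To make the heuristic above concrete, here is a minimal Go sketch of a prefix-based check of that shape. The function name is illustrative and this is not Sourcegraph's actual implementation; the real classifier may consider more cases than a single `grpc:` prefix.

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// probablyInternalGRPCError reports whether err looks like it originated
// inside the grpc-go library itself rather than in application code, using
// the same kind of coarse prefix check described above.
func probablyInternalGRPCError(err error) bool {
	return err != nil && strings.HasPrefix(err.Error(), "grpc:")
}

func main() {
	// An error message of the style grpc-go itself produces:
	fmt.Println(probablyInternalGRPCError(errors.New("grpc: the client connection is closing"))) // true
	// An ordinary application error:
	fmt.Println(probablyInternalGRPCError(errors.New("repo not found"))) // false
}
```

Because the check is purely textual, an application error that happens to begin with `grpc:` would be miscounted, which is exactly the coarseness the note above warns about.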
Client retry percentage across all methods over 2m
The percentage of gRPC requests that were retried across all methods, aggregated across all "internal_api" clients. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102700` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*Client retry percentage per-method over 2m
The percentage of gRPC requests that were retried, aggregated across all "internal_api" clients and broken out per method. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102701` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*Client retry count per-method over 2m
The count of gRPC requests that were retried, aggregated across all "internal_api" clients and broken out per method. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102702` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*Internal indexed search error responses every 5m
Refer to the [alerts reference](alerts#frontend-internal-indexed-search-error-responses) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102800` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Internal unindexed search error responses every 5m
Refer to the [alerts reference](alerts#frontend-internal-unindexed-search-error-responses) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102801` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*99th percentile successful gitserver query duration over 5m
Refer to the [alerts reference](alerts#frontend-99th-percentile-gitserver-duration) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102810` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Gitserver error responses every 5m
Refer to the [alerts reference](alerts#frontend-gitserver-error-responses) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102811` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Warning test alert metric
Refer to the [alerts reference](alerts#frontend-observability-test-alert-warning) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102820` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Critical test alert metric
Refer to the [alerts reference](alerts#frontend-observability-test-alert-critical) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102821` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Rate of API requests to sign-in
Rate (QPS) of requests to sign-in. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102900` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*99th percentile of sign-in latency
99th percentile of sign-in latency. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102901` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Percentage of sign-in requests by HTTP code
Percentage of sign-in requests grouped by HTTP code. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102902` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Rate of API requests to sign-up
Rate (QPS) of requests to sign-up. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102910` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*99th percentile of sign-up latency
99th percentile of sign-up latency. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102911` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Percentage of sign-up requests by HTTP code
Percentage of sign-up requests grouped by HTTP code. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102912` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Rate of API requests to sign-out
Rate (QPS) of requests to sign-out. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102920` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*99th percentile of sign-out latency
99th percentile of sign-out latency. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102921` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Percentage of sign-out requests that return non-303 HTTP code
Percentage of sign-out requests grouped by HTTP code. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102922` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Rate of failed sign-in attempts
Failed sign-in attempts per minute. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102930` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Rate of account lockouts
Account lockouts per minute. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102931` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Rate of external HTTP requests by host over 1m
Shows the rate of external HTTP requests made by Sourcegraph to other services, broken down by host. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=103000` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Rate of external HTTP requests by host and response code over 1m
Shows the rate of external HTTP requests made by Sourcegraph to other services, broken down by host and response code. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=103010` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Rate of API requests to cody endpoints (excluding GraphQL)
Rate (QPS) of requests to Cody-related endpoints. completions.stream is for the conversational endpoints. completions.code is for the code auto-complete endpoints. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=103100` on your Sourcegraph instance. Cryptographic requests to Cloud KMS every 1m
Refer to the [alerts reference](alerts#frontend-cloudkms-cryptographic-requests) for 2 alerts related to this panel. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=103200` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Average encryption cache hit ratio per workload
Encryption cache hit ratio (hits / (hits + misses)), minimum across all instances of a workload. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=103201` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*
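Since this panel plots a plain ratio, a tiny worked example may help; `hitRatio` is a hypothetical name for illustration, not Sourcegraph's API.

```go
package main

import "fmt"

// hitRatio computes the cache hit ratio plotted above: hits / (hits + misses).
// It returns 0 for a cache that has seen no traffic, avoiding a division by zero.
func hitRatio(hits, misses float64) float64 {
	if hits+misses == 0 {
		return 0
	}
	return hits / (hits + misses)
}

func main() {
	fmt.Printf("%.2f\n", hitRatio(90, 10)) // 0.90, i.e. 90% of lookups were served from cache
}
```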
Rate of encryption cache evictions - sum across all instances of a given workload
Rate of encryption cache evictions (caused by the cache exceeding its maximum size), summed across all instances of a workload. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=103202` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Maximum open
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=103300` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Established
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=103301` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Used
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=103310` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Idle
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=103311` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Mean blocked seconds per conn request
Refer to the [alerts reference](alerts#frontend-mean-blocked-seconds-per-conn-request) for 2 alerts related to this panel. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=103320` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Closed by SetMaxIdleConns
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=103330` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Closed by SetConnMaxLifetime
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=103331` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Closed by SetConnMaxIdleTime
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=103332` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*
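The connection panels above (`Maximum open`, `Established`, `Used`, `Idle`, the blocked-seconds panel, and the three `Closed by ...` panels) map directly onto Go's standard `database/sql` pool. The sketch below shows where those knobs live and where the plotted counters come from; the driver choice, DSN, and limit values are illustrative, not Sourcegraph's configuration.

```go
package main

import (
	"database/sql"
	"log"
	"time"

	_ "github.com/lib/pq" // hypothetical Postgres driver choice
)

func main() {
	// Placeholder DSN; substitute your own connection string.
	db, err := sql.Open("postgres", "postgres://user:pass@localhost:5432/db?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Each limit corresponds to one of the "Closed by ..." panels: the pool
	// separately counts connections closed for exceeding the idle count,
	// the maximum lifetime, and the maximum idle time.
	db.SetMaxOpenConns(30)                  // bounds the "Maximum open" panel
	db.SetMaxIdleConns(10)                  // excess idle conns -> "Closed by SetMaxIdleConns"
	db.SetConnMaxLifetime(30 * time.Minute) // old conns -> "Closed by SetConnMaxLifetime"
	db.SetConnMaxIdleTime(5 * time.Minute)  // long-idle conns -> "Closed by SetConnMaxIdleTime"

	// db.Stats() exposes the same counters the panels plot; for example,
	// WaitDuration and WaitCount feed "Mean blocked seconds per conn request".
	log.Printf("%+v", db.Stats())
}
```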
Container missing
This value is the number of times a container has not been seen for more than one minute. If you observe this value change independently of deployment events (such as an upgrade), it could indicate pods are being OOM killed or terminated for some other reason.
- **Kubernetes:**
  - Determine if the pod was OOM killed using `kubectl describe pod (frontend|sourcegraph-frontend)` (look for `OOMKilled: true`) and, if so, consider increasing the memory limit in the relevant `Deployment.yaml`.
  - Check the logs before the container restarted to see if there are `panic:` messages or similar using `kubectl logs -p (frontend|sourcegraph-frontend)`.
- **Docker Compose:**
  - Determine if the container was OOM killed using `docker inspect -f '{{json .State}}' (frontend|sourcegraph-frontend)` (look for `"OOMKilled":true`) and, if so, consider increasing the memory limit of the (frontend|sourcegraph-frontend) container in `docker-compose.yml`.
  - Check the logs before the container restarted to see if there are `panic:` messages or similar using `docker logs (frontend|sourcegraph-frontend)` (note this will include logs from the previous and currently running container).

This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=103400` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Container cpu usage total (1m average) across all cores by instance
Refer to the [alerts reference](alerts#frontend-container-cpu-usage) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=103401` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Container memory usage by instance
Refer to the [alerts reference](alerts#frontend-container-memory-usage) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=103402` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Filesystem reads and writes rate by instance over 1h
This value indicates the number of filesystem read and write operations by containers of this service. When extremely high, this can indicate a resource usage problem, or can cause problems with the service itself, especially if high values or spikes correlate with {{CONTAINER_NAME}} issues. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=103403` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Container cpu usage total (90th percentile over 1d) across all cores by instance
Refer to the [alerts reference](alerts#frontend-provisioning-container-cpu-usage-long-term) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=103500` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Container memory usage (1d maximum) by instance
Refer to the [alerts reference](alerts#frontend-provisioning-container-memory-usage-long-term) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=103501` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Container cpu usage total (5m maximum) across all cores by instance
Refer to the [alerts reference](alerts#frontend-provisioning-container-cpu-usage-short-term) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=103510` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Container memory usage (5m maximum) by instance
Refer to the [alerts reference](alerts#frontend-provisioning-container-memory-usage-short-term) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=103511` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Container OOMKILL events total by instance
This value indicates the total number of times the container's main process or child processes were terminated by the OOM killer. When it occurs frequently, it is an indicator of underprovisioning. Refer to the [alerts reference](alerts#frontend-container-oomkill-events-total) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=103512` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Maximum active goroutines
A high value here indicates a possible goroutine leak. Refer to the [alerts reference](alerts#frontend-go-goroutines) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=103600` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*
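The gauge behind this panel is the Go runtime's goroutine count, which the standard Prometheus Go collector exports as `go_goroutines`. A toy program showing how a leak surfaces in that number:

```go
package main

import (
	"fmt"
	"runtime"
)

func main() {
	fmt.Println("baseline goroutines:", runtime.NumGoroutine())

	// Deliberately leak goroutines: each one blocks forever.
	for i := 0; i < 100; i++ {
		go func() { select {} }()
	}

	// The count is now roughly 100 higher and will never come back down,
	// which is the steadily climbing shape to watch for on this panel.
	fmt.Println("after leak:", runtime.NumGoroutine())
}
```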
Maximum Go garbage collection duration
Refer to the [alerts reference](alerts#frontend-go-gc-duration-seconds) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=103601` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Percentage pods available
Refer to the [alerts reference](alerts#frontend-pods-available-percentage) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=103700` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Total number of search clicks over 6h
The total number of search clicks across all search types over a 6 hour window. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=103800` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*Percent of clicks on top search result over 6h
The percent of clicks that were on the top search result, excluding searches with very few results (3 or fewer). This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=103801` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*Percent of clicks on top 3 search results over 6h
The percent of clicks that were on the first 3 search results, excluding searches with very few results (3 or fewer). This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=103802` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*Distribution of clicked search result type over 6h
The distribution of clicked search results by result type. At every point in time, the values should sum to 100. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=103810` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*Percent of zoekt searches that hit the flush time limit
The percent of Zoekt searches that hit the flush time limit. These searches don't visit all matches, so they could be missing relevant results, or be non-deterministic. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=103811` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*Email delivery failure rate over 30 minutes
Refer to the [alerts reference](alerts#frontend-email-delivery-failures) for 2 alerts related to this panel. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=103900` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Total emails successfully delivered every 30 minutes
Total emails successfully delivered. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=103910` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Emails successfully delivered every 30 minutes by source
Emails successfully delivered by source, i.e. product feature. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=103911` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Mean successful sentinel search duration over 2h
Mean search duration for all successful sentinel queries. Refer to the [alerts reference](alerts#frontend-mean-successful-sentinel-duration-over-2h) for 2 alerts related to this panel. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=104000` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Mean successful sentinel stream latency over 2h
Mean time to first result for all successful streaming sentinel queries. Refer to the [alerts reference](alerts#frontend-mean-sentinel-stream-latency-over-2h) for 2 alerts related to this panel. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=104001` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*90th percentile successful sentinel search duration over 2h
90th percentile search duration for all successful sentinel queries. Refer to the [alerts reference](alerts#frontend-90th-percentile-successful-sentinel-duration-over-2h) for 2 alerts related to this panel. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=104010` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*90th percentile successful sentinel stream latency over 2h
90th percentile time to first result for all successful streaming sentinel queries. Refer to the [alerts reference](alerts#frontend-90th-percentile-sentinel-stream-latency-over-2h) for 2 alerts related to this panel. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=104011` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Mean successful sentinel search duration by query
Mean search duration for successful sentinel queries, broken down by query. Useful for debugging whether a slowdown is limited to a specific type of query. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=104020` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Mean successful sentinel stream latency by query
Mean time to first result for successful streaming sentinel queries, broken down by query. Useful for debugging whether a slowdown is limited to a specific type of query. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=104021` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*90th percentile successful sentinel search duration by query
90th percentile search duration for successful sentinel queries, broken down by query. Useful for debugging whether a slowdown is limited to a specific type of query. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=104030` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*90th percentile successful sentinel stream latency by query
90th percentile time to first result for successful streaming sentinel queries, broken down by query. Useful for debugging whether a slowdown is limited to a specific type of query. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=104031` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*90th percentile unsuccessful sentinel search duration by query
90th percentile search duration of _unsuccessful_ sentinel queries (by error or timeout), broken down by query. Useful for debugging how the performance of failed requests affects UX. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=104040` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*75th percentile successful sentinel search duration by query
75th percentile search duration of successful sentinel queries, broken down by query. Useful for debugging whether a slowdown is limited to a specific type of query. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=104050` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*75th percentile successful sentinel stream latency by query
75th percentile time to first result for successful streaming sentinel queries, broken down by query. Useful for debugging whether a slowdown is limited to a specific type of query. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=104051` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*75th percentile unsuccessful sentinel search duration by query
75th percentile search duration of _unsuccessful_ sentinel queries (by error or timeout), broken down by query. Useful for debugging how the performance of failed requests affects UX. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=104060` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Unsuccessful status rate
The rate of unsuccessful sentinel queries, broken down by failure type. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=104070` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*P95 time to handle incoming webhooks
p95 response time to incoming webhook requests from code hosts. Increases in response time can indicate that the database is under too much load to keep up with the incoming requests. See the [incoming webhooks documentation](https://sourcegraph.com/docs/admin/config/webhooks/incoming) for more details. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=104100` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Aggregate search aggregations operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=104200` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Aggregate successful search aggregations operation duration distribution over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=104201` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Aggregate search aggregations operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=104202` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Aggregate search aggregations operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=104203` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Search aggregations operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=104210` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*99th percentile successful search aggregations operation duration over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=104211` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Search aggregations operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=104212` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Search aggregations operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=104213` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*
## Git Server
Stores, manages, and operates Git repositories.
To see this dashboard, visit `/-/debug/grafana/d/gitserver/gitserver` on your Sourcegraph instance.
#### gitserver: go_routines
Go routines
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100000` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Container CPU throttling time %
A high value indicates that the container is spending too much time waiting for CPU cycles. Refer to the [alerts reference](alerts#gitserver-cpu-throttling-time) for 2 alerts related to this panel. To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100010` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*CPU usage seconds
- This value should not exceed 75% of the CPU limit over a longer period of time.
- We cannot alert on this as we don't know the resource allocation.
- If this value is high for a longer time, consider increasing the CPU limit for the container.

This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100011` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Disk space remaining
Indicates disk space remaining for each gitserver instance, which is used to determine when to start evicting least-used repository clones from disk (default 10%, configured by `SRC_REPOS_DESIRED_PERCENT_FREE`). Refer to the [alerts reference](alerts#gitserver-disk-space-remaining) for 2 alerts related to this panel. To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100020` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*
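The eviction behaviour this panel feeds can be pictured as a simple threshold check. Everything below, including the function name, is an illustrative sketch of the documented policy (evict least-used clones once free space drops under `SRC_REPOS_DESIRED_PERCENT_FREE`, default 10%), not gitserver's actual code.

```go
package main

import (
	"fmt"
	"os"
	"strconv"
)

// shouldEvict reports whether the free-space fraction has dropped below the
// desired percentage, at which point gitserver would begin evicting
// least-used repository clones.
func shouldEvict(freeBytes, totalBytes uint64) bool {
	desired := 10.0 // default percent free
	if v, err := strconv.ParseFloat(os.Getenv("SRC_REPOS_DESIRED_PERCENT_FREE"), 64); err == nil {
		desired = v
	}
	return float64(freeBytes)/float64(totalBytes)*100 < desired
}

func main() {
	fmt.Println(shouldEvict(50_000_000_000, 1_000_000_000_000)) // 5% free -> true
}
```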
Git commands running on each gitserver instance
A high value signals load. Refer to the [alerts reference](alerts#gitserver-running-git-commands) for 2 alerts related to this panel. To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100030` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Rate of git commands received
Per-second rate, per command. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100031` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Echo test command duration
Refer to the [alerts reference](alerts#gitserver-echo-command-duration-test) for 2 alerts related to this panel. To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100040` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Number of times a repo corruption has been identified
A non-zero value here indicates that a problem has been detected with the gitserver repository storage. Repository corruptions are never expected. This is a real issue. Gitserver should try to recover from them by recloning repositories, but this may take a while depending on repo size. Refer to the [alerts reference](alerts#gitserver-repo-corrupted) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100041` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Repository clone queue size
Refer to the [alerts reference](alerts#gitserver-repository-clone-queue-size) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100050` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Number of repositories on gitserver
This metric is only for informational purposes. It indicates the total number of repositories on gitserver. It does not indicate any problems with the instance. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100051` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*95th percentile gitservice request duration aggregate
A high value means any internal service trying to clone a repo from gitserver is slowed down. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100100` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*95th percentile gitservice request duration per shard
A high value means any internal service trying to clone a repo from gitserver is slowed down. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100101` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*95th percentile gitservice error request duration aggregate
95th percentile gitservice error request duration aggregate. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100110` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*95th percentile gitservice error request duration per shard
95th percentile gitservice error request duration per shard. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100111` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Aggregate gitservice request rate
Aggregate gitservice request rate. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100120` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Gitservice request rate per shard
Per shard gitservice request rate. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100121` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Aggregate gitservice request error rate
Aggregate gitservice request error rate. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100130` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Gitservice request error rate per shard
Per shard gitservice request error rate. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100131` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Aggregate gitservice requests running
Aggregate gitservice requests running. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100140` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Gitservice requests running per shard
Per shard gitservice requests running. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100141` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Janitor process is running
1 if the janitor process is currently running. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100200` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*95th percentile job run duration
95th percentile job run duration. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100210` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Failures over 5m (by job)
The rate of failures over 5m (by job). This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100220` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Repositories removed due to disk pressure
Repositories removed due to disk pressure. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100230` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Repositories removed because they are not defined in the DB
Repositories removed because they are not defined in the DB. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100240` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Successful sg maintenance jobs over 1h (by reason)
The rate of successful sg maintenance jobs, by the reason they were triggered. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100250` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Successful git prune jobs over 1h
The rate of successful git prune jobs over 1h, and whether they were skipped. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100260` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Mean time until first result is sent
Mean latency (time to first result) of gitserver search requests. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100300` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Mean search duration
Mean duration of gitserver search requests. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100301` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Rate of searches run by pod
The rate of searches executed on gitserver by pod. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100310` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Number of searches currently running by pod
The number of searches currently executing on gitserver by pod. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100311` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Number of concurrently running backend operations
The number of requests that are currently being handled by the gitserver backend layer, at the point in time of scraping. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100400` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Aggregate operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100410` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Aggregate successful operation duration distribution over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100411` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Aggregate operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100412` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Aggregate operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100413` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100420` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*99th percentile successful operation duration over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100421` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100422` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100423` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Aggregate graphql operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100500` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Aggregate successful graphql operation duration distribution over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100501` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Aggregate graphql operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100502` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Aggregate graphql operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100503` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Graphql operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100510` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*99th percentile successful graphql operation duration over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100511` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Graphql operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100512` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Graphql operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100513` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Read request rate over 1m (per instance)
Note (applies to every disk panel in this group): Disk statistics are per _device_, not per _service_. In certain environments (such as common docker-compose setups), gitserver could be one of _many services_ using this disk. These statistics are best interpreted as the load experienced by the device gitserver is using, not the load gitserver is solely responsible for causing.

None of these panels have related alerts, and all are managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source). To see a panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=<panel ID>` on your Sourcegraph instance.

| Panel | Description | Panel ID |
|-------|-------------|----------|
| Read request rate over 1m (per instance) | The number of read requests that were issued to the device per second. | `100600` |
| Write request rate over 1m (per instance) | The number of write requests that were issued to the device per second. | `100601` |
| Read throughput over 1m (per instance) | The amount of data that was read from the device per second. | `100610` |
| Write throughput over 1m (per instance) | The amount of data that was written to the device per second. | `100611` |
| Average read duration over 1m (per instance) | The average time for read requests issued to the device to be served, including time spent queued and time spent being serviced. | `100620` |
| Average write duration over 1m (per instance) | The average time for write requests issued to the device to be served, including time spent queued and time spent being serviced. | `100621` |
| Average read request size over 1m (per instance) | The average size of read requests that were issued to the device. | `100630` |
| Average write request size over 1m (per instance) | The average size of write requests that were issued to the device. | `100631` |
| Merged read request rate over 1m (per instance) | The number of read requests merged per second that were queued to the device. | `100640` |
| Merged writes request rate over 1m (per instance) | The number of write requests merged per second that were queued to the device. | `100641` |
| Average queue size over 1m (per instance) | The number of I/O operations queued or being serviced (see https://blog.actorsfit.com/a?ID=00200-428fa2ac-e338-4540-848c-af9a3eb1ebd2 for background on `avgqu-sz`). | `100650` |
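Because these panels mirror standard Linux per-device I/O accounting, you can sanity-check them from a shell on the gitserver host. A minimal sketch, assuming the `sysstat` package is installed and the repository volume is `sdb` (both assumptions; your device name will differ, and newer `iostat` releases label the queue column `aqu-sz` rather than `avgqu-sz`):

```bash
# Extended per-device statistics, refreshed every 5 seconds:
# r/s and w/s correspond to the read/write request rate panels,
# rkB/s and wkB/s to the throughput panels, and the queue-size
# column to the "Average queue size" panel.
iostat -x 5 sdb

# One-off snapshot of the raw kernel counters the exporters read from:
grep ' sdb ' /proc/diskstats
```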
The gRPC server panels below have no related alerts and are all managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source). To see a panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=<panel ID>` on your Sourcegraph instance.

| Panel | Description | Panel ID |
|-------|-------------|----------|
| Request rate across all methods over 2m | The number of gRPC requests received per second across all methods, aggregated across all instances. | `100700` |
| Request rate per-method over 2m | The number of gRPC requests received per second, broken out per method and aggregated across all instances. | `100701` |
| Error percentage across all methods over 2m | The percentage of gRPC requests that fail across all methods, aggregated across all instances. | `100710` |
| Error percentage per-method over 2m | The percentage of gRPC requests that fail per method, aggregated across all instances. | `100711` |
| 99th percentile response time per method over 2m | The 99th percentile response time per method, aggregated across all instances. | `100720` |
| 90th percentile response time per method over 2m | The 90th percentile response time per method, aggregated across all instances. | `100721` |
| 75th percentile response time per method over 2m | The 75th percentile response time per method, aggregated across all instances. | `100722` |
| 99.9th percentile total response size per method over 2m | The 99.9th percentile total per-RPC response size per method, aggregated across all instances. | `100730` |
| 90th percentile total response size per method over 2m | The 90th percentile total per-RPC response size per method, aggregated across all instances. | `100731` |
| 75th percentile total response size per method over 2m | The 75th percentile total per-RPC response size per method, aggregated across all instances. | `100732` |
| 99.9th percentile individual sent message size per method over 2m | The 99.9th percentile size of each individual protocol buffer sent by the service per method, aggregated across all instances. | `100740` |
| 90th percentile individual sent message size per method over 2m | The 90th percentile size of each individual protocol buffer sent by the service per method, aggregated across all instances. | `100741` |
| 75th percentile individual sent message size per method over 2m | The 75th percentile size of each individual protocol buffer sent by the service per method, aggregated across all instances. | `100742` |
| Average streaming response message count per-method over 2m | The average number of response messages sent during a streaming RPC method, broken out per method and aggregated across all instances. | `100750` |
| Response codes rate per-method over 2m | The rate of all generated gRPC response codes per method, aggregated across all instances. | `100760` |
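All the panel references in this document follow the same URL pattern, so lookups are easy to script. A minimal sketch, assuming your instance's external URL is `https://sourcegraph.example.com` (a placeholder), that you are signed in as a site admin, and that your desktop opens URLs via `xdg-open` (use `open` on macOS):

```bash
SRC_URL="https://sourcegraph.example.com"  # placeholder: your instance's external URL
DASHBOARD="gitserver/gitserver"            # dashboard slug from the links above
PANEL_ID="100700"                          # "Request rate across all methods over 2m"

# Opens the Grafana panel served behind Sourcegraph's /-/debug proxy.
xdg-open "${SRC_URL}/-/debug/grafana/d/${DASHBOARD}?viewPanel=${PANEL_ID}"
```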
The gRPC client panels below are aggregated across all "gitserver" clients. None has related alerts, and all are managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source). To see a panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=<panel ID>` on your Sourcegraph instance.

A note on the three "internal error" panels: internal errors are ones that appear to originate from the https://github.com/grpc/grpc-go library itself, rather than from any user-written application code. These errors can be caused by a variety of issues, and can originate from either the code-generated "gitserver" gRPC client or gRPC server. They might be solvable by adjusting the gRPC configuration, or they might indicate a bug in Sourcegraph's use of gRPC. When debugging, knowing that a particular error comes from the grpc-go library itself (an "internal error") as opposed to "normal" application code can be helpful when trying to fix it. Internal errors are detected via a very coarse heuristic (seeing if the error starts with `grpc:`, etc.), so it's possible that some gRPC-specific issues are not categorized as internal errors. One way to apply this heuristic by hand is sketched after the table below.

| Panel | Description | Panel ID |
|-------|-------------|----------|
| Client baseline error percentage across all methods over 2m | The percentage of gRPC requests that fail across all methods, regardless of whether or not there was an internal error. | `100800` |
| Client baseline error percentage per-method over 2m | The percentage of gRPC requests that fail per method, regardless of whether or not there was an internal error. | `100801` |
| Client baseline response codes rate per-method over 2m | The rate of all generated gRPC response codes per method, regardless of whether or not there was an internal error. | `100802` |
| Client-observed gRPC internal error percentage across all methods over 2m | The percentage of gRPC requests that appear to fail due to gRPC internal errors across all methods. | `100810` |
| Client-observed gRPC internal error percentage per-method over 2m | The percentage of gRPC requests that appear to fail due to gRPC internal errors per method. | `100811` |
| Client-observed gRPC internal error response code rate per-method over 2m | The rate of gRPC internal-error response codes per method. | `100812` |
| Client retry percentage across all methods over 2m | The percentage of gRPC requests that were retried across all methods. | `100900` |
| Client retry percentage per-method over 2m | The percentage of gRPC requests that were retried, broken out per method. | `100901` |
| Client retry count per-method over 2m | The count of gRPC requests that were retried, broken out per method. | `100902` |
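When triaging, the same coarse heuristic can be applied by hand: filter a client's logs for error messages carrying the `grpc:` prefix. A minimal sketch, assuming a Kubernetes deployment whose frontend pods carry the label `app=sourcegraph-frontend` and a Docker Compose container named `sourcegraph-frontend-0` (both names are assumptions; adjust for your deployment, and treat the match as a rough signal, not a precise count):

```bash
# Kubernetes: count recent log lines mentioning the "grpc:" error prefix
# that the internal-error panels key on.
kubectl logs -l app=sourcegraph-frontend --since=1h | grep -c 'grpc:'

# Docker Compose equivalent.
docker logs --since 1h sourcegraph-frontend-0 2>&1 | grep -c 'grpc:'
```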
#### Duration since last successful site configuration update (by instance)

The duration since the configuration client used by the "gitserver" service last successfully updated its site configuration. Long durations could indicate issues updating the site configuration. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101000` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*

#### Maximum duration since last successful site configuration update (all "gitserver" instances)

Refer to the [alerts reference](alerts#gitserver-gitserver-site-configuration-duration-since-last-successful-update-by-instance) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101001` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*
The gitserver invocations panels below have no related alerts and are all managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence). To see a panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=<panel ID>` on your Sourcegraph instance.

| Panel | Panel ID |
|-------|----------|
| Aggregate invocations operations every 5m | `101100` |
| Aggregate successful invocations operation duration distribution over 5m | `101101` |
| Aggregate invocations operation errors every 5m | `101102` |
| Aggregate invocations operation error rate over 5m | `101103` |
| Invocations operations every 5m | `101110` |
| 99th percentile successful invocations operation duration over 5m | `101111` |
| Invocations operation errors every 5m | `101112` |
| Invocations operation error rate over 5m | `101113` |
| Aggregate invocations operations every 5m | `101200` |
| Aggregate successful invocations operation duration distribution over 5m | `101201` |
| Aggregate invocations operation errors every 5m | `101202` |
| Aggregate invocations operation error rate over 5m | `101203` |
| Invocations operations every 5m | `101210` |
| 99th percentile successful invocations operation duration over 5m | `101211` |
| Invocations operation errors every 5m | `101212` |
| Invocations operation error rate over 5m | `101213` |
The internal HTTP API panels below have no related alerts and are all managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source). To see a panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=<panel ID>` on your Sourcegraph instance.

| Panel | Description | Panel ID |
|-------|-------------|----------|
| Requests per second, by route, when status code is 200 | The number of healthy HTTP requests per second to the internal HTTP API. | `101300` |
| Requests per second, by route, when status code is not 200 | The number of unhealthy HTTP requests per second to the internal HTTP API. | `101301` |
| Requests per second, by status code | The number of HTTP requests per second by code. | `101302` |
| 95th percentile duration by route, when status code is 200 | The 95th percentile duration by route when the status code is 200. | `101310` |
| 95th percentile duration by route, when status code is not 200 | The 95th percentile duration by route when the status code is not 200. | `101311` |
The database connection panels below are managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source). To see a panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=<panel ID>` on your Sourcegraph instance.

| Panel | Alerts | Panel ID |
|-------|--------|----------|
| Maximum open | none | `101400` |
| Established | none | `101401` |
| Used | none | `101410` |
| Idle | none | `101411` |
| Mean blocked seconds per conn request | [2 alerts](alerts#gitserver-mean-blocked-seconds-per-conn-request) | `101420` |
| Closed by SetMaxIdleConns | none | `101430` |
| Closed by SetConnMaxLifetime | none | `101431` |
| Closed by SetConnMaxIdleTime | none | `101432` |
#### Container missing

This value is the number of times a container has not been seen for more than one minute. If you observe this value change independent of deployment events (such as an upgrade), it could indicate pods are being OOM killed or terminated for some other reasons.

- **Kubernetes:**
  - Determine if the pod was OOM killed using `kubectl describe pod gitserver` (look for `OOMKilled: true`) and, if so, consider increasing the memory limit in the relevant `Deployment.yaml`.
  - Check the logs before the container restarted to see if there are `panic:` messages or similar using `kubectl logs -p gitserver`.
- **Docker Compose:**
  - Determine if the pod was OOM killed using `docker inspect -f '{{json .State}}' gitserver` (look for `"OOMKilled":true`) and, if so, consider increasing the memory limit of the gitserver container in `docker-compose.yml`.
  - Check the logs before the container restarted to see if there are `panic:` messages or similar using `docker logs gitserver` (note this will include logs from the previous and currently running container).

This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101500` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).* The steps above are gathered into a single sketch below.
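A copy-pasteable recap of the checks above. The `gitserver` pod/container names come straight from the steps above but may differ in your deployment:

```bash
# --- Kubernetes ---
# Was the pod OOM killed? Look for "OOMKilled: true" in the output.
kubectl describe pod gitserver | grep -i -A2 'last state\|oomkilled'

# Logs from the previous (crashed) container, looking for panics.
kubectl logs -p gitserver | grep -n 'panic:'

# --- Docker Compose ---
# Was the container OOM killed? Look for "OOMKilled":true.
docker inspect -f '{{json .State}}' gitserver

# Logs include both the previous and the currently running container.
docker logs gitserver 2>&1 | grep -n 'panic:'
```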
#### Container cpu usage total (1m average) across all cores by instance

Refer to the [alerts reference](alerts#gitserver-container-cpu-usage) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101501` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*

#### Container memory usage by instance

Refer to the [alerts reference](alerts#gitserver-container-memory-usage) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101502` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*

#### Filesystem reads and writes rate by instance over 1h

This value indicates the number of filesystem read and write operations by containers of this service. When extremely high, this can indicate a resource usage problem, or can cause problems with the service itself, especially if high values or spikes correlate with gitserver issues. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101503` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*

#### Container cpu usage total (90th percentile over 1d) across all cores by instance

Refer to the [alerts reference](alerts#gitserver-provisioning-container-cpu-usage-long-term) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101600` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*

#### Container memory usage (1d maximum) by instance

Git Server is expected to use up all the memory it is provided. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101601` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*

#### Container cpu usage total (5m maximum) across all cores by instance

Refer to the [alerts reference](alerts#gitserver-provisioning-container-cpu-usage-short-term) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101610` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*

#### Container memory usage (5m maximum) by instance

Git Server is expected to use up all the memory it is provided. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101611` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*

#### Container OOMKILL events total by instance

This value indicates the total number of times the container main process or child processes were terminated by the OOM killer. When it occurs frequently, it is an indicator of underprovisioning. Refer to the [alerts reference](alerts#gitserver-container-oomkill-events-total) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101612` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*

#### Maximum active goroutines

A high value here indicates a possible goroutine leak. Refer to the [alerts reference](alerts#gitserver-go-goroutines) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101700` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*

#### Maximum go garbage collection duration

Refer to the [alerts reference](alerts#gitserver-go-gc-duration-seconds) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101701` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*

#### Percentage pods available

Refer to the [alerts reference](alerts#gitserver-pods-available-percentage) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101800` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*
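To spot-check the container CPU and memory panels above without Grafana, the container runtimes expose the same counters directly. A minimal sketch; `kubectl top` requires metrics-server to be installed in the cluster, and the `gitserver` names are the same assumptions as in the troubleshooting steps above:

```bash
# Kubernetes: current CPU/memory per pod (requires metrics-server).
kubectl top pod gitserver

# Docker Compose: live CPU/memory/IO for the container.
docker stats --no-stream gitserver
```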
## Postgres

Postgres metrics, exported from postgres_exporter (not available on server).

To see this dashboard, visit `/-/debug/grafana/d/postgres/postgres` on your Sourcegraph instance. All panels below are managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure). To see a panel, visit `/-/debug/grafana/d/postgres/postgres?viewPanel=<panel ID>`.

| Panel | Description | Alerts | Panel ID |
|-------|-------------|--------|----------|
| Active connections (postgres: connections) | | [1 alert](alerts#postgres-connections) | `100000` |
| Connection in use | | [2 alerts](alerts#postgres-usage-connections-percentage) | `100001` |
| Maximum transaction durations | | [1 alert](alerts#postgres-transaction-durations) | `100002` |
| Database availability | A non-zero value indicates the database is online. | [1 alert](alerts#postgres-postgres-up) | `100100` |
| Invalid indexes (unusable by the query planner) | A non-zero value indicates that Postgres failed to build an index; expect degraded performance until the index is manually rebuilt. | [1 alert](alerts#postgres-invalid-indexes) | `100101` |
| Errors scraping postgres exporter | This value indicates issues retrieving metrics from postgres_exporter. | [1 alert](alerts#postgres-pg-exporter-err) | `100110` |
| Active schema migration | A 0 value indicates that no migration is in progress. | [1 alert](alerts#postgres-migration-in-progress) | `100111` |
| Table size | Total size of this table. | none | `100200` |
| Table bloat ratio | Estimated bloat ratio of this table (high bloat = high overhead). | none | `100201` |
| Index size | Total size of this index. | none | `100210` |
| Index bloat ratio | Estimated bloat ratio of this index (high bloat = high overhead). | none | `100211` |
| Container cpu usage total (90th percentile over 1d) across all cores by instance | | [1 alert](alerts#postgres-provisioning-container-cpu-usage-long-term) | `100300` |
| Container memory usage (1d maximum) by instance | | [1 alert](alerts#postgres-provisioning-container-memory-usage-long-term) | `100301` |
| Container cpu usage total (5m maximum) across all cores by instance | | [1 alert](alerts#postgres-provisioning-container-cpu-usage-short-term) | `100310` |
| Container memory usage (5m maximum) by instance | | [1 alert](alerts#postgres-provisioning-container-memory-usage-short-term) | `100311` |
| Container OOMKILL events total by instance | The total number of times the container main process or child processes were terminated by the OOM killer; frequent occurrences indicate underprovisioning. | [1 alert](alerts#postgres-container-oomkill-events-total) | `100312` |
| Percentage pods available | | [1 alert](alerts#postgres-pods-available-percentage) | `100400` |
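The "Active connections" and "Invalid indexes" panels can be cross-checked directly against Postgres. A minimal sketch, assuming you can reach the database with `psql` (connection details are deployment-specific; in Kubernetes you would typically `kubectl exec` into the database pod first):

```bash
# Current connection count (compare with the "Active connections" panel).
psql -c "SELECT count(*) FROM pg_stat_activity;"

# Indexes the query planner cannot use (compare with the "Invalid indexes"
# panel); pg_index.indisvalid is false for indexes whose build failed.
psql -c "SELECT c.relname FROM pg_index i JOIN pg_class c ON c.oid = i.indexrelid WHERE NOT i.indisvalid;"
```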
## Precise Code Intel Worker

Handles conversion of uploaded precise code intelligence bundles.

To see this dashboard, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker` on your Sourcegraph instance.

### Precise Code Intel Worker: Codeintel: LSIF uploads

#### Unprocessed upload record queue size (precise-code-intel-worker: codeintel_upload_queue_size)
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100000` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*

#### Unprocessed upload record queue growth rate over 30m

This value compares the rate of enqueues against the rate of finished jobs.

- A value < 1 indicates that the processing rate is greater than the enqueue rate.
- A value = 1 indicates that the processing rate equals the enqueue rate.
- A value > 1 indicates that the processing rate is less than the enqueue rate.

This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100001` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*

#### Unprocessed upload record queue longest time in queue

Refer to the [alerts reference](alerts#precise-code-intel-worker-codeintel-upload-queued-max-age) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100002` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*
The handler, store, and client operation panels below have no related alerts and are all managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence). To see a panel, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=<panel ID>` on your Sourcegraph instance.

| Panel | Panel ID |
|-------|----------|
| Handler active handlers | `100100` |
| Sum of upload sizes in bytes being processed by each precise code-intel worker instance | `100101` |
| Handler operations every 5m | `100110` |
| Aggregate successful handler operation duration distribution over 5m | `100111` |
| Handler operation errors every 5m | `100112` |
| Handler operation error rate over 5m | `100113` |
| Aggregate store operations every 5m | `100200` |
| Aggregate successful store operation duration distribution over 5m | `100201` |
| Aggregate store operation errors every 5m | `100202` |
| Aggregate store operation error rate over 5m | `100203` |
| Store operations every 5m | `100210` |
| 99th percentile successful store operation duration over 5m | `100211` |
| Store operation errors every 5m | `100212` |
| Store operation error rate over 5m | `100213` |
| Aggregate store operations every 5m | `100300` |
| Aggregate successful store operation duration distribution over 5m | `100301` |
| Aggregate store operation errors every 5m | `100302` |
| Aggregate store operation error rate over 5m | `100303` |
| Store operations every 5m | `100310` |
| 99th percentile successful store operation duration over 5m | `100311` |
| Store operation errors every 5m | `100312` |
| Store operation error rate over 5m | `100313` |
| Store operations every 5m | `100400` |
| Aggregate successful store operation duration distribution over 5m | `100401` |
| Store operation errors every 5m | `100402` |
| Store operation error rate over 5m | `100403` |
| Aggregate client operations every 5m | `100500` |
| Aggregate successful client operation duration distribution over 5m | `100501` |
| Aggregate client operation errors every 5m | `100502` |
| Aggregate client operation error rate over 5m | `100503` |
| Client operations every 5m | `100510` |
| 99th percentile successful client operation duration over 5m | `100511` |
| Client operation errors every 5m | `100512` |
| Client operation error rate over 5m | `100513` |
| Aggregate store operations every 5m | `100600` |
| Aggregate successful store operation duration distribution over 5m | `100601` |
| Aggregate store operation errors every 5m | `100602` |
| Aggregate store operation error rate over 5m | `100603` |
| Store operations every 5m | `100610` |
| 99th percentile successful store operation duration over 5m | `100611` |
| Store operation errors every 5m | `100612` |
| Store operation error rate over 5m | `100613` |
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100700` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Established
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100701` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Used
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100710` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Idle
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100711` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Mean blocked seconds per conn request
Refer to the [alerts reference](alerts#precise-code-intel-worker-mean-blocked-seconds-per-conn-request) for 2 alerts related to this panel. To see this panel, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100720` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Closed by SetMaxIdleConns
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100730` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Closed by SetConnMaxLifetime
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100731` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Closed by SetConnMaxIdleTime
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100732` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*
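The database-connection panels above (`Maximum open`, `Established`, `Used`, `Idle`, and the `Closed by ...` counters) roughly correspond to Go's standard `database/sql` pool accounting. As a point of reference, here is a minimal, self-contained sketch of those knobs; the DSN, driver choice, and limits are illustrative assumptions, not Sourcegraph's actual configuration:

```go
package main

import (
	"database/sql"
	"fmt"
	"time"

	_ "github.com/lib/pq" // illustrative Postgres driver choice
)

func main() {
	// Open a pooled handle. The DSN below is a placeholder.
	db, err := sql.Open("postgres", "postgres://sg:sg@pgsql:5432/sg?sslmode=disable")
	if err != nil {
		panic(err)
	}
	defer db.Close()

	// "Maximum open": hard ceiling on concurrent connections.
	db.SetMaxOpenConns(30)
	// Idle connections trimmed past this count show up in the
	// "Closed by SetMaxIdleConns" panel.
	db.SetMaxIdleConns(10)
	// Age- and idle-time-based expiry feed the "Closed by
	// SetConnMaxLifetime" / "Closed by SetConnMaxIdleTime" panels.
	db.SetConnMaxLifetime(time.Hour)
	db.SetConnMaxIdleTime(5 * time.Minute)

	// db.Stats() exposes the same counters the dashboard graphs:
	// OpenConnections ("Established"), InUse ("Used"), Idle ("Idle"),
	// and WaitDuration, the quantity behind "Mean blocked seconds
	// per conn request".
	s := db.Stats()
	fmt.Println(s.OpenConnections, s.InUse, s.Idle, s.WaitDuration)
}
```

Sustained growth in `WaitDuration` relative to connection requests is effectively what the two `mean blocked seconds per conn request` alerts watch for.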
#### Container missing

This value is the number of times a container has not been seen for more than one minute. If you observe this value change independent of deployment events (such as an upgrade), it could indicate pods are being OOM killed or terminated for some other reason.

- **Kubernetes:**
  - Determine if the pod was OOM killed using `kubectl describe pod precise-code-intel-worker` (look for `OOMKilled: true`) and, if so, consider increasing the memory limit in the relevant `Deployment.yaml`.
  - Check the logs before the container restarted to see if there are `panic:` messages or similar using `kubectl logs -p precise-code-intel-worker`.
- **Docker Compose:**
  - Determine if the container was OOM killed using `docker inspect -f '{{json .State}}' precise-code-intel-worker` (look for `"OOMKilled":true`) and, if so, consider increasing the memory limit of the precise-code-intel-worker container in `docker-compose.yml`.
  - Check the logs before the container restarted to see if there are `panic:` messages or similar using `docker logs precise-code-intel-worker` (note this will include logs from the previous and currently running container).

This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100800` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*

#### Container cpu usage total (1m average) across all cores by instance
Refer to the [alerts reference](alerts#precise-code-intel-worker-container-cpu-usage) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100801` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Container memory usage by instance
Refer to the [alerts reference](alerts#precise-code-intel-worker-container-memory-usage) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100802` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Filesystem reads and writes rate by instance over 1h
This value indicates the number of filesystem read and write operations by containers of this service. When extremely high, this can indicate a resource usage problem, or can cause problems with the service itself, especially if high values or spikes correlate with `{{CONTAINER_NAME}}` issues. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100803` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*

#### Container cpu usage total (90th percentile over 1d) across all cores by instance
Refer to the [alerts reference](alerts#precise-code-intel-worker-provisioning-container-cpu-usage-long-term) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100900` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Container memory usage (1d maximum) by instance
Refer to the [alerts reference](alerts#precise-code-intel-worker-provisioning-container-memory-usage-long-term) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100901` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Container cpu usage total (5m maximum) across all cores by instance
Refer to the [alerts reference](alerts#precise-code-intel-worker-provisioning-container-cpu-usage-short-term) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100910` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Container memory usage (5m maximum) by instance
Refer to the [alerts reference](alerts#precise-code-intel-worker-provisioning-container-memory-usage-short-term) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100911` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Container OOMKILL events total by instance
This value indicates the total number of times the container's main process or child processes were terminated by the OOM killer. When it occurs frequently, it is an indicator of underprovisioning. Refer to the [alerts reference](alerts#precise-code-intel-worker-container-oomkill-events-total) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100912` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*

#### Maximum active goroutines
A high value here indicates a possible goroutine leak. Refer to the [alerts reference](alerts#precise-code-intel-worker-go-goroutines) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=101000` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*
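To confirm a suspected leak from inside a Go process, the standard library alone is enough. The sketch below is generic Go, not Sourcegraph-specific code, and the 30-second sampling window is an arbitrary choice:

```go
package main

import (
	"fmt"
	"os"
	"runtime"
	"runtime/pprof"
	"time"
)

func main() {
	// Sample the goroutine count twice under steady load; a count
	// that only ever grows is the leak signature this panel alerts on.
	before := runtime.NumGoroutine()
	time.Sleep(30 * time.Second)
	after := runtime.NumGoroutine()
	fmt.Printf("goroutines: %d -> %d\n", before, after)

	// Dump all goroutine stacks grouped by creation site to see
	// where the extra goroutines are parked.
	if after > before {
		pprof.Lookup("goroutine").WriteTo(os.Stdout, 1)
	}
}
```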
#### Maximum go garbage collection duration

Refer to the [alerts reference](alerts#precise-code-intel-worker-go-gc-duration-seconds) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=101001` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*

#### Percentage pods available
Refer to the [alerts reference](alerts#precise-code-intel-worker-pods-available-percentage) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=101100` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*

## Redis

Metrics from both redis databases.
To see this dashboard, visit `/-/debug/grafana/d/redis/redis` on your Sourcegraph instance.

### Redis: Redis Store

#### redis: redis-store_up

Redis-store availability
A value of 1 indicates the service is currently running. Refer to the [alerts reference](alerts#redis-redis-store-up) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/redis/redis?viewPanel=100000` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*
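Since this is a simple up/down metric, a manual cross-check against the Redis service itself can be useful when the panel reads 0. A minimal sketch using the `go-redis` client; the address is an assumption and should be replaced with your deployment's redis-store endpoint:

```go
package main

import (
	"context"
	"fmt"

	"github.com/redis/go-redis/v9"
)

func main() {
	ctx := context.Background()

	// Address assumes the default in-cluster service name; adjust
	// for your deployment.
	rdb := redis.NewClient(&redis.Options{Addr: "redis-store:6379"})
	defer rdb.Close()

	// A successful PING corresponds to this panel reading 1.
	pong, err := rdb.Ping(ctx).Result()
	if err != nil {
		fmt.Println("redis-store unreachable:", err)
		return
	}
	fmt.Println("redis-store reply:", pong) // "PONG"
}
```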
### Redis: Redis Cache

#### redis: redis-cache_up

Redis-cache availability

A value of 1 indicates the service is currently running. Refer to the [alerts reference](alerts#redis-redis-cache-up) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/redis/redis?viewPanel=100100` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*

#### Container cpu usage total (90th percentile over 1d) across all cores by instance
Refer to the [alerts reference](alerts#redis-provisioning-container-cpu-usage-long-term) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/redis/redis?viewPanel=100200` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Container memory usage (1d maximum) by instance
Refer to the [alerts reference](alerts#redis-provisioning-container-memory-usage-long-term) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/redis/redis?viewPanel=100201` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Container cpu usage total (5m maximum) across all cores by instance
Refer to the [alerts reference](alerts#redis-provisioning-container-cpu-usage-short-term) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/redis/redis?viewPanel=100210` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Container memory usage (5m maximum) by instance
Refer to the [alerts reference](alerts#redis-provisioning-container-memory-usage-short-term) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/redis/redis?viewPanel=100211` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Container OOMKILL events total by instance
This value indicates the total number of times the container's main process or child processes were terminated by the OOM killer. When it occurs frequently, it is an indicator of underprovisioning. Refer to the [alerts reference](alerts#redis-container-oomkill-events-total) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/redis/redis?viewPanel=100212` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*

#### Container cpu usage total (90th percentile over 1d) across all cores by instance
Refer to the [alerts reference](alerts#redis-provisioning-container-cpu-usage-long-term) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/redis/redis?viewPanel=100300` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Container memory usage (1d maximum) by instance
Refer to the [alerts reference](alerts#redis-provisioning-container-memory-usage-long-term) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/redis/redis?viewPanel=100301` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Container cpu usage total (5m maximum) across all cores by instance
Refer to the [alerts reference](alerts#redis-provisioning-container-cpu-usage-short-term) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/redis/redis?viewPanel=100310` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Container memory usage (5m maximum) by instance
Refer to the [alerts reference](alerts#redis-provisioning-container-memory-usage-short-term) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/redis/redis?viewPanel=100311` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Container OOMKILL events total by instance
This value indicates the total number of times the container's main process or child processes were terminated by the OOM killer. When it occurs frequently, it is an indicator of underprovisioning. Refer to the [alerts reference](alerts#redis-container-oomkill-events-total) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/redis/redis?viewPanel=100312` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*

#### Percentage pods available
Refer to the [alerts reference](alerts#redis-pods-available-percentage) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/redis/redis?viewPanel=100400` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Percentage pods available
Refer to the [alerts reference](alerts#redis-pods-available-percentage) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/redis/redis?viewPanel=100500` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*

## Worker

Manages background processes.
To see this dashboard, visit `/-/debug/grafana/d/worker/worker` on your Sourcegraph instance.

### Worker: Active jobs

#### worker: worker_job_count

Number of worker instances running each job
The number of worker instances running each job type. It is necessary for each job type to be managed by at least one worker instance. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100000` on your Sourcegraph instance.Number of worker instances running the codeintel-upload-janitor job
Refer to the [alerts reference](alerts#worker-worker-job-codeintel-upload-janitor-count) for 2 alerts related to this panel. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100010` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Number of worker instances running the codeintel-commitgraph-updater job
Refer to the [alerts reference](alerts#worker-worker-job-codeintel-commitgraph-updater-count) for 2 alerts related to this panel. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100011` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Number of worker instances running the codeintel-autoindexing-scheduler job
Refer to the [alerts reference](alerts#worker-worker-job-codeintel-autoindexing-scheduler-count) for 2 alerts related to this panel. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100012` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Percentage of database records encrypted at rest
Percentage of encrypted database records. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100100` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*

#### Database records encrypted every 5m

Number of database records encrypted every 5m. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100101` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*

#### Database records decrypted every 5m

Number of database records decrypted every 5m. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100102` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*

#### Encryption operation errors every 5m

Number of database record encryption/decryption errors every 5m. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100103` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*

#### Repository queue size
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100200` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Repository queue growth rate over 30m
This value compares the rate of enqueues against the rate of finished jobs.

- A value < 1 indicates that the process rate > enqueue rate
- A value = 1 indicates that the process rate = enqueue rate
- A value > 1 indicates that the process rate < enqueue rate

A short worked example of this ratio follows this entry. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100201` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*
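Because a *higher* value is worse here, the inverted reading can trip people up. The tiny sketch below (illustrative numbers only, not real metric values) shows how the ratio behaves; the same reading applies to the other queue growth rate panels in this dashboard:

```go
package main

import "fmt"

// growthRate mirrors the panel's comparison: jobs enqueued over a
// window divided by jobs finished in the same window.
func growthRate(enqueued, finished float64) float64 {
	return enqueued / finished
}

func main() {
	// 120 enqueued vs 150 finished in 30m: ratio 0.8 (< 1), the
	// backlog is shrinking.
	fmt.Println(growthRate(120, 150))
	// 300 enqueued vs 150 finished: ratio 2 (> 1), the backlog is
	// growing even though workers are busy.
	fmt.Println(growthRate(300, 150))
}
```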
#### Repository queue longest time in queue

Refer to the [alerts reference](alerts#worker-codeintel-commit-graph-queued-max-age) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100202` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*

#### Update operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100300` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate successful update operation duration distribution over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100301` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Update operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100302` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Update operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100303` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Dependency index job queue size
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100400` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Dependency index job queue growth rate over 30m
This value compares the rate of enqueues against the rate of finished jobs.

- A value < 1 indicates that the process rate > enqueue rate
- A value = 1 indicates that the process rate = enqueue rate
- A value > 1 indicates that the process rate < enqueue rate

This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100401` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*

#### Dependency index job queue longest time in queue
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100402` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Handler active handlers
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100500` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Handler operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100510` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate successful handler operation duration distribution over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100511` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Handler operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100512` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Handler operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100513` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Auto-indexing job scheduler operations every 10m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100600` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate successful auto-indexing job scheduler operation duration distribution over 10m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100601` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Auto-indexing job scheduler operation errors every 10m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100602` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Auto-indexing job scheduler operation error rate over 10m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100603` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate store operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100700` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate successful store operation duration distribution over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100701` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate store operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100702` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate store operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100703` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Store operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100710` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*99th percentile successful store operation duration over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100711` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Store operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100712` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Store operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100713` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate store operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100800` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate successful store operation duration distribution over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100801` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate store operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100802` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate store operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100803` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Store operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100810` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*99th percentile successful store operation duration over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100811` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Store operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100812` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Store operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100813` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Store operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100900` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate successful store operation duration distribution over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100901` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Store operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100902` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Store operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100903` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate client operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101000` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate successful client operation duration distribution over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101001` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate client operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101002` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate client operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101003` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Client operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101010` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*99th percentile successful client operation duration over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101011` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Client operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101012` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Client operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101013` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate insert operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101100` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate successful insert operation duration distribution over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101101` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate insert operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101102` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate insert operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101103` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Insert operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101110` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*99th percentile successful insert operation duration over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101111` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Insert operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101112` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Insert operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101113` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Total number of user permissions syncs
Indicates the total number of user permissions syncs completed. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101200` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*

#### Number of user permissions syncs [5m]

Indicates the number of user permissions syncs completed. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101201` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*

#### Number of first user permissions syncs [5m]

Indicates the number of permissions syncs done for the first time for the user. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101202` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*

#### Total number of repo permissions syncs

Indicates the total number of repo permissions syncs completed. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101210` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*

#### Number of repo permissions syncs over 5m

Indicates the number of repo permissions syncs completed. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101211` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*

#### Number of first repo permissions syncs over 5m

Indicates the number of permissions syncs done for the first time for the repo. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101212` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*

#### Max duration between two consecutive permissions sync for user

Indicates the max delay between two consecutive permissions syncs for a user during the period. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101220` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*

#### Max duration between two consecutive permissions sync for repo

Indicates the max delay between two consecutive permissions syncs for a repo during the period. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101221` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*

#### Max duration between user creation and first permissions sync

Indicates the max delay between user creation and their first permissions sync. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101230` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*

#### Max duration between repo creation and first permissions sync over 1m

Indicates the max delay between repo creation and its first permissions sync. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101231` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*

#### Number of permissions found during user/repo permissions sync

Indicates the number of permissions found during user/repo permissions syncs. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101240` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*

#### Average number of permissions found during permissions sync per user/repo

Indicates the average number of permissions found during permissions syncs per user/repo. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101241` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*

#### Number of entities with outdated permissions
Refer to the [alerts reference](alerts#worker-perms-syncer-outdated-perms) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101250` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*95th permissions sync duration
Refer to the [alerts reference](alerts#worker-perms-syncer-sync-duration) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101260` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Permissions sync error rate
Refer to the [alerts reference](alerts#worker-perms-syncer-sync-errors) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101270` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Total number of repos scheduled for permissions sync
Indicates how many repositories have been scheduled for a permissions sync. More details on repository permissions synchronization are available [here](https://sourcegraph.com/docs/admin/permissions/syncing#scheduling). This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101271` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*

#### Aggregate graphql operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101300` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Aggregate successful graphql operation duration distribution over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101301` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Aggregate graphql operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101302` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Aggregate graphql operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101303` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Graphql operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101310` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*99th percentile successful graphql operation duration over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101311` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Graphql operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101312` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Graphql operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101313` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Aggregate store operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101400` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Aggregate successful store operation duration distribution over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101401` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Aggregate store operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101402` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Aggregate store operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101403` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Store operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101410` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*99th percentile successful store operation duration over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101411` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Store operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101412` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Store operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101413` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Aggregate service operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101500` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Aggregate successful service operation duration distribution over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101501` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Aggregate service operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101502` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Aggregate service operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101503` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Service operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101510` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*99th percentile successful service operation duration over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101511` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Service operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101512` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Service operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101513` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Store operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101600` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*99th percentile successful store operation duration over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101601` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Store operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101602` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Store operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101603` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Store operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101700` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*99th percentile successful store operation duration over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101701` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Store operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101702` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Store operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101703` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Store operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101800` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*99th percentile successful store operation duration over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101801` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Store operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101802` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Store operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101803` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Store operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101900` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*99th percentile successful store operation duration over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101901` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Store operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101902` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Store operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101903` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Unprocessed executor job queue size
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=102000` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Unprocessed executor job queue growth rate over 30m
This value compares the rate of enqueues against the rate of finished jobs for the selected queue.

- A value < 1 indicates that the process rate > enqueue rate
- A value = 1 indicates that the process rate = enqueue rate
- A value > 1 indicates that the process rate < enqueue rate

This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=102001` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*

#### Unprocessed executor job queue longest time in queue
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=102002` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Lsif upload records reset to queued state every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=102100` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Lsif upload records reset to errored state every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=102101` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Lsif upload operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=102102` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Lsif index records reset to queued state every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=102200` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Lsif index records reset to errored state every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=102201` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Lsif index operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=102202` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Lsif dependency index records reset to queued state every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=102300` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Lsif dependency index records reset to errored state every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=102301` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Lsif dependency index operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=102302` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Code insights query runner queue queue size
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=102400` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Code insights query runner queue queue growth rate over 30m
This value compares the rate of enqueues against the rate of finished jobs.

- A value < 1 indicates that the process rate is greater than the enqueue rate.
- A value = 1 indicates that the process rate equals the enqueue rate.
- A value > 1 indicates that the process rate is less than the enqueue rate.

This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=102401` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Handler active handlers
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=102500` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Handler operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=102510` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Aggregate successful handler operation duration distribution over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=102511` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Handler operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=102512` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Handler operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=102513` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Insights query runner queue records reset to queued state every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=102600` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Insights query runner queue records reset to errored state every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=102601` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Insights query runner queue operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=102602` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Aggregate store operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=102700` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Aggregate successful store operation duration distribution over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=102701` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Aggregate store operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=102702` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Aggregate store operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=102703` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Store operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=102710` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*99th percentile successful store operation duration over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=102711` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Store operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=102712` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Store operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=102713` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Insights queue size that is not utilized (not processing)
Any value on this panel indicates code insights is not processing queries from its queue. This observable and alert only fire if there are records in the queue and there have been no dequeue attempts for 30 minutes. Refer to the [alerts reference](alerts#worker-insights-queue-unutilized-size) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=102800` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*
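For intuition, a minimal sketch of the condition this observable encodes. The real check runs as a Prometheus alert rule, not application code, and the helper and sample values below are hypothetical.

```go
package main

import (
	"fmt"
	"time"
)

// queueUnutilized mirrors the alert condition described above: records are
// sitting in the queue, but nothing has attempted a dequeue for 30 minutes.
// Hypothetical helper for illustration only.
func queueUnutilized(queueSize int, lastDequeueAttempt time.Time) bool {
	return queueSize > 0 && time.Since(lastDequeueAttempt) > 30*time.Minute
}

func main() {
	stale := time.Now().Add(-45 * time.Minute)
	fmt.Println(queueUnutilized(12, stale)) // true: records queued, no dequeue attempts for 45m
}
```

Maximum open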
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=102900` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Established
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=102901` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Used
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=102910` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Idle
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=102911` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Mean blocked seconds per conn request
Refer to the [alerts reference](alerts#worker-mean-blocked-seconds-per-conn-request) for 2 alerts related to this panel. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=102920` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Closed by SetMaxIdleConns
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=102930` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Closed by SetConnMaxLifetime
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=102931` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Closed by SetConnMaxIdleTime
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=102932` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*
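The connection-pool panels above map directly onto Go's `database/sql` pool settings. A minimal sketch of where those knobs live; the values and DSN are illustrative, not Sourcegraph's actual configuration:

```go
package main

import (
	"database/sql"
	"fmt"
	"time"
)

// openPool shows the database/sql settings that the panels above observe.
// A real program would also register a driver (e.g. a Postgres driver).
func openPool(dsn string) (*sql.DB, error) {
	db, err := sql.Open("postgres", dsn)
	if err != nil {
		return nil, err
	}
	db.SetMaxOpenConns(30)                  // upper bound shown on the "Maximum open" panel
	db.SetMaxIdleConns(10)                  // surplus idle conns are "Closed by SetMaxIdleConns"
	db.SetConnMaxLifetime(30 * time.Minute) // old conns are "Closed by SetConnMaxLifetime"
	db.SetConnMaxIdleTime(5 * time.Minute)  // long-idle conns are "Closed by SetConnMaxIdleTime"
	return db, nil
}

func main() {
	db, err := openPool("postgres://sourcegraph@localhost/sg") // hypothetical DSN
	if err != nil {
		fmt.Println("open:", err) // without a registered driver this prints "sql: unknown driver ..."
		return
	}
	defer db.Close()
}
```

Container missing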
This value is the number of times a container has not been seen for more than one minute. If you observe this value change independent of deployment events (such as an upgrade), it could indicate pods are being OOM killed or terminated for some other reason.

- **Kubernetes:**
  - Determine if the pod was OOM killed using `kubectl describe pod worker` (look for `OOMKilled: true`) and, if so, consider increasing the memory limit in the relevant `Deployment.yaml`.
  - Check the logs before the container restarted to see if there are `panic:` messages or similar using `kubectl logs -p worker`.
- **Docker Compose:**
  - Determine if the container was OOM killed using `docker inspect -f '{{json .State}}' worker` (look for `"OOMKilled":true`) and, if so, consider increasing the memory limit of the worker container in `docker-compose.yml`.
  - Check the logs before the container restarted to see if there are `panic:` messages or similar using `docker logs worker` (note this will include logs from the previous and currently running container).

This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=103000` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Container cpu usage total (1m average) across all cores by instance
Refer to the [alerts reference](alerts#worker-container-cpu-usage) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=103001` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Container memory usage by instance
Refer to the [alerts reference](alerts#worker-container-memory-usage) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=103002` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Filesystem reads and writes rate by instance over 1h
This value indicates the number of filesystem read and write operations by containers of this service. When extremely high, this can indicate a resource usage problem, or can cause problems with the service itself, especially if high values or spikes correlate with issues in the worker containers. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=103003` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Container cpu usage total (90th percentile over 1d) across all cores by instance
Refer to the [alerts reference](alerts#worker-provisioning-container-cpu-usage-long-term) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=103100` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Container memory usage (1d maximum) by instance
Refer to the [alerts reference](alerts#worker-provisioning-container-memory-usage-long-term) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=103101` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Container cpu usage total (5m maximum) across all cores by instance
Refer to the [alerts reference](alerts#worker-provisioning-container-cpu-usage-short-term) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=103110` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Container memory usage (5m maximum) by instance
Refer to the [alerts reference](alerts#worker-provisioning-container-memory-usage-short-term) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=103111` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Container OOMKILL events total by instance
This value indicates the total number of times the container main process or child processes were terminated by the OOM killer. When this occurs frequently, it is an indicator of underprovisioning. Refer to the [alerts reference](alerts#worker-container-oomkill-events-total) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=103112` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Maximum active goroutines
A high value here indicates a possible goroutine leak. Refer to the [alerts reference](alerts#worker-go-goroutines) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=103200` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Maximum go garbage collection duration
Refer to the [alerts reference](alerts#worker-go-gc-duration-seconds) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=103201` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Percentage pods available
Refer to the [alerts reference](alerts#worker-pods-available-percentage) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=103300` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate store operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=103400` on your Sourcegraph instance. *Managed by the [Sourcegraph own team](https://handbook.sourcegraph.com/departments/engineering/teams/own).*Aggregate successful store operation duration distribution over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=103401` on your Sourcegraph instance. *Managed by the [Sourcegraph own team](https://handbook.sourcegraph.com/departments/engineering/teams/own).*Aggregate store operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=103402` on your Sourcegraph instance. *Managed by the [Sourcegraph own team](https://handbook.sourcegraph.com/departments/engineering/teams/own).*Aggregate store operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=103403` on your Sourcegraph instance. *Managed by the [Sourcegraph own team](https://handbook.sourcegraph.com/departments/engineering/teams/own).*Store operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=103410` on your Sourcegraph instance. *Managed by the [Sourcegraph own team](https://handbook.sourcegraph.com/departments/engineering/teams/own).*99th percentile successful store operation duration over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=103411` on your Sourcegraph instance. *Managed by the [Sourcegraph own team](https://handbook.sourcegraph.com/departments/engineering/teams/own).*Store operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=103412` on your Sourcegraph instance. *Managed by the [Sourcegraph own team](https://handbook.sourcegraph.com/departments/engineering/teams/own).*Store operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=103413` on your Sourcegraph instance. *Managed by the [Sourcegraph own team](https://handbook.sourcegraph.com/departments/engineering/teams/own).*Handler active handlers
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=103500` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Handler operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=103510` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Aggregate successful handler operation duration distribution over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=103511` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Handler operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=103512` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Handler operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=103513` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Own repo indexer queue records reset to queued state every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=103600` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Own repo indexer queue records reset to errored state every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=103601` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Own repo indexer queue operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=103602` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Own index job scheduler operations every 10m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=103700` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*99th percentile successful own index job scheduler operation duration over 10m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=103701` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Own index job scheduler operation errors every 10m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=103702` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Own index job scheduler operation error rate over 10m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=103703` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Duration since last successful site configuration update (by instance)
The duration since the configuration client used by the "worker" service last successfully updated its site configuration. Long durations could indicate issues updating the site configuration. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=103800` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Maximum duration since last successful site configuration update (all "worker" instances)
Refer to the [alerts reference](alerts#worker-worker-site-configuration-duration-since-last-successful-update-by-instance) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=103801` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*

## Repo Updater

Manages interaction with code hosts, instructs Gitserver to update repositories.

To see this dashboard, visit `/-/debug/grafana/d/repo-updater/repo-updater` on your Sourcegraph instance.

### Repo Updater: Repositories

#### repo-updater: syncer_sync_last_time

Time since last sync
A high value here indicates issues synchronizing repo metadata. If the value is persistently high, make sure all external services have valid tokens. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100000` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Time since oldest sync
Refer to the [alerts reference](alerts#repo-updater-src-repoupdater-max-sync-backoff) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100001` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Site level external service sync error rate
Refer to the [alerts reference](alerts#repo-updater-src-repoupdater-syncer-sync-errors-total) for 2 alerts related to this panel. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100002` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Repo metadata sync was started
Refer to the [alerts reference](alerts#repo-updater-syncer-sync-start) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100010` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*95th repositories sync duration
Refer to the [alerts reference](alerts#repo-updater-syncer-sync-duration) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100011` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*95th repositories source duration
Refer to the [alerts reference](alerts#repo-updater-source-duration) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100012` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Repositories synced
Refer to the [alerts reference](alerts#repo-updater-syncer-synced-repos) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100020` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Repositories sourced
Refer to the [alerts reference](alerts#repo-updater-sourced-repos) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100021` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Repositories purge failed
Refer to the [alerts reference](alerts#repo-updater-purge-failed) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100030` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Repositories scheduled due to hitting a deadline
Refer to the [alerts reference](alerts#repo-updater-sched-auto-fetch) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100040` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Repositories scheduled due to user traffic
Check repo-updater logs if this value is persistently high. This does not indicate anything if there are no user-added code hosts. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100041` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Repositories managed by the scheduler
Refer to the [alerts reference](alerts#repo-updater-sched-known-repos) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100050` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Rate of growth of update queue length over 5 minutes
Refer to the [alerts reference](alerts#repo-updater-sched-update-queue-length) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100051` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Scheduler loops
Refer to the [alerts reference](alerts#repo-updater-sched-loops) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100052` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Repos that haven't been fetched in more than 8 hours
Refer to the [alerts reference](alerts#repo-updater-src-repoupdater-stale-repos) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100060` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Repositories schedule error rate
Refer to the [alerts reference](alerts#repo-updater-sched-error) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100061` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*The total number of external services
Refer to the [alerts reference](alerts#repo-updater-src-repoupdater-external-services-total) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100100` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*The total number of queued sync jobs
Refer to the [alerts reference](alerts#repo-updater-repoupdater-queued-sync-jobs-total) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100110` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*The total number of completed sync jobs
Refer to the [alerts reference](alerts#repo-updater-repoupdater-completed-sync-jobs-total) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100111` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*The percentage of external services that have failed their most recent sync
Refer to the [alerts reference](alerts#repo-updater-repoupdater-errored-sync-jobs-percentage) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100112` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Remaining calls to GitHub graphql API before hitting the rate limit
Refer to the [alerts reference](alerts#repo-updater-github-graphql-rate-limit-remaining) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100120` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Remaining calls to GitHub rest API before hitting the rate limit
Refer to the [alerts reference](alerts#repo-updater-github-rest-rate-limit-remaining) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100121` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Remaining calls to GitHub search API before hitting the rate limit
Refer to the [alerts reference](alerts#repo-updater-github-search-rate-limit-remaining) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100122` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Time spent waiting for the GitHub graphql API rate limiter
Indicates how long we're waiting on the rate limit once it has been exceeded. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100130` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Time spent waiting for the GitHub rest API rate limiter
Indicates how long we're waiting on the rate limit once it has been exceeded. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100131` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Time spent waiting for the GitHub search API rate limiter
Indicates how long we're waiting on the rate limit once it has been exceeded. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100132` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Remaining calls to GitLab rest API before hitting the rate limit
Refer to the [alerts reference](alerts#repo-updater-gitlab-rest-rate-limit-remaining) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100140` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Time spent waiting for the GitLab rest API rate limiter
Indicates how long we're waiting on the rate limit once it has been exceeded. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100141` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*95th percentile time spent successfully waiting on our internal rate limiter
Indicates how long we're waiting on our internal rate limiter when communicating with a code host. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100150` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Rate of failures waiting on our internal rate limiter
The rate at which requests fail while waiting on our internal rate limiter. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100151` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Aggregate graphql operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100200` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Aggregate successful graphql operation duration distribution over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100201` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Aggregate graphql operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100202` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Aggregate graphql operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100203` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Graphql operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100210` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*99th percentile successful graphql operation duration over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100211` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Graphql operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100212` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Graphql operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100213` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Aggregate store operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100300` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Aggregate successful store operation duration distribution over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100301` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Aggregate store operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100302` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Aggregate store operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100303` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Store operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100310` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*99th percentile successful store operation duration over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100311` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Store operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100312` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Store operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100313` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Aggregate service operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100400` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Aggregate successful service operation duration distribution over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100401` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Aggregate service operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100402` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Aggregate service operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100403` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Service operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100410` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*99th percentile successful service operation duration over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100411` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Service operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100412` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Service operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100413` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Aggregate invocations operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100500` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate successful invocations operation duration distribution over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100501` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate invocations operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100502` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate invocations operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100503` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Invocations operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100510` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*99th percentile successful invocations operation duration over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100511` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Invocations operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100512` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Invocations operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100513` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate invocations operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100600` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate successful invocations operation duration distribution over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100601` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate invocations operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100602` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate invocations operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100603` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Invocations operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100610` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*99th percentile successful invocations operation duration over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100611` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Invocations operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100612` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Invocations operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100613` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Request rate across all methods over 2m
The number of gRPC requests received per second across all methods, aggregated across all instances. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100700` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Request rate per-method over 2m
The number of gRPC requests received per second broken out per method, aggregated across all instances. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100701` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Error percentage across all methods over 2m
The percentage of gRPC requests that fail across all methods, aggregated across all instances. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100710` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Error percentage per-method over 2m
The percentage of gRPC requests that fail per method, aggregated across all instances. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100711` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*99th percentile response time per method over 2m
The 99th percentile response time per method, aggregated across all instances. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100720` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*90th percentile response time per method over 2m
The 90th percentile response time per method, aggregated across all instances. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100721` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*75th percentile response time per method over 2m
The 75th percentile response time per method, aggregated across all instances. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100722` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*99.9th percentile total response size per method over 2m
The 99.9th percentile total per-RPC response size per method, aggregated across all instances. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100730` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*90th percentile total response size per method over 2m
The 90th percentile total per-RPC response size per method, aggregated across all instances. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100731` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*75th percentile total response size per method over 2m
The 75th percentile total per-RPC response size per method, aggregated across all instances. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100732` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*99.9th percentile individual sent message size per method over 2m
The 99.9th percentile size of every individual protocol buffer sent by the service per method, aggregated across all instances. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100740` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*90th percentile individual sent message size per method over 2m
The 90th percentile size of every individual protocol buffer sent by the service per method, aggregated across all instances. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100741` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*75th percentile individual sent message size per method over 2m
The 75th percentile size of every individual protocol buffer sent by the service per method, aggregated across all instances. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100742` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Average streaming response message count per-method over 2m
The average number of response messages sent during a streaming RPC method, broken out per method, aggregated across all instances. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100750` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Response codes rate per-method over 2m
The rate of all generated gRPC response codes per method, aggregated across all instances. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100760` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*Client baseline error percentage across all methods over 2m
#### Client baseline error percentage across all methods over 2m

The percentage of gRPC requests that fail across all methods (regardless of whether or not there was an internal error), aggregated across all "repo_updater" clients. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100800` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*

#### Client baseline error percentage per-method over 2m

The percentage of gRPC requests that fail per method (regardless of whether or not there was an internal error), aggregated across all "repo_updater" clients. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100801` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*

#### Client baseline response codes rate per-method over 2m

The rate of all generated gRPC response codes per method (regardless of whether or not there was an internal error), aggregated across all "repo_updater" clients. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100802` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*
#### Client-observed gRPC internal error percentage across all methods over 2m

The percentage of gRPC requests that appear to fail due to gRPC internal errors across all methods, aggregated across all "repo_updater" clients.

**Note**: Internal errors are ones that appear to originate from the https://github.com/grpc/grpc-go library itself, rather than from any user-written application code. These errors can be caused by a variety of issues, and can originate from either the code-generated "repo_updater" gRPC client or gRPC server. These errors might be solvable by adjusting the gRPC configuration, or they might indicate a bug in Sourcegraph's use of gRPC. When debugging, knowing that a particular error comes from the grpc-go library itself (an "internal error") as opposed to "normal" application code can be helpful when trying to fix it.

**Note**: Internal errors are detected via a very coarse heuristic (seeing if the error starts with `grpc:`, etc.). Because of this, it's possible that some gRPC-specific issues might not be categorized as internal errors.

This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100810` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*

#### Client-observed gRPC internal error percentage per-method over 2m

The percentage of gRPC requests that appear to fail due to gRPC internal errors per method, aggregated across all "repo_updater" clients.

**Note**: Internal errors are ones that appear to originate from the https://github.com/grpc/grpc-go library itself, rather than from any user-written application code. These errors can be caused by a variety of issues, and can originate from either the code-generated "repo_updater" gRPC client or gRPC server. These errors might be solvable by adjusting the gRPC configuration, or they might indicate a bug in Sourcegraph's use of gRPC. When debugging, knowing that a particular error comes from the grpc-go library itself (an "internal error") as opposed to "normal" application code can be helpful when trying to fix it.

**Note**: Internal errors are detected via a very coarse heuristic (seeing if the error starts with `grpc:`, etc.). Because of this, it's possible that some gRPC-specific issues might not be categorized as internal errors.

This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100811` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*

#### Client-observed gRPC internal error response code rate per-method over 2m

The rate of gRPC internal-error response codes per method, aggregated across all "repo_updater" clients.

**Note**: Internal errors are ones that appear to originate from the https://github.com/grpc/grpc-go library itself, rather than from any user-written application code. These errors can be caused by a variety of issues, and can originate from either the code-generated "repo_updater" gRPC client or gRPC server. These errors might be solvable by adjusting the gRPC configuration, or they might indicate a bug in Sourcegraph's use of gRPC. When debugging, knowing that a particular error comes from the grpc-go library itself (an "internal error") as opposed to "normal" application code can be helpful when trying to fix it.

**Note**: Internal errors are detected via a very coarse heuristic (seeing if the error starts with `grpc:`, etc.). Because of this, it's possible that some gRPC-specific issues might not be categorized as internal errors.

This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100812` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*
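Because this internal-error heuristic comes up in several panels above, a small illustration may help. The following is a minimal Go sketch of a coarse, prefix-based check in the spirit described above; it inspects only the error text, which is exactly why some gRPC-specific failures can go uncategorized. The function name and the extra `transport:` prefix are illustrative assumptions, not Sourcegraph's actual classifier.

```go
package main

import (
	"fmt"
	"strings"
)

// probablyInternalGRPCError reports whether err looks like it originated from
// the grpc-go library itself rather than from application code. Because it
// only inspects the error text, some gRPC-specific failures can slip through
// unclassified, as the note above warns.
func probablyInternalGRPCError(err error) bool {
	if err == nil {
		return false
	}
	msg := err.Error()
	return strings.HasPrefix(msg, "grpc:") ||
		strings.HasPrefix(msg, "transport:") // additional prefix; an assumption for illustration
}

func main() {
	fmt.Println(probablyInternalGRPCError(fmt.Errorf("grpc: the client connection is closing"))) // true
	fmt.Println(probablyInternalGRPCError(fmt.Errorf("repo not found")))                         // false
}
```

Failures that a check like this misses are not lost; they still count toward the baseline error panels, which include all failures regardless of classification.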
#### Client retry percentage across all methods over 2m

The percentage of gRPC requests that were retried across all methods, aggregated across all "repo_updater" clients. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100900` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*

#### Client retry percentage per-method over 2m

The percentage of gRPC requests that were retried, aggregated across all "repo_updater" clients, broken out per method. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100901` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*

#### Client retry count per-method over 2m

The count of gRPC requests that were retried, aggregated across all "repo_updater" clients, broken out per method. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100902` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*

#### Duration since last successful site configuration update (by instance)

The duration since the configuration client used by the "repo_updater" service last successfully updated its site configuration. Long durations could indicate issues updating the site configuration. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=101000` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*
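As a rough sketch of how a gauge like this can be exported, assuming the prometheus/client_golang library and illustrative metric and function names (this is not Sourcegraph's actual implementation):

```go
package main

import (
	"sync/atomic"
	"time"

	"github.com/prometheus/client_golang/prometheus"
)

// lastSuccessUnix holds the unix time (seconds) of the last successful config fetch.
var lastSuccessUnix atomic.Int64

// sinceLastUpdate is computed on every scrape as the time elapsed since the
// last successful update, so it grows steadily between successes.
var sinceLastUpdate = prometheus.NewGaugeFunc(prometheus.GaugeOpts{
	Name: "example_site_config_seconds_since_last_update", // illustrative name
	Help: "Seconds since the site configuration was last fetched successfully.",
}, func() float64 {
	return time.Since(time.Unix(lastSuccessUnix.Load(), 0)).Seconds()
})

// markConfigUpdated is called after each successful update; the gauge then
// drops toward zero and climbs again until the next success.
func markConfigUpdated() { lastSuccessUnix.Store(time.Now().Unix()) }

func main() {
	prometheus.MustRegister(sinceLastUpdate)
	markConfigUpdated()
	time.Sleep(10 * time.Millisecond) // the gauge now reports roughly 0.01 on scrape
}
```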
#### Maximum duration since last successful site configuration update (all "repo_updater" instances)

Refer to the [alerts reference](alerts#repo-updater-repo-updater-site-configuration-duration-since-last-successful-update-by-instance) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=101001` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*

#### Requests per second, by route, when status code is 200

The number of healthy HTTP requests per second to the internal HTTP API. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=101100` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*

#### Requests per second, by route, when status code is not 200

The number of unhealthy HTTP requests per second to the internal HTTP API. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=101101` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*

#### Requests per second, by status code

The number of HTTP requests per second by status code. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=101102` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*

#### 95th percentile duration by route, when status code is 200

The 95th percentile duration by route when the status code is 200. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=101110` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*

#### 95th percentile duration by route, when status code is not 200

The 95th percentile duration by route when the status code is not 200. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=101111` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*
#### Maximum open

This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=101200` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*

#### Established

This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=101201` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*

#### Used

This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=101210` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*

#### Idle

This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=101211` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*

#### Mean blocked seconds per conn request

Refer to the [alerts reference](alerts#repo-updater-mean-blocked-seconds-per-conn-request) for 2 alerts related to this panel. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=101220` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*

#### Closed by SetMaxIdleConns

This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=101230` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*

#### Closed by SetConnMaxLifetime

This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=101231` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*

#### Closed by SetConnMaxIdleTime

This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=101232` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*
#### Container missing

This value is the number of times a container has not been seen for more than one minute. If you observe this value change independently of deployment events (such as an upgrade), it could indicate pods are being OOM killed or terminated for some other reason.

- **Kubernetes:**
  - Determine if the pod was OOM killed using `kubectl describe pod repo-updater` (look for `OOMKilled: true`) and, if so, consider increasing the memory limit in the relevant `Deployment.yaml`.
  - Check the logs before the container restarted to see if there are `panic:` messages or similar using `kubectl logs -p repo-updater`.
- **Docker Compose:**
  - Determine if the container was OOM killed using `docker inspect -f '{{json .State}}' repo-updater` (look for `"OOMKilled":true`) and, if so, consider increasing the memory limit of the repo-updater container in `docker-compose.yml`.
  - Check the logs before the container restarted to see if there are `panic:` messages or similar using `docker logs repo-updater` (note this will include logs from the previous and currently running container).

This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=101300` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*
#### Container cpu usage total (1m average) across all cores by instance

Refer to the [alerts reference](alerts#repo-updater-container-cpu-usage) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=101301` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*

#### Container memory usage by instance

Refer to the [alerts reference](alerts#repo-updater-container-memory-usage) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=101302` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*

#### Filesystem reads and writes rate by instance over 1h

This value indicates the number of filesystem read and write operations by containers of this service. When extremely high, this can indicate a resource usage problem, or can cause problems with the service itself, especially if high values or spikes correlate with repo-updater issues. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=101303` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*

#### Container cpu usage total (90th percentile over 1d) across all cores by instance

Refer to the [alerts reference](alerts#repo-updater-provisioning-container-cpu-usage-long-term) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=101400` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*

#### Container memory usage (1d maximum) by instance

Refer to the [alerts reference](alerts#repo-updater-provisioning-container-memory-usage-long-term) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=101401` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*

#### Container cpu usage total (5m maximum) across all cores by instance

Refer to the [alerts reference](alerts#repo-updater-provisioning-container-cpu-usage-short-term) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=101410` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*

#### Container memory usage (5m maximum) by instance

Refer to the [alerts reference](alerts#repo-updater-provisioning-container-memory-usage-short-term) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=101411` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*

#### Container OOMKILL events total by instance

This value indicates the total number of times the container's main process or child processes were terminated by the OOM killer. When it occurs frequently, it is an indicator of underprovisioning. Refer to the [alerts reference](alerts#repo-updater-container-oomkill-events-total) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=101412` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*

#### Maximum active goroutines

A high value here indicates a possible goroutine leak. Refer to the [alerts reference](alerts#repo-updater-go-goroutines) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=101500` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*

#### Maximum go garbage collection duration

Refer to the [alerts reference](alerts#repo-updater-go-gc-duration-seconds) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=101501` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*

#### Percentage pods available

Refer to the [alerts reference](alerts#repo-updater-pods-available-percentage) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/repo-updater/repo-updater?viewPanel=101600` on your Sourcegraph instance. *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*

## Searcher

Performs unindexed searches (diff and commit search, text search for unindexed branches).
To see this dashboard, visit `/-/debug/grafana/d/searcher/searcher` on your Sourcegraph instance.

#### searcher: traffic

Requests per second by code over 10m

This graph is the average number of requests per second searcher is experiencing over the last 10 minutes. The code is the HTTP status code; 200 is success. We have a special code, "canceled", which is common when doing a large search request and we find enough results before searching all possible repos.

Note: a search query is translated into an unindexed search query per unique (repo, commit). This means a single user query may result in thousands of requests to searcher.

This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100000` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*
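To make that fan-out concrete, here is a hedged sketch of the idea: one user query expands into one unindexed request per unique (repo, commit) pair, issued concurrently. The `repoCommit` type and `searchUnindexed` helper are placeholders for illustration, not searcher's real API.

```go
package main

import (
	"fmt"
	"sync"
)

type repoCommit struct{ repo, commit string }

// searchUnindexed stands in for one RPC to a searcher replica; it returns a
// pretend result count so the example is self-contained.
func searchUnindexed(rc repoCommit, query string) int { return len(rc.repo) }

func main() {
	query := "TODO"
	// In practice this slice can hold thousands of entries, which is why one
	// user query can translate into thousands of searcher requests.
	targets := []repoCommit{{"org/a", "c0ffee"}, {"org/b", "deadbe"}}

	results := make([]int, len(targets))
	var wg sync.WaitGroup
	for i, rc := range targets {
		wg.Add(1)
		go func(i int, rc repoCommit) {
			defer wg.Done()
			results[i] = searchUnindexed(rc, query) // one request per (repo, commit)
		}(i, rc)
	}
	wg.Wait()
	fmt.Println(results)
}
```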
#### Requests per second per replica over 10m

This graph is the average number of requests per second searcher is experiencing over the last 10 minutes, broken down per replica. The code is the HTTP status code; 200 is success. We have a special code, "canceled", which is common when doing a large search request and we find enough results before searching all possible repos. Note: a search query is translated into an unindexed search query per unique (repo, commit). This means a single user query may result in thousands of requests to searcher. Refer to the [alerts reference](alerts#searcher-replica-traffic) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100001` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*

#### Amount of in-flight unindexed search requests (per instance)

This graph is the amount of in-flight unindexed search requests per instance. Consistently high numbers here indicate you may need to scale out searcher. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100010` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*

#### Unindexed search request errors every 5m by code

Refer to the [alerts reference](alerts#searcher-unindexed-search-request-errors) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100011` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*

#### Amount of in-flight unindexed search requests fetching code from gitserver (per instance)

Before we can search a commit we fetch the code from gitserver, then cache it for future search requests. This graph is the current number of search requests which are in the state of fetching code from gitserver. Generally this number should remain low since fetching code is fast, but expect bursts. In the case of instances with a monorepo you would expect this number to stay low for the duration of fetching the code (which in some cases can take many minutes). This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100100` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*

#### Amount of in-flight unindexed search requests waiting to fetch code from gitserver (per instance)

We limit the number of requests which can fetch code to prevent overwhelming gitserver. This gauge is the number of requests waiting to be allowed to speak to gitserver. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100101` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*
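A minimal sketch of the limiting scheme these two panels describe, assuming prometheus/client_golang and illustrative names: a fixed pool of fetch slots, one gauge for requests actively fetching, and one for requests waiting on a slot. This is not searcher's actual implementation.

```go
package main

import (
	"context"

	"github.com/prometheus/client_golang/prometheus"
)

var (
	fetching = prometheus.NewGauge(prometheus.GaugeOpts{
		Name: "example_fetching", Help: "Requests currently fetching code from gitserver."})
	waiting = prometheus.NewGauge(prometheus.GaugeOpts{
		Name: "example_fetch_waiting", Help: "Requests waiting for a fetch slot."})
	// slots caps concurrent gitserver fetches so a burst of searches cannot
	// overwhelm gitserver; requests beyond the cap queue here.
	slots = make(chan struct{}, 16)
)

func fetchWithLimit(ctx context.Context, fetch func() error) error {
	waiting.Inc()
	select {
	case slots <- struct{}{}: // acquired a fetch slot
		waiting.Dec()
	case <-ctx.Done(): // gave up before a slot freed (e.g. search canceled)
		waiting.Dec()
		return ctx.Err()
	}
	defer func() { <-slots }() // release the slot when done

	fetching.Inc()
	defer fetching.Dec()
	return fetch()
}

func main() {
	prometheus.MustRegister(fetching, waiting)
	_ = fetchWithLimit(context.Background(), func() error { return nil })
}
```

The "waiting" gauge is exactly the kind of signal the panel above plots: it stays near zero while fetch slots are plentiful and climbs when gitserver fetches become the bottleneck.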
#### Amount of unindexed search requests that failed while fetching code from gitserver over 10m (per instance)

This graph should be zero, since fetching happens in the background and will not be influenced by user timeouts, etc. Expected upticks in this graph occur during gitserver rollouts. If you regularly see this graph with non-zero values, please reach out to support. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100102` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*

#### Hybrid search final state over 10m

This graph is about our interactions with the search index (zoekt) to help complete unindexed search requests. Searcher will use indexed search for the files that have not changed between the unindexed commit and the index.

This graph should mostly be "success". The next most common state should be "search-canceled", which happens when result limits are hit or the user starts a new search. After that, the next most common should be "diff-too-large", which happens if the commit is too far from the indexed commit. Every other state should occur rarely and is likely a sign for further investigation. Note: on sourcegraph.com, "zoekt-list-missing" is also common, because sourcegraph.com indexes only a subset of repositories. For a full list of possible states, see [recordHybridFinalState](https://sourcegraph.com/search?q=context:global+repo:%5Egithub%5C.com/sourcegraph/sourcegraph%24+f:cmd/searcher+recordHybridFinalState).

This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100200` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*

#### Hybrid search retrying over 10m

The expectation is that this graph is mostly 0. It will trigger if the underlying index changes while a user's search is running, or if Zoekt goes down. Occasional bursts can be expected, but if this graph is regularly above 0 it is a sign for further investigation. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100201` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*
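Illustratively, the retry behavior described above can be thought of as the following loop. Every name here is a placeholder for the sake of the sketch, not searcher's actual code:

```go
package main

import (
	"errors"
	"fmt"
)

var errIndexChanged = errors.New("index changed mid-search")

// indexedCommit stands in for asking Zoekt which commit it has indexed.
func indexedCommit(repo string) string { return "c0ffee" }

// searchAgainstIndex stands in for a hybrid search attempt: search the diff
// between the requested commit and the indexed commit, failing with
// errIndexChanged if the index moved underneath us.
func searchAgainstIndex(repo, indexed string) error { return nil }

func hybridSearch(repo string) error {
	// Occasional retries are expected; a graph regularly above zero warrants
	// investigation, so the attempt budget is small.
	const maxAttempts = 3
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		indexed := indexedCommit(repo)
		err := searchAgainstIndex(repo, indexed)
		if errors.Is(err, errIndexChanged) {
			continue // the panel above counts exactly these retries
		}
		return err
	}
	return fmt.Errorf("gave up after %d attempts", maxAttempts)
}

func main() { fmt.Println(hybridSearch("org/a")) }
```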
#### Read request rate over 1m (per instance)

The number of read requests that were issued to the device per second. Note: Disk statistics are per _device_, not per _service_. In certain environments (such as common docker-compose setups), searcher could be one of _many services_ using this disk. These statistics are best interpreted as the load experienced by the device searcher is using, not the load searcher is solely responsible for causing. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100300` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*

#### Write request rate over 1m (per instance)

The number of write requests that were issued to the device per second. Note: Disk statistics are per _device_, not per _service_. In certain environments (such as common docker-compose setups), searcher could be one of _many services_ using this disk. These statistics are best interpreted as the load experienced by the device searcher is using, not the load searcher is solely responsible for causing. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100301` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*

#### Read throughput over 1m (per instance)

The amount of data that was read from the device per second. Note: Disk statistics are per _device_, not per _service_. In certain environments (such as common docker-compose setups), searcher could be one of _many services_ using this disk. These statistics are best interpreted as the load experienced by the device searcher is using, not the load searcher is solely responsible for causing. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100310` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*

#### Write throughput over 1m (per instance)

The amount of data that was written to the device per second. Note: Disk statistics are per _device_, not per _service_. In certain environments (such as common docker-compose setups), searcher could be one of _many services_ using this disk. These statistics are best interpreted as the load experienced by the device searcher is using, not the load searcher is solely responsible for causing. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100311` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*

#### Average read duration over 1m (per instance)

The average time for read requests issued to the device to be served. This includes the time spent by the requests in queue and the time spent servicing them. Note: Disk statistics are per _device_, not per _service_. In certain environments (such as common docker-compose setups), searcher could be one of _many services_ using this disk. These statistics are best interpreted as the load experienced by the device searcher is using, not the load searcher is solely responsible for causing. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100320` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*

#### Average write duration over 1m (per instance)

The average time for write requests issued to the device to be served. This includes the time spent by the requests in queue and the time spent servicing them. Note: Disk statistics are per _device_, not per _service_. In certain environments (such as common docker-compose setups), searcher could be one of _many services_ using this disk. These statistics are best interpreted as the load experienced by the device searcher is using, not the load searcher is solely responsible for causing. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100321` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*

#### Average read request size over 1m (per instance)

The average size of read requests that were issued to the device. Note: Disk statistics are per _device_, not per _service_. In certain environments (such as common docker-compose setups), searcher could be one of _many services_ using this disk. These statistics are best interpreted as the load experienced by the device searcher is using, not the load searcher is solely responsible for causing. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100330` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*

#### Average write request size over 1m (per instance)

The average size of write requests that were issued to the device. Note: Disk statistics are per _device_, not per _service_. In certain environments (such as common docker-compose setups), searcher could be one of _many services_ using this disk. These statistics are best interpreted as the load experienced by the device searcher is using, not the load searcher is solely responsible for causing. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100331` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*

#### Merged read request rate over 1m (per instance)

The number of read requests merged per second that were queued to the device. Note: Disk statistics are per _device_, not per _service_. In certain environments (such as common docker-compose setups), searcher could be one of _many services_ using this disk. These statistics are best interpreted as the load experienced by the device searcher is using, not the load searcher is solely responsible for causing. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100340` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*

#### Merged writes request rate over 1m (per instance)

The number of write requests merged per second that were queued to the device. Note: Disk statistics are per _device_, not per _service_. In certain environments (such as common docker-compose setups), searcher could be one of _many services_ using this disk. These statistics are best interpreted as the load experienced by the device searcher is using, not the load searcher is solely responsible for causing. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100341` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*

#### Average queue size over 1m (per instance)

The number of I/O operations that were being queued or being serviced. See https://blog.actorsfit.com/a?ID=00200-428fa2ac-e338-4540-848c-af9a3eb1ebd2 for background (avgqu-sz). Note: Disk statistics are per _device_, not per _service_. In certain environments (such as common docker-compose setups), searcher could be one of _many services_ using this disk. These statistics are best interpreted as the load experienced by the device searcher is using, not the load searcher is solely responsible for causing. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100350` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*
#### Request rate across all methods over 2m

The number of gRPC requests received per second across all methods, aggregated across all instances. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100400` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*

#### Request rate per-method over 2m

The number of gRPC requests received per second broken out per method, aggregated across all instances. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100401` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*

#### Error percentage across all methods over 2m

The percentage of gRPC requests that fail across all methods, aggregated across all instances. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100410` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*

#### Error percentage per-method over 2m

The percentage of gRPC requests that fail per method, aggregated across all instances. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100411` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*

#### 99th percentile response time per method over 2m

The 99th percentile response time per method, aggregated across all instances. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100420` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*

#### 90th percentile response time per method over 2m

The 90th percentile response time per method, aggregated across all instances. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100421` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*

#### 75th percentile response time per method over 2m

The 75th percentile response time per method, aggregated across all instances. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100422` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*

#### 99.9th percentile total response size per method over 2m

The 99.9th percentile total per-RPC response size per method, aggregated across all instances. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100430` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*

#### 90th percentile total response size per method over 2m

The 90th percentile total per-RPC response size per method, aggregated across all instances. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100431` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*

#### 75th percentile total response size per method over 2m

The 75th percentile total per-RPC response size per method, aggregated across all instances. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100432` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*

#### 99.9th percentile individual sent message size per method over 2m

The 99.9th percentile size of each individual protocol buffer message sent by the service per method, aggregated across all instances. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100440` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*

#### 90th percentile individual sent message size per method over 2m

The 90th percentile size of each individual protocol buffer message sent by the service per method, aggregated across all instances. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100441` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*

#### 75th percentile individual sent message size per method over 2m

The 75th percentile size of each individual protocol buffer message sent by the service per method, aggregated across all instances. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100442` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*

#### Average streaming response message count per-method over 2m

The average number of response messages sent during a streaming RPC method, broken out per method, aggregated across all instances. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100450` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*

#### Response codes rate per-method over 2m

The rate of all generated gRPC response codes per method, aggregated across all instances. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100460` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*

#### Client baseline error percentage across all methods over 2m

The percentage of gRPC requests that fail across all methods (regardless of whether or not there was an internal error), aggregated across all "searcher" clients. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100500` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*

#### Client baseline error percentage per-method over 2m

The percentage of gRPC requests that fail per method (regardless of whether or not there was an internal error), aggregated across all "searcher" clients. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100501` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*

#### Client baseline response codes rate per-method over 2m

The rate of all generated gRPC response codes per method (regardless of whether or not there was an internal error), aggregated across all "searcher" clients. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100502` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*
The percentage of gRPC requests that appear to fail due to gRPC internal errors across all methods, aggregated across all "searcher" clients. **Note**: Internal errors are ones that appear to originate from the https://github.com/grpc/grpc-go library itself, rather than from any user-written application code. These errors can be caused by a variety of issues, and can originate from either the code-generated "searcher" gRPC client or gRPC server. These errors might be solvable by adjusting the gRPC configuration, or they might indicate a bug from Sourcegraph`s use of gRPC. When debugging, knowing that a particular error comes from the grpc-go library itself (an `internal error`) as opposed to `normal` application code can be helpful when trying to fix it. **Note**: Internal errors are detected via a very coarse heuristic (seeing if the error starts with `grpc:`, etc.). Because of this, it`s possible that some gRPC-specific issues might not be categorized as internal errors. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100510` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*Client-observed gRPC internal error percentage per-method over 2m
The percentage of gRPC requests that appear to fail to due to gRPC internal errors per method, aggregated across all "searcher" clients. **Note**: Internal errors are ones that appear to originate from the https://github.com/grpc/grpc-go library itself, rather than from any user-written application code. These errors can be caused by a variety of issues, and can originate from either the code-generated "searcher" gRPC client or gRPC server. These errors might be solvable by adjusting the gRPC configuration, or they might indicate a bug from Sourcegraph`s use of gRPC. When debugging, knowing that a particular error comes from the grpc-go library itself (an `internal error`) as opposed to `normal` application code can be helpful when trying to fix it. **Note**: Internal errors are detected via a very coarse heuristic (seeing if the error starts with `grpc:`, etc.). Because of this, it`s possible that some gRPC-specific issues might not be categorized as internal errors. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100511` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*Client-observed gRPC internal error response code rate per-method over 2m
The rate of gRPC internal-error response codes per method, aggregated across all "searcher" clients. **Note**: Internal errors are ones that appear to originate from the https://github.com/grpc/grpc-go library itself, rather than from any user-written application code. These errors can be caused by a variety of issues, and can originate from either the code-generated "searcher" gRPC client or gRPC server. These errors might be solvable by adjusting the gRPC configuration, or they might indicate a bug from Sourcegraph`s use of gRPC. When debugging, knowing that a particular error comes from the grpc-go library itself (an `internal error`) as opposed to `normal` application code can be helpful when trying to fix it. **Note**: Internal errors are detected via a very coarse heuristic (seeing if the error starts with `grpc:`, etc.). Because of this, it`s possible that some gRPC-specific issues might not be categorized as internal errors. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100512` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*Client retry percentage across all methods over 2m
The percentage of gRPC requests that were retried across all methods, aggregated across all "searcher" clients. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100600` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*Client retry percentage per-method over 2m
The percentage of gRPC requests that were retried aggregated across all "searcher" clients, broken out per method. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100601` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*Client retry count per-method over 2m
The count of gRPC requests that were retried aggregated across all "searcher" clients, broken out per method This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100602` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*Duration since last successful site configuration update (by instance)
The duration since the configuration client used by the "searcher" service last successfully updated its site configuration. Long durations could indicate issues updating the site configuration. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100700` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Maximum duration since last successful site configuration update (all "searcher" instances)
#### Maximum duration since last successful site configuration update (all "searcher" instances)

Refer to the [alerts reference](alerts#searcher-searcher-site-configuration-duration-since-last-successful-update-by-instance) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100701` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*

#### Maximum open

This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100800` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*

#### Established

This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100801` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*

#### Used

This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100810` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*

#### Idle

This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100811` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*

#### Mean blocked seconds per conn request

Refer to the [alerts reference](alerts#searcher-mean-blocked-seconds-per-conn-request) for 2 alerts related to this panel. To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100820` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*

#### Closed by SetMaxIdleConns

This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100830` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*

#### Closed by SetConnMaxLifetime

This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100831` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*

#### Closed by SetConnMaxIdleTime

This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100832` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*
#### Container missing

This value is the number of times a container has not been seen for more than one minute. If you observe this value change independently of deployment events (such as an upgrade), it could indicate pods are being OOM killed or terminated for some other reason.

- **Kubernetes:**
  - Determine if the pod was OOM killed using `kubectl describe pod searcher` (look for `OOMKilled: true`) and, if so, consider increasing the memory limit in the relevant `Deployment.yaml`.
  - Check the logs before the container restarted to see if there are `panic:` messages or similar using `kubectl logs -p searcher`.
- **Docker Compose:**
  - Determine if the container was OOM killed using `docker inspect -f '{{json .State}}' searcher` (look for `"OOMKilled":true`) and, if so, consider increasing the memory limit of the searcher container in `docker-compose.yml`.
  - Check the logs before the container restarted to see if there are `panic:` messages or similar using `docker logs searcher` (note this will include logs from the previous and currently running container).

This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100900` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*
#### Container cpu usage total (1m average) across all cores by instance

Refer to the [alerts reference](alerts#searcher-container-cpu-usage) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100901` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*

#### Container memory usage by instance

Refer to the [alerts reference](alerts#searcher-container-memory-usage) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100902` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*

#### Filesystem reads and writes rate by instance over 1h

This value indicates the number of filesystem read and write operations by containers of this service. When extremely high, this can indicate a resource usage problem, or can cause problems with the service itself, especially if high values or spikes correlate with searcher issues. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100903` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*

#### Container cpu usage total (90th percentile over 1d) across all cores by instance

Refer to the [alerts reference](alerts#searcher-provisioning-container-cpu-usage-long-term) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=101000` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*

#### Container memory usage (1d maximum) by instance

Refer to the [alerts reference](alerts#searcher-provisioning-container-memory-usage-long-term) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=101001` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*

#### Container cpu usage total (5m maximum) across all cores by instance

Refer to the [alerts reference](alerts#searcher-provisioning-container-cpu-usage-short-term) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=101010` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*

#### Container memory usage (5m maximum) by instance

Refer to the [alerts reference](alerts#searcher-provisioning-container-memory-usage-short-term) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=101011` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*

#### Container OOMKILL events total by instance

This value indicates the total number of times the container's main process or child processes were terminated by the OOM killer. When it occurs frequently, it is an indicator of underprovisioning. Refer to the [alerts reference](alerts#searcher-container-oomkill-events-total) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=101012` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*

#### Maximum active goroutines

A high value here indicates a possible goroutine leak. Refer to the [alerts reference](alerts#searcher-go-goroutines) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=101100` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*

#### Maximum go garbage collection duration

Refer to the [alerts reference](alerts#searcher-go-gc-duration-seconds) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=101101` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*

#### Percentage pods available

Refer to the [alerts reference](alerts#searcher-pods-available-percentage) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=101200` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*

## Symbols

Handles symbol searches for unindexed branches.
To see this dashboard, visit `/-/debug/grafana/d/symbols/symbols` on your Sourcegraph instance.

### Symbols: Codeintel: Symbols API

#### symbols: codeintel_symbols_api_total

Aggregate API operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=100000` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*

#### Aggregate successful API operation duration distribution over 5m

This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=100001` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*

#### Aggregate API operation errors every 5m

This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=100002` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*

#### Aggregate API operation error rate over 5m

This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=100003` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*

#### API operations every 5m

This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=100010` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*

#### 99th percentile successful API operation duration over 5m

This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=100011` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*

#### API operation errors every 5m

This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=100012` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*

#### API operation error rate over 5m

This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=100013` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*

#### In-flight parse jobs

This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=100100` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*

#### Parser queue size

This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=100101` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*

#### Parse queue timeouts

This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=100102` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*

#### Parse failures every 5m

This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=100103` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*

#### Aggregate parser operations every 5m

This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=100110` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*

#### Aggregate successful parser operation duration distribution over 5m

This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=100111` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*

#### Aggregate parser operation errors every 5m

This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=100112` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*

#### Aggregate parser operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=100113` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Parser operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=100120` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*99th percentile successful parser operation duration over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=100121` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Parser operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=100122` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Parser operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=100123` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Size in bytes of the on-disk cache
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=100200` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Cache eviction operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=100201` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Cache eviction operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=100202` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*In-flight repository fetch operations
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=100300` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Repository fetch queue size
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=100301` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate fetcher operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=100310` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate successful fetcher operation duration distribution over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=100311` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate fetcher operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=100312` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate fetcher operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=100313` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Fetcher operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=100320` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*99th percentile successful fetcher operation duration over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=100321` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Fetcher operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=100322` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Fetcher operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=100323` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate gitserver client operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=100400` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate successful gitserver client operation duration distribution over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=100401` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate gitserver client operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=100402` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate gitserver client operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=100403` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Gitserver client operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=100410` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*99th percentile successful gitserver client operation duration over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=100411` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Gitserver client operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=100412` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Gitserver client operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=100413` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*95th percentile search request duration over 5m
The 95th percentile duration of search requests to Rockskip in seconds. Lower is better. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=100500` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*Number of in-flight search requests
The number of search requests currently being processed by Rockskip. If there is not much traffic and the requests are served very fast relative to the polling window of Prometheus, it is possible that this number is 0 even while search requests are being processed. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=100501` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*
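One way to make short-lived spikes visible despite the scrape interval is to take the maximum of the gauge over a wider window. This is a sketch only; `rockskip_in_flight_search_requests` is a hypothetical name standing in for whatever gauge backs this panel:

```promql
# Hypothetical metric name; substitute the gauge that backs this panel.
# max_over_time keeps the largest value observed in the last 5 minutes,
# so bursts that fall between scrapes are less likely to read as 0.
max_over_time(rockskip_in_flight_search_requests[5m])
```

Search request errors every 5m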
The number of search requests that returned an error in the last 5 minutes. The errors tracked here are all application errors; gRPC errors are not included. We generally want this to be 0. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=100502` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*95th percentile index job duration over 5m
The 95th percentile duration of index jobs in seconds. The range of values is very large, because the metric measures quick delta updates as well as full index jobs. Lower is better. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=100510` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*Number of in-flight index jobs
The number of index jobs currently being processed by Rockskip. This includes delta updates as well as full index jobs. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=100511` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*Index job errors every 5m
The number of index jobs that returned an error in the last 5 minutes. If the errors are persistent, users will see alerts in the UI. The service logs will contain more detailed information about the kind of errors. We generally want this to be 0. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=100512` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*Number of repositories indexed by Rockskip
The number of repositories indexed by Rockskip. Apart from an initial transient phase in which many repos are being indexed, this number should be low and relatively stable and only increase by small increments. To verify that this number makes sense, compare ROCKSKIP_MIN_REPO_SIZE_MB with the repository sizes reported by the gitserver_repos table. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=100520` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*Request rate across all methods over 2m
The number of gRPC requests received per second across all methods, aggregated across all instances. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=100600` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Request rate per-method over 2m
The number of gRPC requests received per second broken out per method, aggregated across all instances. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=100601` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Error percentage across all methods over 2m
The percentage of gRPC requests that fail across all methods, aggregated across all instances. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=100610` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*
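A hedged sketch of how such an error percentage is typically computed, assuming the common go-grpc-prometheus server metric `grpc_server_handled_total`; the service matcher is an assumption:

```promql
# Percentage of gRPC requests that fail across all methods. Any
# response code other than OK is counted as a failure; adjust the
# grpc_service matcher for your deployment.
100 *
  sum(rate(grpc_server_handled_total{grpc_service=~".*SymbolsService", grpc_code!="OK"}[2m]))
/
  sum(rate(grpc_server_handled_total{grpc_service=~".*SymbolsService"}[2m]))
```

Error percentage per-method over 2m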
The percentage of gRPC requests that fail per method, aggregated across all instances. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=100611` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*99th percentile response time per method over 2m
The 99th percentile response time per method, aggregated across all instances. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=100620` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*
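As a sketch, latency percentiles like this are usually derived from a histogram with `histogram_quantile`. This assumes the go-grpc-prometheus handling-time histogram (`grpc_server_handling_seconds`) is enabled, which is optional instrumentation:

```promql
# 99th percentile server-side handling time per gRPC method over 2m.
histogram_quantile(
  0.99,
  sum by (le, grpc_method) (rate(grpc_server_handling_seconds_bucket[2m]))
)
```

90th percentile response time per method over 2m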
The 90th percentile response time per method, aggregated across all instances. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=100621` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*75th percentile response time per method over 2m
The 75th percentile response time per method, aggregated across all instances. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=100622` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*99.9th percentile total response size per method over 2m
The 99.9th percentile total per-RPC response size per method, aggregated across all instances. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=100630` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*90th percentile total response size per method over 2m
The 90th percentile total per-RPC response size per method, aggregated across all instances. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=100631` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*75th percentile total response size per method over 2m
The 75th percentile total per-RPC response size per method, aggregated across all instances. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=100632` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*99.9th percentile individual sent message size per method over 2m
The 99.9th percentile size of each individual protocol buffer message sent by the service per method, aggregated across all instances. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=100640` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*90th percentile individual sent message size per method over 2m
The 90th percentile size of each individual protocol buffer message sent by the service per method, aggregated across all instances. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=100641` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*75th percentile individual sent message size per method over 2m
The 75th percentile size of each individual protocol buffer message sent by the service per method, aggregated across all instances. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=100642` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Average streaming response message count per-method over 2m
The average number of response messages sent during a streaming RPC method, broken out per method, aggregated across all instances. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=100650` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Response codes rate per-method over 2m
The rate of all generated gRPC response codes per method, aggregated across all instances. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=100660` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Client baseline error percentage across all methods over 2m
The percentage of gRPC requests that fail across all methods (regardless of whether or not there was an internal error), aggregated across all "symbols" clients. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=100700` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Client baseline error percentage per-method over 2m
The percentage of gRPC requests that fail per method (regardless of whether or not there was an internal error), aggregated across all "symbols" clients. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=100701` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Client baseline response codes rate per-method over 2m
The rate of all generated gRPC response codes per method (regardless of whether or not there was an internal error), aggregated across all "symbols" clients. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=100702` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Client-observed gRPC internal error percentage across all methods over 2m
The percentage of gRPC requests that appear to fail due to gRPC internal errors across all methods, aggregated across all "symbols" clients. **Note**: Internal errors are ones that appear to originate from the https://github.com/grpc/grpc-go library itself, rather than from any user-written application code. These errors can be caused by a variety of issues, and can originate from either the code-generated "symbols" gRPC client or gRPC server. These errors might be solvable by adjusting the gRPC configuration, or they might indicate a bug in Sourcegraph's use of gRPC. When debugging, knowing that a particular error comes from the grpc-go library itself (an `internal error`) as opposed to `normal` application code can be helpful when trying to fix it. **Note**: Internal errors are detected via a very coarse heuristic (seeing if the error starts with `grpc:`, etc.). Because of this, it's possible that some gRPC-specific issues might not be categorized as internal errors. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=100710` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Client-observed gRPC internal error percentage per-method over 2m
The percentage of gRPC requests that appear to fail due to gRPC internal errors per method, aggregated across all "symbols" clients. **Note**: Internal errors are ones that appear to originate from the https://github.com/grpc/grpc-go library itself, rather than from any user-written application code. These errors can be caused by a variety of issues, and can originate from either the code-generated "symbols" gRPC client or gRPC server. These errors might be solvable by adjusting the gRPC configuration, or they might indicate a bug in Sourcegraph's use of gRPC. When debugging, knowing that a particular error comes from the grpc-go library itself (an `internal error`) as opposed to `normal` application code can be helpful when trying to fix it. **Note**: Internal errors are detected via a very coarse heuristic (seeing if the error starts with `grpc:`, etc.). Because of this, it's possible that some gRPC-specific issues might not be categorized as internal errors. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=100711` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Client-observed gRPC internal error response code rate per-method over 2m
The rate of gRPC internal-error response codes per method, aggregated across all "symbols" clients. **Note**: Internal errors are ones that appear to originate from the https://github.com/grpc/grpc-go library itself, rather than from any user-written application code. These errors can be caused by a variety of issues, and can originate from either the code-generated "symbols" gRPC client or gRPC server. These errors might be solvable by adjusting the gRPC configuration, or they might indicate a bug in Sourcegraph's use of gRPC. When debugging, knowing that a particular error comes from the grpc-go library itself (an `internal error`) as opposed to `normal` application code can be helpful when trying to fix it. **Note**: Internal errors are detected via a very coarse heuristic (seeing if the error starts with `grpc:`, etc.). Because of this, it's possible that some gRPC-specific issues might not be categorized as internal errors. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=100712` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Client retry percentage across all methods over 2m
The percentage of gRPC requests that were retried across all methods, aggregated across all "symbols" clients. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=100800` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Client retry percentage per-method over 2m
The percentage of gRPC requests that were retried aggregated across all "symbols" clients, broken out per method. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=100801` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Client retry count per-method over 2m
The count of gRPC requests that were retried aggregated across all "symbols" clients, broken out per method. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=100802` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Duration since last successful site configuration update (by instance)
The duration since the configuration client used by the "symbols" service last successfully updated its site configuration. Long durations could indicate issues updating the site configuration. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=100900` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*
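The shape of this signal is simply "seconds since the last successful update". A sketch, where `src_conf_client_time_since_last_successful_update_seconds` is a hypothetical name for the gauge backing this panel:

```promql
# Hypothetical metric name; substitute the real gauge. Long or
# monotonically growing values indicate the client is failing to
# refresh its site configuration.
max by (instance) (src_conf_client_time_since_last_successful_update_seconds)
```

Maximum duration since last successful site configuration update (all "symbols" instances)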
Refer to the [alerts reference](alerts#symbols-symbols-site-configuration-duration-since-last-successful-update-by-instance) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=100901` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Maximum open
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=101000` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Established
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=101001` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Used
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=101010` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Idle
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=101011` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Mean blocked seconds per conn request
Refer to the [alerts reference](alerts#symbols-mean-blocked-seconds-per-conn-request) for 2 alerts related to this panel. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=101020` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Closed by SetMaxIdleConns
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=101030` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Closed by SetConnMaxLifetime
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=101031` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Closed by SetConnMaxIdleTime
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=101032` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Container missing
This value is the number of times a container has not been seen for more than one minute. If you observe this value change independent of deployment events (such as an upgrade), it could indicate pods are being OOM killed or terminated for some other reason.

- **Kubernetes:**
  - Determine if the pod was OOM killed using `kubectl describe pod symbols` (look for `OOMKilled: true`) and, if so, consider increasing the memory limit in the relevant `Deployment.yaml`.
  - Check the logs before the container restarted to see if there are `panic:` messages or similar using `kubectl logs -p symbols`.
- **Docker Compose:**
  - Determine if the container was OOM killed using `docker inspect -f '{{json .State}}' symbols` (look for `"OOMKilled":true`) and, if so, consider increasing the memory limit of the symbols container in `docker-compose.yml`.
  - Check the logs before the container restarted to see if there are `panic:` messages or similar using `docker logs symbols` (note this will include logs from the previous and currently running container).

This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=101100` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*
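A sketch of the kind of query that detects a missing container, using cAdvisor's standard `container_last_seen` gauge; the name matcher is an assumption:

```promql
# Containers not seen by cAdvisor for more than a minute. The result
# counts matching containers that are currently "missing".
count by (name) ((time() - container_last_seen{name=~".*symbols.*"}) > 60)
```

Container cpu usage total (1m average) across all cores by instance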
Refer to the [alerts reference](alerts#symbols-container-cpu-usage) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=101101` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Container memory usage by instance
Refer to the [alerts reference](alerts#symbols-container-memory-usage) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=101102` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Filesystem reads and writes rate by instance over 1h
This value indicates the number of filesystem read and write operations by containers of this service. When extremely high, this can indicate a resource usage problem, or can cause problems with the service itself, especially if high values or spikes correlate with {{CONTAINER_NAME}} issues. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=101103` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*
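For reference, a minimal sketch of a combined read/write operation rate, using cAdvisor's `container_fs_reads_total` and `container_fs_writes_total` counters; the label matchers are assumptions:

```promql
# Filesystem read + write operations per second, per container,
# averaged over the last hour.
sum by (name) (
    rate(container_fs_reads_total{name=~".*symbols.*"}[1h])
  + rate(container_fs_writes_total{name=~".*symbols.*"}[1h])
)
```

Container cpu usage total (90th percentile over 1d) across all cores by instance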
Refer to the [alerts reference](alerts#symbols-provisioning-container-cpu-usage-long-term) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=101200` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Container memory usage (1d maximum) by instance
Refer to the [alerts reference](alerts#symbols-provisioning-container-memory-usage-long-term) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=101201` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Container cpu usage total (5m maximum) across all cores by instance
Refer to the [alerts reference](alerts#symbols-provisioning-container-cpu-usage-short-term) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=101210` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Container memory usage (5m maximum) by instance
Refer to the [alerts reference](alerts#symbols-provisioning-container-memory-usage-short-term) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=101211` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Container OOMKILL events total by instance
This value indicates the total number of times the container main process or child processes were terminated by the OOM killer. When it occurs frequently, it is an indicator of underprovisioning. Refer to the [alerts reference](alerts#symbols-container-oomkill-events-total) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=101212` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Maximum active goroutines
A high value here indicates a possible goroutine leak. Refer to the [alerts reference](alerts#symbols-go-goroutines) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=101300` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Maximum go garbage collection duration
Refer to the [alerts reference](alerts#symbols-go-gc-duration-seconds) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=101301` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Percentage pods available
Refer to the [alerts reference](alerts#symbols-pods-available-percentage) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/symbols/symbols?viewPanel=101400` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Handles syntax highlighting for code files.
To see this dashboard, visit `/-/debug/grafana/d/syntect-server/syntect-server` on your Sourcegraph instance. #### syntect-server: syntax_highlighting_errors Syntax highlighting errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/syntect-server/syntect-server?viewPanel=100000` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Syntax highlighting timeouts every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/syntect-server/syntect-server?viewPanel=100001` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Syntax highlighting panics every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/syntect-server/syntect-server?viewPanel=100010` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Syntax highlighter worker deaths every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/syntect-server/syntect-server?viewPanel=100011` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Container missing
This value is the number of times a container has not been seen for more than one minute. If you observe this value change independent of deployment events (such as an upgrade), it could indicate pods are being OOM killed or terminated for some other reason.

- **Kubernetes:**
  - Determine if the pod was OOM killed using `kubectl describe pod syntect-server` (look for `OOMKilled: true`) and, if so, consider increasing the memory limit in the relevant `Deployment.yaml`.
  - Check the logs before the container restarted to see if there are `panic:` messages or similar using `kubectl logs -p syntect-server`.
- **Docker Compose:**
  - Determine if the container was OOM killed using `docker inspect -f '{{json .State}}' syntect-server` (look for `"OOMKilled":true`) and, if so, consider increasing the memory limit of the syntect-server container in `docker-compose.yml`.
  - Check the logs before the container restarted to see if there are `panic:` messages or similar using `docker logs syntect-server` (note this will include logs from the previous and currently running container).

This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/syntect-server/syntect-server?viewPanel=100100` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Container cpu usage total (1m average) across all cores by instance
Refer to the [alerts reference](alerts#syntect-server-container-cpu-usage) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/syntect-server/syntect-server?viewPanel=100101` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Container memory usage by instance
Refer to the [alerts reference](alerts#syntect-server-container-memory-usage) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/syntect-server/syntect-server?viewPanel=100102` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Filesystem reads and writes rate by instance over 1h
This value indicates the number of filesystem read and write operations by containers of this service. When extremely high, this can indicate a resource usage problem, or can cause problems with the service itself, especially if high values or spikes correlate with {{CONTAINER_NAME}} issues. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/syntect-server/syntect-server?viewPanel=100103` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Container cpu usage total (90th percentile over 1d) across all cores by instance
Refer to the [alerts reference](alerts#syntect-server-provisioning-container-cpu-usage-long-term) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/syntect-server/syntect-server?viewPanel=100200` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Container memory usage (1d maximum) by instance
Refer to the [alerts reference](alerts#syntect-server-provisioning-container-memory-usage-long-term) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/syntect-server/syntect-server?viewPanel=100201` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Container cpu usage total (5m maximum) across all cores by instance
Refer to the [alerts reference](alerts#syntect-server-provisioning-container-cpu-usage-short-term) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/syntect-server/syntect-server?viewPanel=100210` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Container memory usage (5m maximum) by instance
Refer to the [alerts reference](alerts#syntect-server-provisioning-container-memory-usage-short-term) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/syntect-server/syntect-server?viewPanel=100211` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Container OOMKILL events total by instance
This value indicates the total number of times the container main process or child processes were terminated by the OOM killer. When it occurs frequently, it is an indicator of underprovisioning. Refer to the [alerts reference](alerts#syntect-server-container-oomkill-events-total) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/syntect-server/syntect-server?viewPanel=100212` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Percentage pods available
Refer to the [alerts reference](alerts#syntect-server-pods-available-percentage) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/syntect-server/syntect-server?viewPanel=100300` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Indexes repositories, populates the search index, and responds to indexed search queries.
To see this dashboard, visit `/-/debug/grafana/d/zoekt/zoekt` on your Sourcegraph instance. #### zoekt: total_repos_aggregate Total number of repos (aggregate)
Sudden changes can be caused by indexing configuration changes. Additionally, a discrepancy between "index_num_assigned" and "index_queue_cap" could indicate a bug.

Legend:
- index_num_assigned: # of repos assigned to Zoekt
- index_num_indexed: # of repos Zoekt has indexed
- index_queue_cap: # of repos Zoekt is aware of, including those that it has finished indexing

This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100000` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*
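The metric names in the legend make a quick backlog check possible. A sketch, assuming the legend entries are exported under those names and carry an `instance` label:

```promql
# Repos assigned to Zoekt but not yet indexed, per instance. A value
# that stays large (outside initial indexing) suggests a problem.
sum by (instance) (index_num_assigned)
- sum by (instance) (index_num_indexed)
```

Total number of repos (per instance)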
Sudden changes can be caused by indexing configuration changes. Additionally, a discrepancy between "index_num_assigned" and "index_queue_cap" could indicate a bug.

Legend:
- index_num_assigned: # of repos assigned to Zoekt
- index_num_indexed: # of repos Zoekt has indexed
- index_queue_cap: # of repos Zoekt is aware of, including those that it has finished processing

This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100001` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*The number of repositories we stopped tracking over 5m (aggregate)
Repositories we stop tracking are soft-deleted during the next cleanup job. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100010` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*The number of repositories we stopped tracking over 5m (per instance)
Repositories we stop tracking are soft-deleted during the next cleanup job. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100011` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*Average resolve revision duration over 5m
Refer to the [alerts reference](alerts#zoekt-average-resolve-revision-duration) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100020` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*The number of repositories we failed to get indexing options over 5m
When considering indexing a repository, we ask the frontend for the index configuration of each repository. The most likely reason this would fail is failing to resolve branch names to git SHAs. This value can spike during deployments and similar events; only sustained periods of errors indicate an underlying issue. When sustained, this indicates repositories will not get updated indexes. Refer to the [alerts reference](alerts#zoekt-get-index-options-error-increase) for 2 alerts related to this panel. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100021` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*
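Since only sustained errors indicate an underlying issue, it helps to look at the increase over a window rather than instantaneous values. A sketch, where `get_index_options_error_total` is an assumed name for the counter behind this panel:

```promql
# Assumed counter name; substitute the real metric. A brief spike
# around a deploy is expected; a persistently non-zero increase means
# some repositories are not getting updated indexes.
sum(increase(get_index_options_error_total[5m]))
```

99th percentile indexed search duration over 1m (aggregate)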
This dashboard shows the 99th percentile of search request durations over the last minute (aggregated across all instances). Large duration spikes can be an indicator of saturation and/or a performance regression. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100100` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*90th percentile indexed search duration over 1m (aggregate)
This dashboard shows the 90th percentile of search request durations over the last minute (aggregated across all instances). Large duration spikes can be an indicator of saturation and/or a performance regression. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100101` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*75th percentile indexed search duration over 1m (aggregate)
This dashboard shows the 75th percentile of search request durations over the last minute (aggregated across all instances). Large duration spikes can be an indicator of saturation and/or a performance regression. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100102` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*99th percentile indexed search duration over 1m (per instance)
This dashboard shows the 99th percentile of search request durations over the last minute (broken out per instance). Large duration spikes can be an indicator of saturation and/or a performance regression. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100110` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*90th percentile indexed search duration over 1m (per instance)
This dashboard shows the 90th percentile of search request durations over the last minute (broken out per instance). Large duration spikes can be an indicator of saturation and/or a performance regression. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100111` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*75th percentile indexed search duration over 1m (per instance)
This dashboard shows the 75th percentile of search request durations over the last minute (broken out per instance). Large duration spikes can be an indicator of saturation and/or a performance regression. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100112` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*Amount of in-flight indexed search requests (aggregate)
This dashboard shows the current number of indexed search requests that are in-flight, aggregated across all instances. In-flight search requests include both running and queued requests. The number of in-flight requests can serve as a proxy for the general load that webserver instances are under. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100120` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*Amount of in-flight indexed search requests (per instance)
This dashboard shows the current number of indexed search requests that are in-flight, broken out per instance. In-flight search requests include both running and queued requests. The number of in-flight requests can serve as a proxy for the general load that webserver instances are under. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100121` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*Rate of growth of in-flight indexed search requests over 1m (aggregate)
This dashboard shows the rate of growth of in-flight requests, aggregated across all instances. In-flight search requests include both running and queued requests. This metric gives a notion of how quickly the indexed-search backend is working through its request load (taking into account the request arrival rate and processing time). A sustained high rate of growth can indicate that the indexed-search backend is saturated. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100130` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*
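Since the in-flight count is a gauge, its rate of growth is taken with `deriv()` rather than `rate()`. A sketch, where `zoekt_search_running` is a hypothetical stand-in for the gauge backing this panel:

```promql
# Per-second growth of in-flight indexed search requests over the
# last minute, aggregated across instances. Sustained positive values
# mean requests arrive faster than they complete.
sum(deriv(zoekt_search_running[1m]))
```

Rate of growth of in-flight indexed search requests over 1m (per instance)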
This dashboard shows the rate of growth of in-flight requests, broken out per instance. In-flight search requests include both running and queued requests. This metric gives a notion of how quickly the indexed-search backend is working through its request load (taking into account the request arrival rate and processing time). A sustained high rate of growth can indicate that the indexed-search backend is saturated. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100131` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*Indexed search request errors every 5m by code
Refer to the [alerts reference](alerts#zoekt-indexed-search-request-errors) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100140` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*Current number of zoekt scheduler processes in a state
Each ongoing search request starts its life as an interactive query. If it takes too long it becomes a batch query. Between state transitions it can be queued. A high number of batch queries is a sign of a large load of slow queries; alternatively, your systems may be underprovisioned and normal search queries are taking too long. For a full explanation of the states see https://github.com/sourcegraph/zoekt/blob/930cd1c28917e64c87f0ce354a0fd040877cbba1/shards/sched.go#L311-L340 This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100150` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*Rate of zoekt scheduler process state transitions in the last 5m
Each ongoing search request starts its life as an interactive query. If it takes too long it becomes a batch query. Between state transitions it can be queued. A high number of batch queries is a sign of a large load of slow queries; alternatively, your systems may be underprovisioned and normal search queries are taking too long. For a full explanation of the states see https://github.com/sourcegraph/zoekt/blob/930cd1c28917e64c87f0ce354a0fd040877cbba1/shards/sched.go#L311-L340 This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100151` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*90th percentile successful git fetch durations over 5m
Long git fetch times can be a leading indicator of saturation. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100200` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*90th percentile failed git fetch durations over 5m
Long git fetch times can be a leading indicator of saturation. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100201` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*Index results state count over 5m (aggregate)
This dashboard shows the outcomes of recently completed indexing jobs across all index-server instances. A persistent failing state indicates some repositories cannot be indexed, perhaps due to size and timeouts. Legend:

- fail -> the indexing job failed
- success -> the indexing job succeeded and the index was updated
- success_meta -> the indexing job succeeded, but only metadata was updated
- noop -> the indexing job succeeded, but we didn't need to update anything
- empty -> the indexing job succeeded, but the index was empty (i.e. the repository is empty)

This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100300` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*Index results state count over 5m (per instance)
This dashboard shows the outcomes of recently completed indexing jobs, split out across each index-server instance. (You can use the "instance" filter at the top of the page to select a particular instance.) A persistent failing state indicates some repositories cannot be indexed, perhaps due to size and timeouts. Legend:

- fail -> the indexing job failed
- success -> the indexing job succeeded and the index was updated
- success_meta -> the indexing job succeeded, but only metadata was updated
- noop -> the indexing job succeeded, but we didn't need to update anything
- empty -> the indexing job succeeded, but the index was empty (i.e. the repository is empty)

This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100301` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*Successful indexing durations
Latency increases can indicate bottlenecks in the indexserver. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100310` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*Failed indexing durations
Failures happening after a long time indicate timeouts. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100311` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*99th percentile successful indexing durations over 5m (aggregate)
This dashboard shows the p99 duration of successful indexing jobs aggregated across all Zoekt instances. Latency increases can indicate bottlenecks in the indexserver. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100320` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*90th percentile successful indexing durations over 5m (aggregate)
This dashboard shows the p90 duration of successful indexing jobs aggregated across all Zoekt instances. Latency increases can indicate bottlenecks in the indexserver. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100321` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*75th percentile successful indexing durations over 5m (aggregate)
This dashboard shows the p75 duration of successful indexing jobs aggregated across all Zoekt instances. Latency increases can indicate bottlenecks in the indexserver. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100322` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*99th percentile successful indexing durations over 5m (per instance)
This dashboard shows the p99 duration of successful indexing jobs broken out per Zoekt instance. Latency increases can indicate bottlenecks in the indexserver. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100330` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*90th percentile successful indexing durations over 5m (per instance)
This dashboard shows the p90 duration of successful indexing jobs broken out per Zoekt instance. Latency increases can indicate bottlenecks in the indexserver. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100331` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*75th percentile successful indexing durations over 5m (per instance)
This dashboard shows the p75 duration of successful indexing jobs broken out per Zoekt instance. Latency increases can indicate bottlenecks in the indexserver. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100332` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*99th percentile failed indexing durations over 5m (aggregate)
This dashboard shows the p99 duration of failed indexing jobs aggregated across all Zoekt instances. Failures happening after a long time indicate timeouts. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100340` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*90th percentile failed indexing durations over 5m (aggregate)
This dashboard shows the p90 duration of failed indexing jobs aggregated across all Zoekt instances. Failures happening after a long time indicate timeouts. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100341` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*75th percentile failed indexing durations over 5m (aggregate)
This dashboard shows the p75 duration of failed indexing jobs aggregated across all Zoekt instances. Failures happening after a long time indicate timeouts. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100342` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*99th percentile failed indexing durations over 5m (per instance)
This dashboard shows the p99 duration of failed indexing jobs broken out per Zoekt instance. Failures happening after a long time indicate timeouts. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100350` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*90th percentile failed indexing durations over 5m (per instance)
This dashboard shows the p90 duration of failed indexing jobs broken out per Zoekt instance. Failures happening after a long time indicate timeouts. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100351` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*75th percentile failed indexing durations over 5m (per instance)
This dashboard shows the p75 duration of failed indexing jobs broken out per Zoekt instance. Failures happening after a long time indicate timeouts. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100352` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*# scheduled index jobs (aggregate)
A queue that is constantly growing could be a leading indicator of a bottleneck or under-provisioning. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100400` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*# scheduled index jobs (per instance)
A queue that is constantly growing could be a leading indicator of a bottleneck or under-provisioning. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100401` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*Job queuing delay heatmap
The queueing delay represents the amount of time an indexing job spent in the queue before it was processed. Large queueing delays can be an indicator of resource saturation: each Zoekt replica has more jobs than it can process promptly. In this scenario, consider adding additional Zoekt replicas to distribute the work better. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100410` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*99.9th percentile job queuing delay over 5m (aggregate)
This dashboard shows the p99.9 job queueing delay aggregated across all Zoekt instances. The queueing delay represents the amount of time an indexing job spent in the queue before it was processed. Large queueing delays can be an indicator of resource saturation: each Zoekt replica has more jobs than it can process promptly. In this scenario, consider adding additional Zoekt replicas to distribute the work better. The 99.9th percentile dashboard is useful for capturing the long tail of queueing delays (on the order of 24+ hours, etc.). This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100420` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*
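These percentile panels are standard Prometheus histogram quantiles over the queueing-delay metric. As a rough sketch (not necessarily the panel's exact expression), you can compute such a quantile yourself through the Prometheus HTTP API; the metric name `index_queue_age_seconds_bucket` below is purely illustrative, so substitute whatever histogram backs this panel on your instance.

```bash
# Sketch: compute a p99.9 queueing delay with histogram_quantile via the
# Prometheus HTTP API. The metric name is hypothetical; substitute the real
# histogram that backs this panel on your instance.
PROM=http://localhost:9090
curl -sG "$PROM/api/v1/query" \
  --data-urlencode 'query=histogram_quantile(0.999, sum by (le) (rate(index_queue_age_seconds_bucket[5m])))'
```

90th percentile job queueing delay over 5m (aggregate)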
This dashboard shows the p90 job queueing delay aggregated across all Zoekt instances. The queueing delay represents the amount of time an indexing job spent in the queue before it was processed. Large queueing delays can be an indicator of resource saturation: each Zoekt replica has more jobs than it can process promptly. In this scenario, consider adding additional Zoekt replicas to distribute the work better. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100421` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*75th percentile job queueing delay over 5m (aggregate)
This dashboard shows the p75 job queueing delay aggregated across all Zoekt instances. The queueing delay represents the amount of time an indexing job spent in the queue before it was processed. Large queueing delays can be an indicator of resource saturation: each Zoekt replica has more jobs than it can process promptly. In this scenario, consider adding additional Zoekt replicas to distribute the work better. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100422` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*99.9th percentile job queuing delay over 5m (per instance)
This dashboard shows the p99.9 job queueing delay, broken out per Zoekt instance. The queueing delay represents the amount of time an indexing job spent in the queue before it was processed. Large queueing delays can be an indicator of resource saturation: each Zoekt replica has more jobs than it can process promptly. In this scenario, consider adding additional Zoekt replicas to distribute the work better. The 99.9th percentile dashboard is useful for capturing the long tail of queueing delays (on the order of 24+ hours, etc.). This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100430` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*90th percentile job queueing delay over 5m (per instance)
This dashboard shows the p90 job queueing delay, broken out per Zoekt instance. The queueing delay represents the amount of time an indexing job spent in the queue before it was processed. Large queueing delays can be an indicator of resource saturation: each Zoekt replica has more jobs than it can process promptly. In this scenario, consider adding additional Zoekt replicas to distribute the work better. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100431` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*75th percentile job queueing delay over 5m (per instance)
This dashboard shows the p75 job queueing delay, broken out per Zoekt instance. The queueing delay represents the amount of time an indexing job spent in the queue before it was processed. Large queueing delays can be an indicator of resource saturation: each Zoekt replica has more jobs than it can process promptly. In this scenario, consider adding additional Zoekt replicas to distribute the work better. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100432` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*Process memory map areas percentage used (per instance)
Processes have a limited number of memory map areas that they can use. In Zoekt, memory map areas are mainly used for loading shards into memory for queries (via mmap). However, memory map areas are also used for loading shared libraries, etc. _See https://en.wikipedia.org/wiki/Memory-mapped_file and the related articles for more information about memory maps._ Once the memory map limit is reached, the Linux kernel will prevent the process from creating any additional memory map areas. This could cause the process to crash. Refer to the [alerts reference](alerts#zoekt-memory-map-areas-percentage-used) for 2 alerts related to this panel. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100500` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*
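If this panel is climbing, a quick manual check on the host can confirm how close a Zoekt process is to the kernel limit. This is a rough sketch of the same ratio the panel tracks; replace `<pid>` with the actual Zoekt process ID.

```bash
# The kernel's per-process cap on memory map areas:
cat /proc/sys/vm/max_map_count
# The number of map areas a given process currently uses (replace <pid>):
wc -l < /proc/<pid>/maps
# If usage approaches the cap, the limit can be raised, for example:
# sysctl -w vm.max_map_count=524288
```

# of compound shards (aggregate)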
The total number of compound shards aggregated over all instances. This number should be consistent if the number of indexed repositories doesn't change. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100600` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*# of compound shards (per instance)
The total number of compound shards per instance. This number should be consistent if the number of indexed repositories doesn't change. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100601` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*Average successful shard merging duration over 1 hour
Average duration of a successful merge over the last hour. The duration depends on the target compound shard size. The larger the compound shard, the longer a merge will take. Since the target compound shard size is set when zoekt-indexserver starts, the average duration should be consistent. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100610` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*Average failed shard merging duration over 1 hour
Average duration of a failed merge over the last hour. This curve should be flat. Any deviation should be investigated. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100611` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*Number of errors during shard merging (aggregate)
Number of errors during shard merging aggregated over all instances. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100620` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*Number of errors during shard merging (per instance)
Number of errors during shard merging per instance. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100621` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*If shard merging is running (per instance)
Set to 1 if shard merging is running. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100630` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*If vacuum is running (per instance)
Set to 1 if vacuum is running. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100631` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*Transmission rate over 5m (aggregate)
The rate of bytes sent over the network across all Zoekt pods. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100700` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*Transmission rate over 5m (per instance)
The rate of bytes sent over the network by individual Zoekt pods. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100701` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*Receive rate over 5m (aggregate)
The rate of bytes received from the network across all Zoekt pods. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100710` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*Receive rate over 5m (per instance)
The rate of bytes received from the network by individual Zoekt pods. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100711` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*
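For reference, roughly equivalent raw queries can be run against the cAdvisor metrics that typically back panels like these (`container_network_transmit_bytes_total` / `container_network_receive_bytes_total`). The `pod` label and the `indexed-search.*` name pattern below are assumptions that vary by deployment method.

```bash
# Sketch: aggregate transmit / receive byte rates over 5m. Label names and the
# "indexed-search.*" pod-name pattern are assumptions; adjust for your deployment.
PROM=http://localhost:9090
curl -sG "$PROM/api/v1/query" \
  --data-urlencode 'query=sum(rate(container_network_transmit_bytes_total{pod=~"indexed-search.*"}[5m]))'
curl -sG "$PROM/api/v1/query" \
  --data-urlencode 'query=sum(rate(container_network_receive_bytes_total{pod=~"indexed-search.*"}[5m]))'
```

Transmit packet drop rate over 5m (by instance)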
An increase in dropped packets could be a leading indicator of network saturation. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100720` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*Errors encountered while transmitting over 5m (per instance)
An increase in transmission errors could indicate a networking issue. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100721` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*Receive packet drop rate over 5m (by instance)
An increase in dropped packets could be a leading indicator of network saturation. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100722` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*Errors encountered while receiving over 5m (per instance)
An increase in errors while receiving could indicate a networking issue. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100723` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*Request rate across all methods over 2m
The number of gRPC requests received per second across all methods, aggregated across all instances. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100800` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*Request rate per-method over 2m
The number of gRPC requests received per second broken out per method, aggregated across all instances. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100801` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*Error percentage across all methods over 2m
The percentage of gRPC requests that fail across all methods, aggregated across all instances. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100810` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*Error percentage per-method over 2m
The percentage of gRPC requests that fail per method, aggregated across all instances. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100811` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*
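As a sketch of how such a percentage is derived (not necessarily the panel's exact expression), and assuming Zoekt exposes the conventional go-grpc-prometheus server counter `grpc_server_handled_total`, a per-method failure percentage divides the rate of non-OK responses by the rate of all responses:

```bash
# Sketch: per-method gRPC failure percentage over 2m, assuming the standard
# go-grpc-prometheus server counter grpc_server_handled_total is exposed.
PROM=http://localhost:9090
curl -sG "$PROM/api/v1/query" \
  --data-urlencode 'query=100 * sum by (grpc_method) (rate(grpc_server_handled_total{grpc_code!="OK"}[2m])) / sum by (grpc_method) (rate(grpc_server_handled_total[2m]))'
```

99th percentile response time per method over 2m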
The 99th percentile response time per method, aggregated across all instances. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100820` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*90th percentile response time per method over 2m
The 90th percentile response time per method, aggregated across all instances. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100821` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*75th percentile response time per method over 2m
The 75th percentile response time per method, aggregated across all instances. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100822` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*99.9th percentile total response size per method over 2m
The 99.9th percentile total per-RPC response size per method, aggregated across all instances. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100830` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*90th percentile total response size per method over 2m
The 90th percentile total per-RPC response size per method, aggregated across all instances. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100831` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*75th percentile total response size per method over 2m
The 75th percentile total per-RPC response size per method, aggregated across all instances. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100832` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*99.9th percentile individual sent message size per method over 2m
The 99.9th percentile size of each individual protocol buffer message sent by the service per method, aggregated across all instances. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100840` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*90th percentile individual sent message size per method over 2m
The 90th percentile size of each individual protocol buffer message sent by the service per method, aggregated across all instances. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100841` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*75th percentile individual sent message size per method over 2m
The 75th percentile size of each individual protocol buffer message sent by the service per method, aggregated across all instances. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100842` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*Average streaming response message count per-method over 2m
The average number of response messages sent during a streaming RPC method, broken out per method, aggregated across all instances. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100850` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*Response codes rate per-method over 2m
The rate of all generated gRPC response codes per method, aggregated across all instances. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100860` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*Client baseline error percentage across all methods over 2m
The percentage of gRPC requests that fail across all methods (regardless of whether or not there was an internal error), aggregated across all "zoekt_webserver" clients. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100900` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*Client baseline error percentage per-method over 2m
The percentage of gRPC requests that fail per method (regardless of whether or not there was an internal error), aggregated across all "zoekt_webserver" clients. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100901` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*Client baseline response codes rate per-method over 2m
The rate of all generated gRPC response codes per method (regardless of whether or not there was an internal error), aggregated across all "zoekt_webserver" clients. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100902` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*Client-observed gRPC internal error percentage across all methods over 2m
The percentage of gRPC requests that appear to fail due to gRPC internal errors across all methods, aggregated across all "zoekt_webserver" clients. **Note**: Internal errors are ones that appear to originate from the https://github.com/grpc/grpc-go library itself, rather than from any user-written application code. These errors can be caused by a variety of issues, and can originate from either the code-generated "zoekt_webserver" gRPC client or gRPC server. These errors might be solvable by adjusting the gRPC configuration, or they might indicate a bug from Sourcegraph's use of gRPC. When debugging, knowing that a particular error comes from the grpc-go library itself (an `internal error`) as opposed to `normal` application code can be helpful when trying to fix it. **Note**: Internal errors are detected via a very coarse heuristic (seeing if the error starts with `grpc:`, etc.). Because of this, it's possible that some gRPC-specific issues might not be categorized as internal errors. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100910` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*Client-observed gRPC internal error percentage per-method over 2m
The percentage of gRPC requests that appear to fail due to gRPC internal errors per method, aggregated across all "zoekt_webserver" clients. **Note**: Internal errors are ones that appear to originate from the https://github.com/grpc/grpc-go library itself, rather than from any user-written application code. These errors can be caused by a variety of issues, and can originate from either the code-generated "zoekt_webserver" gRPC client or gRPC server. These errors might be solvable by adjusting the gRPC configuration, or they might indicate a bug from Sourcegraph's use of gRPC. When debugging, knowing that a particular error comes from the grpc-go library itself (an `internal error`) as opposed to `normal` application code can be helpful when trying to fix it. **Note**: Internal errors are detected via a very coarse heuristic (seeing if the error starts with `grpc:`, etc.). Because of this, it's possible that some gRPC-specific issues might not be categorized as internal errors. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100911` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*Client-observed gRPC internal error response code rate per-method over 2m
The rate of gRPC internal-error response codes per method, aggregated across all "zoekt_webserver" clients. **Note**: Internal errors are ones that appear to originate from the https://github.com/grpc/grpc-go library itself, rather than from any user-written application code. These errors can be caused by a variety of issues, and can originate from either the code-generated "zoekt_webserver" gRPC client or gRPC server. These errors might be solvable by adjusting the gRPC configuration, or they might indicate a bug from Sourcegraph's use of gRPC. When debugging, knowing that a particular error comes from the grpc-go library itself (an `internal error`) as opposed to `normal` application code can be helpful when trying to fix it. **Note**: Internal errors are detected via a very coarse heuristic (seeing if the error starts with `grpc:`, etc.). Because of this, it's possible that some gRPC-specific issues might not be categorized as internal errors. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100912` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*Client retry percentage across all methods over 2m
The percentage of gRPC requests that were retried across all methods, aggregated across all "zoekt_webserver" clients. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=101000` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*Client retry percentage per-method over 2m
The percentage of gRPC requests that were retried aggregated across all "zoekt_webserver" clients, broken out per method. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=101001` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*Client retry count per-method over 2m
The count of gRPC requests that were retried aggregated across all "zoekt_webserver" clients, broken out per method. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=101002` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*Read request rate over 1m (per instance)
The number of read requests that were issued to the device per second. Note: Disk statistics are per _device_, not per _service_. In certain environments (such as common docker-compose setups), zoekt could be one of _many services_ using this disk. These statistics are best interpreted as the load experienced by the device zoekt is using, not the load zoekt is solely responsible for causing. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=101100` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*Write request rate over 1m (per instance)
The number of write requests that were issued to the device per second. Note: Disk statistics are per _device_, not per _service_. In certain environments (such as common docker-compose setups), zoekt could be one of _many services_ using this disk. These statistics are best interpreted as the load experienced by the device zoekt is using, not the load zoekt is solely responsible for causing. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=101101` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*Read throughput over 1m (per instance)
The amount of data that was read from the device per second. Note: Disk statistics are per _device_, not per _service_. In certain environments (such as common docker-compose setups), zoekt could be one of _many services_ using this disk. These statistics are best interpreted as the load experienced by the device zoekt is using, not the load zoekt is solely responsible for causing. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=101110` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*Write throughput over 1m (per instance)
The amount of data that was written to the device per second. Note: Disk statistics are per _device_, not per _service_. In certain environments (such as common docker-compose setups), zoekt could be one of _many services_ using this disk. These statistics are best interpreted as the load experienced by the device zoekt is using, not the load zoekt is solely responsible for causing. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=101111` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*Average read duration over 1m (per instance)
The average time for read requests issued to the device to be served. This includes the time spent by the requests in queue and the time spent servicing them. Note: Disk statistics are per _device_, not per _service_. In certain environments (such as common docker-compose setups), zoekt could be one of _many services_ using this disk. These statistics are best interpreted as the load experienced by the device zoekt is using, not the load zoekt is solely responsible for causing. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=101120` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*Average write duration over 1m (per instance)
The average time for write requests issued to the device to be served. This includes the time spent by the requests in queue and the time spent servicing them. Note: Disk statistics are per _device_, not per _service_. In certain environments (such as common docker-compose setups), zoekt could be one of _many services_ using this disk. These statistics are best interpreted as the load experienced by the device zoekt is using, not the load zoekt is solely responsible for causing. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=101121` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*
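These average durations are a ratio of two counters: time spent serving requests divided by requests completed. A sketch using the standard node_exporter disk metrics follows; the `device="sda"` selector is illustrative, so pick the device backing Zoekt's disk.

```bash
# Sketch: average read and write latencies over 1m from node_exporter counters.
# The device="sda" selector is illustrative; adjust it for your host.
PROM=http://localhost:9090
curl -sG "$PROM/api/v1/query" \
  --data-urlencode 'query=rate(node_disk_read_time_seconds_total{device="sda"}[1m]) / rate(node_disk_reads_completed_total{device="sda"}[1m])'
curl -sG "$PROM/api/v1/query" \
  --data-urlencode 'query=rate(node_disk_write_time_seconds_total{device="sda"}[1m]) / rate(node_disk_writes_completed_total{device="sda"}[1m])'
```

Average read request size over 1m (per instance)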
The average size of read requests that were issued to the device. Note: Disk statistics are per _device_, not per _service_. In certain environments (such as common docker-compose setups), zoekt could be one of _many services_ using this disk. These statistics are best interpreted as the load experienced by the device zoekt is using, not the load zoekt is solely responsible for causing. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=101130` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*Average write request size over 1m (per instance)
The average size of write requests that were issued to the device. Note: Disk statistics are per _device_, not per _service_. In certain environments (such as common docker-compose setups), zoekt could be one of _many services_ using this disk. These statistics are best interpreted as the load experienced by the device zoekt is using, not the load zoekt is solely responsible for causing. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=101131` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*Merged read request rate over 1m (per instance)
The number of read requests merged per second that were queued to the device. Note: Disk statistics are per _device_, not per _service_. In certain environments (such as common docker-compose setups), zoekt could be one of _many services_ using this disk. These statistics are best interpreted as the load experienced by the device zoekt is using, not the load zoekt is solely responsible for causing. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=101140` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*Merged writes request rate over 1m (per instance)
The number of write requests merged per second that were queued to the device. Note: Disk statistics are per _device_, not per _service_. In certain environments (such as common docker-compose setups), zoekt could be one of _many services_ using this disk. These statistics are best interpreted as the load experienced by the device zoekt is using, not the load zoekt is solely responsible for causing. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=101141` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*Average queue size over 1m (per instance)
The number of I/O operations that were being queued or being serviced. See https://blog.actorsfit.com/a?ID=00200-428fa2ac-e338-4540-848c-af9a3eb1ebd2 for background (avgqu-sz). Note: Disk statistics are per _device_, not per _service_. In certain environments (such as common docker-compose setups), zoekt could be one of _many services_ using this disk. These statistics are best interpreted as the load experienced by the device zoekt is using, not the load zoekt is solely responsible for causing. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=101150` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*Container missing
This value is the number of times a container has not been seen for more than one minute. If you observe this value change independent of deployment events (such as an upgrade), it could indicate pods are being OOM killed or terminated for some other reason.

- **Kubernetes:**
  - Determine if the pod was OOM killed using `kubectl describe pod zoekt-indexserver` (look for `OOMKilled: true`) and, if so, consider increasing the memory limit in the relevant `Deployment.yaml`.
  - Check the logs before the container restarted to see if there are `panic:` messages or similar using `kubectl logs -p zoekt-indexserver`.
- **Docker Compose:**
  - Determine if the pod was OOM killed using `docker inspect -f '{{json .State}}' zoekt-indexserver` (look for `"OOMKilled":true`) and, if so, consider increasing the memory limit of the zoekt-indexserver container in `docker-compose.yml`.
  - Check the logs before the container restarted to see if there are `panic:` messages or similar using `docker logs zoekt-indexserver` (note this will include logs from the previous and currently running container).

This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=101200` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*
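For Kubernetes deployments, the triage steps above condense to a short sequence (a sketch; pod names on your cluster may carry replica suffixes):

```bash
# Was the container OOM-killed? Look for "OOMKilled" or exit code 137.
kubectl describe pod zoekt-indexserver | grep -A5 'Last State'
# Inspect logs from the previous (crashed) container instance for panics:
kubectl logs -p zoekt-indexserver | grep -B5 -A20 'panic:'
```

Container cpu usage total (1m average) across all cores by instance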
Refer to the [alerts reference](alerts#zoekt-container-cpu-usage) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=101201` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*Container memory usage by instance
Refer to the [alerts reference](alerts#zoekt-container-memory-usage) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=101202` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*Filesystem reads and writes rate by instance over 1h
This value indicates the number of filesystem read and write operations by containers of this service. When extremely high, this can indicate a resource usage problem, or can cause problems with the service itself, especially if high values or spikes correlate with zoekt-indexserver issues. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=101203` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*Container missing
This value is the number of times a container has not been seen for more than one minute. If you observe this value change independent of deployment events (such as an upgrade), it could indicate pods are being OOM killed or terminated for some other reason.

- **Kubernetes:**
  - Determine if the pod was OOM killed using `kubectl describe pod zoekt-webserver` (look for `OOMKilled: true`) and, if so, consider increasing the memory limit in the relevant `Deployment.yaml`.
  - Check the logs before the container restarted to see if there are `panic:` messages or similar using `kubectl logs -p zoekt-webserver`.
- **Docker Compose:**
  - Determine if the pod was OOM killed using `docker inspect -f '{{json .State}}' zoekt-webserver` (look for `"OOMKilled":true`) and, if so, consider increasing the memory limit of the zoekt-webserver container in `docker-compose.yml`.
  - Check the logs before the container restarted to see if there are `panic:` messages or similar using `docker logs zoekt-webserver` (note this will include logs from the previous and currently running container).

This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=101300` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*Container cpu usage total (1m average) across all cores by instance
Refer to the [alerts reference](alerts#zoekt-container-cpu-usage) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=101301` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*Container memory usage by instance
Refer to the [alerts reference](alerts#zoekt-container-memory-usage) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=101302` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*Filesystem reads and writes rate by instance over 1h
This value indicates the number of filesystem read and write operations by containers of this service. When extremely high, this can indicate a resource usage problem, or can cause problems with the service itself, especially if high values or spikes correlate with zoekt-webserver issues. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=101303` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*Container cpu usage total (90th percentile over 1d) across all cores by instance
Refer to the [alerts reference](alerts#zoekt-provisioning-container-cpu-usage-long-term) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=101400` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*Container memory usage (1d maximum) by instance
Refer to the [alerts reference](alerts#zoekt-provisioning-container-memory-usage-long-term) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=101401` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*Container cpu usage total (5m maximum) across all cores by instance
Refer to the [alerts reference](alerts#zoekt-provisioning-container-cpu-usage-short-term) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=101410` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*Container memory usage (5m maximum) by instance
Refer to the [alerts reference](alerts#zoekt-provisioning-container-memory-usage-short-term) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=101411` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*Container OOMKILL events total by instance
This value indicates the total number of times the container main process or child processes were terminated by the OOM killer. When it occurs frequently, it is an indicator of underprovisioning. Refer to the [alerts reference](alerts#zoekt-container-oomkill-events-total) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=101412` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*
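If this counter is moving, a direct query over the cAdvisor OOM counter shows which containers are affected (a sketch; the `name` label is an assumption and varies by container runtime):

```bash
# Sketch: total OOM-kill events per container, via cAdvisor's counter.
# The "name" grouping label is an assumption; adjust for your deployment.
PROM=http://localhost:9090
curl -sG "$PROM/api/v1/query" \
  --data-urlencode 'query=sum by (name) (container_oom_events_total)'
```

Container cpu usage total (90th percentile over 1d) across all cores by instance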
Refer to the [alerts reference](alerts#zoekt-provisioning-container-cpu-usage-long-term) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=101500` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*Container memory usage (1d maximum) by instance
Refer to the [alerts reference](alerts#zoekt-provisioning-container-memory-usage-long-term) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=101501` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*Container cpu usage total (5m maximum) across all cores by instance
Refer to the [alerts reference](alerts#zoekt-provisioning-container-cpu-usage-short-term) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=101510` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*Container memory usage (5m maximum) by instance
Refer to the [alerts reference](alerts#zoekt-provisioning-container-memory-usage-short-term) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=101511` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*Container OOMKILL events total by instance
This value indicates the total number of times the container main process or child processes were terminated by the OOM killer. When it occurs frequently, it is an indicator of underprovisioning. Refer to the [alerts reference](alerts#zoekt-container-oomkill-events-total) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=101512` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*Percentage pods available
Refer to the [alerts reference](alerts#zoekt-pods-available-percentage) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=101600` on your Sourcegraph instance. *Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*Sourcegraph's all-in-one Prometheus and Alertmanager service.
To see this dashboard, visit `/-/debug/grafana/d/prometheus/prometheus` on your Sourcegraph instance.

### Prometheus: Metrics

#### prometheus: metrics_cardinality

Metrics with highest cardinalities
The 10 highest-cardinality metrics collected by this Prometheus instance. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/prometheus/prometheus?viewPanel=100000` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*
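The underlying computation is a plain `topk` over per-name series counts; as a sketch, you can reproduce it as an instant query against the Prometheus HTTP API:

```bash
# Sketch: list the 10 metric names with the most series (highest cardinality).
PROM=http://localhost:9090
curl -sG "$PROM/api/v1/query" \
  --data-urlencode 'query=topk(10, count by (__name__)({__name__!=""}))'
```

Samples scraped by job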
The number of samples scraped by this Prometheus instance after metric relabeling was applied. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/prometheus/prometheus?viewPanel=100001` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Average prometheus rule group evaluation duration over 10m by rule group
A high value here indicates Prometheus rule evaluation is taking longer than expected. It might indicate that certain rule groups are taking too long to evaluate, or Prometheus is underprovisioned. Rules that Sourcegraph ships with are grouped under `/sg_config_prometheus`. [Custom rules are grouped under `/sg_prometheus_addons`](https://sourcegraph.com/docs/admin/observability/metrics#prometheus-configuration). Refer to the [alerts reference](alerts#prometheus-prometheus-rule-eval-duration) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/prometheus/prometheus?viewPanel=100010` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Failed prometheus rule evaluations over 5m by rule group
Rules that Sourcegraph ships with are grouped under `/sg_config_prometheus`. [Custom rules are grouped under `/sg_prometheus_addons`](https://sourcegraph.com/docs/admin/observability/metrics#prometheus-configuration). Refer to the [alerts reference](alerts#prometheus-prometheus-rule-eval-failures) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/prometheus/prometheus?viewPanel=100011` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*
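To see which rule groups are failing, Prometheus's own `prometheus_rule_evaluation_failures_total` counter can be queried directly (a sketch, not necessarily the panel's exact expression):

```bash
# Sketch: failed rule evaluations per rule group over 5m.
PROM=http://localhost:9090
curl -sG "$PROM/api/v1/query" \
  --data-urlencode 'query=sum by (rule_group) (rate(prometheus_rule_evaluation_failures_total[5m]))'
```

Alertmanager notification latency over 1m by integration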
Refer to the [alerts reference](alerts#prometheus-alertmanager-notification-latency) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/prometheus/prometheus?viewPanel=100100` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Failed alertmanager notifications over 1m by integration
Refer to the [alerts reference](alerts#prometheus-alertmanager-notification-failures) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/prometheus/prometheus?viewPanel=100101` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Prometheus configuration reload status
A `1` indicates Prometheus reloaded its configuration successfully. Refer to the [alerts reference](alerts#prometheus-prometheus-config-status) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/prometheus/prometheus?viewPanel=100200` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Alertmanager configuration reload status
A `1` indicates Alertmanager reloaded its configuration successfully. Refer to the [alerts reference](alerts#prometheus-alertmanager-config-status) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/prometheus/prometheus?viewPanel=100201` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*
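Both reload-status panels are backed by standard self-monitoring gauges (which these panels most likely wrap), so the same check can be made with direct queries:

```promql
# Run each expression separately: 1 = last configuration reload succeeded,
# 0 = it failed.
prometheus_config_last_reload_successful
alertmanager_config_last_reload_successful
```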
Prometheus tsdb failures by operation over 1m by operation

Refer to the [alerts reference](alerts#prometheus-prometheus-tsdb-op-failure) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/prometheus/prometheus?viewPanel=100210` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Prometheus scrapes that exceed the sample limit over 10m
Refer to the [alerts reference](alerts#prometheus-prometheus-target-sample-exceeded) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/prometheus/prometheus?viewPanel=100211` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Prometheus scrapes rejected due to duplicate timestamps over 10m
Refer to the [alerts reference](alerts#prometheus-prometheus-target-sample-duplicate) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/prometheus/prometheus?viewPanel=100212` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Container missing
This value is the number of times a container has not been seen for more than one minute. If you observe this value change independent of deployment events (such as an upgrade), it could indicate pods are being OOM killed or terminated for some other reason.

- **Kubernetes:**
  - Determine if the pod was OOM killed using `kubectl describe pod prometheus` (look for `OOMKilled: true`) and, if so, consider increasing the memory limit in the relevant `Deployment.yaml`.
  - Check the logs before the container restarted to see if there are `panic:` messages or similar using `kubectl logs -p prometheus`.
- **Docker Compose:**
  - Determine if the container was OOM killed using `docker inspect -f '{{json .State}}' prometheus` (look for `"OOMKilled":true`) and, if so, consider increasing the memory limit of the prometheus container in `docker-compose.yml`.
  - Check the logs before the container restarted to see if there are `panic:` messages or similar using `docker logs prometheus` (note this will include logs from the previous and currently running container).

This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/prometheus/prometheus?viewPanel=100300` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Container cpu usage total (1m average) across all cores by instance
Refer to the [alerts reference](alerts#prometheus-container-cpu-usage) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/prometheus/prometheus?viewPanel=100301` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Container memory usage by instance
Refer to the [alerts reference](alerts#prometheus-container-memory-usage) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/prometheus/prometheus?viewPanel=100302` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Filesystem reads and writes rate by instance over 1h
This value indicates the number of filesystem read and write operations by containers of this service. When extremely high, this can indicate a resource usage problem, or can cause problems with the service itself, especially if high values or spikes correlate with `prometheus` issues. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/prometheus/prometheus?viewPanel=100303` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*
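The underlying data comes from cAdvisor counters; an equivalent ad-hoc query (a sketch; label names vary by orchestrator, `name` shown here is the Docker convention) looks like:

```promql
# Filesystem read and write operation rates for the prometheus container.
# Run each expression separately.
sum by (name) (rate(container_fs_reads_total{name=~".*prometheus.*"}[1h]))
sum by (name) (rate(container_fs_writes_total{name=~".*prometheus.*"}[1h]))
```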
Container cpu usage total (90th percentile over 1d) across all cores by instance

Refer to the [alerts reference](alerts#prometheus-provisioning-container-cpu-usage-long-term) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/prometheus/prometheus?viewPanel=100400` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Container memory usage (1d maximum) by instance
Refer to the [alerts reference](alerts#prometheus-provisioning-container-memory-usage-long-term) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/prometheus/prometheus?viewPanel=100401` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Container cpu usage total (5m maximum) across all cores by instance
Refer to the [alerts reference](alerts#prometheus-provisioning-container-cpu-usage-short-term) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/prometheus/prometheus?viewPanel=100410` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Container memory usage (5m maximum) by instance
Refer to the [alerts reference](alerts#prometheus-provisioning-container-memory-usage-short-term) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/prometheus/prometheus?viewPanel=100411` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Container OOMKILL events total by instance
This value indicates the total number of times the container main process or child processes were terminated by OOM killer. When it occurs frequently, it is an indicator of underprovisioning. Refer to the [alerts reference](alerts#prometheus-container-oomkill-events-total) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/prometheus/prometheus?viewPanel=100412` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Percentage pods available
Refer to the [alerts reference](alerts#prometheus-pods-available-percentage) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/prometheus/prometheus?viewPanel=100500` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*

## Executor

Executes jobs in an isolated environment.
To see this dashboard, visit `/-/debug/grafana/d/executor/executor` on your Sourcegraph instance.

### Executor: Executor: Executor jobs

#### executor: executor_queue_size

Unprocessed executor job queue size
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100000` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Unprocessed executor job queue growth rate over 30m
This value compares the rate of enqueues against the rate of finished jobs for the selected queue.

- A value less than 1 indicates that the process rate exceeds the enqueue rate
- A value equal to 1 indicates that the process rate matches the enqueue rate
- A value greater than 1 indicates that the process rate is lower than the enqueue rate

This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100001` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*
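As a sketch of the ratio described above, assuming hypothetical counter names `src_executor_enqueued_total` and `src_executor_processed_total` (the metric names on your instance may differ):

```promql
# Hypothetical metric names, for illustration only:
# > 1 means jobs are enqueued faster than they are finished (queue grows),
# < 1 means the queue is draining.
sum(increase(src_executor_enqueued_total{queue="batches"}[30m]))
/
sum(increase(src_executor_processed_total{queue="batches"}[30m]))
```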
Unprocessed executor job queue longest time in queue

This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100002` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Unprocessed executor job dequeue cache size for multiqueue executors
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100100` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Executor active handlers
Refer to the [alerts reference](alerts#executor-executor-handlers) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100200` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Executor operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100210` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate successful executor operation duration distribution over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100211` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Executor operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100212` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Executor operation error rate over 5m
Refer to the [alerts reference](alerts#executor-executor-processor-error-rate) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100213` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate client operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100300` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate successful client operation duration distribution over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100301` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate client operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100302` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate client operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100303` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Client operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100310` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*99th percentile successful client operation duration over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100311` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Client operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100312` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Client operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100313` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate client operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100400` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate successful client operation duration distribution over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100401` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate client operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100402` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate client operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100403` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Client operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100410` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*99th percentile successful client operation duration over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100411` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Client operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100412` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Client operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100413` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate command operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100500` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate successful command operation duration distribution over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100501` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate command operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100502` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate command operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100503` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Command operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100510` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*99th percentile successful command operation duration over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100511` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Command operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100512` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Command operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100513` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate command operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100600` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate successful command operation duration distribution over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100601` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate command operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100602` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate command operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100603` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Command operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100610` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*99th percentile successful command operation duration over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100611` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Command operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100612` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Command operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100613` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate command operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100700` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate successful command operation duration distribution over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100701` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate command operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100702` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate command operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100703` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Command operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100710` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*99th percentile successful command operation duration over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100711` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Command operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100712` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Command operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100713` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*CPU utilization (minus idle/iowait)
Indicates the amount of CPU time excluding idle and iowait time, divided by the number of cores, as a percentage. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100800` on your Sourcegraph instance.
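With node_exporter metrics, a query in this spirit (not necessarily the panel's exact expression) computes the same quantity:

```promql
# Fraction of total CPU capacity spent in non-idle, non-iowait modes, per host.
1 - avg by (instance) (
  sum by (instance, cpu) (rate(node_cpu_seconds_total{mode=~"idle|iowait"}[5m]))
)
```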
CPU saturation (time waiting)

Indicates the average summed time a number of (but strictly not all) non-idle processes spent waiting for CPU time. If this is higher than normal, then the CPU is underpowered for the workload and more powerful machines should be provisioned. This only represents a "less-than-all processes" time, because for processes to be waiting for CPU time there must be other process(es) consuming CPU time. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100801` on your Sourcegraph instance.
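This corresponds to the Linux PSI "some" figure for CPU; if node_exporter's pressure collector is enabled, it can be queried directly:

```promql
# Seconds per second that at least one runnable task waited for a CPU.
rate(node_pressure_cpu_waiting_seconds_total[5m])
```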
Memory utilization

Indicates the amount of available memory (including cache and buffers) as a percentage. Consistently high numbers are generally fine so long as memory saturation figures are within acceptable ranges; these figures may be more useful for informing executor provisioning decisions, such as increasing worker parallelism, down-sizing machines, etc. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100810` on your Sourcegraph instance.
Memory saturation (vmem efficiency)

Indicates the efficiency of page reclaim, calculated as pgsteal/pgscan. Optimal figures are short spikes of near 100% and above, indicating that a high ratio of scanned pages are actually being freed, or exactly 0%, indicating that pages aren't being scanned as there is no memory pressure. Sustained numbers >~100% may be a sign of imminent memory exhaustion, while sustained 0% < x < ~100% figures are very serious. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100811` on your Sourcegraph instance.
Memory saturation (fully stalled)

Indicates the amount of time all non-idle processes were stalled waiting on memory operations to complete. This is often correlated with vmem efficiency ratio when pressure on available memory is high. If they're not correlated, this could indicate issues with the machine hardware and/or configuration. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100812` on your Sourcegraph instance.
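The fully-stalled figure corresponds to the Linux PSI "full" metric for memory, also available via node_exporter's pressure collector:

```promql
# Seconds per second during which no non-idle task could make progress
# because all were stalled on memory.
rate(node_pressure_memory_stalled_seconds_total[5m])
```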
Disk IO utilization (percentage time spent in IO)

Indicates the percentage of time a disk was busy. If this is less than 100%, then the disk has spare utilization capacity. However, a value of 100% does not necessarily indicate the disk is at max capacity. For single, serial request-serving devices, 100% may indicate maximum saturation, but for SSDs and RAID arrays this is less likely to be the case, as they are capable of serving multiple requests in parallel; other metrics such as throughput and request queue size should be factored in. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100820` on your Sourcegraph instance.
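An equivalent ad-hoc node_exporter query for disk busy time (a sketch, not necessarily the panel's expression):

```promql
# Fraction of wall-clock time each device spent with I/O in flight.
rate(node_disk_io_time_seconds_total[5m])
```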
Disk IO saturation (avg IO queue size)

Indicates the number of outstanding/queued IO requests. High but short-lived queue sizes may not present an issue, but if they're consistently/often high and/or monotonically increasing, the disk may be failing or simply too slow for the amount of activity required. Consider replacing the drive(s) with SSDs if they are not already and/or replacing the faulty drive(s), if any. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100821` on your Sourcegraph instance.
Disk IO saturation (avg time of all processes stalled)

Indicates the averaged amount of time for which all non-idle processes were stalled waiting for IO to complete simultaneously, i.e. where no processes could make progress. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100822` on your Sourcegraph instance.
Network IO utilization (Rx)

Indicates the average summed receiving throughput of all network interfaces. This is often predominantly composed of the WAN/internet-connected interface, and knowing normal/good figures depends on knowing the bandwidth of the underlying hardware and the workloads. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100830` on your Sourcegraph instance.
Network IO saturation (Rx packets dropped)

Number of dropped received packets. This can happen if the receive queues/buffers become full due to slow packet processing throughput. The queues/buffers could be configured to be larger as a stop-gap but the processing application should be investigated as soon as possible. https://www.kernel.org/doc/html/latest/networking/statistics.html#:~:text=not%20otherwise%20counted.-,rx_dropped,-Number%20of%20packets This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100831` on your Sourcegraph instance.
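To check drop rates per interface directly with node_exporter:

```promql
# Received packets dropped per second, per network interface.
rate(node_network_receive_drop_total{device!="lo"}[5m])
```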
Network IO errors (Rx)

Number of bad/malformed packets received. https://www.kernel.org/doc/html/latest/networking/statistics.html#:~:text=excluding%20the%20FCS.-,rx_errors,-Total%20number%20of This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100832` on your Sourcegraph instance.
Network IO utilization (Tx)

Indicates the average summed transmitted throughput of all network interfaces. This is often predominantly composed of the WAN/internet-connected interface, and knowing normal/good figures depends on knowing the bandwidth of the underlying hardware and the workloads. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100840` on your Sourcegraph instance.
Network IO saturation (Tx packets dropped)

Number of dropped transmitted packets. This can happen if the receiving side's receive queues/buffers become full due to slow packet processing throughput, the network link is congested, etc. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100841` on your Sourcegraph instance.
Network IO errors (Tx)

Number of packet transmission errors. This is distinct from tx packet dropping, and can indicate a failing NIC, improperly configured network options anywhere along the line, signal noise, etc. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100842` on your Sourcegraph instance.CPU utilization (minus idle/iowait)
Indicates the amount of CPU time excluding idle and iowait time, divided by the number of cores, as a percentage. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100900` on your Sourcegraph instance.CPU saturation (time waiting)
Indicates the average summed time a number of (but strictly not all) non-idle processes spent waiting for CPU time. If this is higher than normal, then the CPU is underpowered for the workload and more powerful machines should be provisioned. This only represents a "less-than-all processes" time, because for processes to be waiting for CPU time there must be other process(es) consuming CPU time. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100901` on your Sourcegraph instance.Memory utilization
Indicates the amount of available memory (including cache and buffers) as a percentage. Consistently high numbers are generally fine so long as memory saturation figures are within acceptable ranges; these figures may be more useful for informing executor provisioning decisions, such as increasing worker parallelism, down-sizing machines, etc. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100910` on your Sourcegraph instance.Memory saturation (vmem efficiency)
Indicates the efficiency of page reclaim, calculated as pgsteal/pgscan. Optimal figures are short spikes of near 100% and above, indicating that a high ratio of scanned pages are actually being freed, or exactly 0%, indicating that pages aren't being scanned as there is no memory pressure. Sustained numbers >~100% may be a sign of imminent memory exhaustion, while sustained 0% < x < ~100% figures are very serious. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100911` on your Sourcegraph instance.Memory saturation (fully stalled)
Indicates the amount of time all non-idle processes were stalled waiting on memory operations to complete. This is often correlated with vmem efficiency ratio when pressure on available memory is high. If they're not correlated, this could indicate issues with the machine hardware and/or configuration. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100912` on your Sourcegraph instance.Disk IO utilization (percentage time spent in IO)
Indicates the percentage of time a disk was busy. If this is less than 100%, then the disk has spare utilization capacity. However, a value of 100% does not necessarily indicate the disk is at max capacity. For single, serial request-serving devices, 100% may indicate maximum saturation, but for SSDs and RAID arrays this is less likely to be the case, as they are capable of serving multiple requests in parallel; other metrics such as throughput and request queue size should be factored in. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100920` on your Sourcegraph instance.Disk IO saturation (avg IO queue size)
Indicates the number of outstanding/queued IO requests. High but short-lived queue sizes may not present an issue, but if they're consistently/often high and/or monotonically increasing, the disk may be failing or simply too slow for the amount of activity required. Consider replacing the drive(s) with SSDs if they are not already and/or replacing the faulty drive(s), if any. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100921` on your Sourcegraph instance.Disk IO saturation (avg time of all processes stalled)
Indicates the averaged amount of time for which all non-idle processes were stalled waiting for IO to complete simultaneously, i.e. where no processes could make progress. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100922` on your Sourcegraph instance.Network IO utilization (Rx)
Indicates the average summed receiving throughput of all network interfaces. This is often predominantly composed of the WAN/internet-connected interface, and knowing normal/good figures depends on knowing the bandwidth of the underlying hardware and the workloads. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100930` on your Sourcegraph instance.Network IO saturation (Rx packets dropped)
Number of dropped received packets. This can happen if the receive queues/buffers become full due to slow packet processing throughput. The queues/buffers could be configured to be larger as a stop-gap but the processing application should be investigated as soon as possible. https://www.kernel.org/doc/html/latest/networking/statistics.html#:~:text=not%20otherwise%20counted.-,rx_dropped,-Number%20of%20packets This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100931` on your Sourcegraph instance.Network IO errors (Rx)
Number of bad/malformed packets received. https://www.kernel.org/doc/html/latest/networking/statistics.html#:~:text=excluding%20the%20FCS.-,rx_errors,-Total%20number%20of This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100932` on your Sourcegraph instance.Network IO utilization (Tx)
Indicates the average summed transmitted throughput of all network interfaces. This is often predominantly composed of the WAN/internet-connected interface, and knowing normal/good figures depends on knowing the bandwidth of the underlying hardware and the workloads. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100940` on your Sourcegraph instance.Network IO saturation (Tx packets dropped)
Number of dropped transmitted packets. This can happen if the receiving side's receive queues/buffers become full due to slow packet processing throughput, the network link is congested, etc. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100941` on your Sourcegraph instance.Network IO errors (Tx)
Number of packet transmission errors. This is distinct from tx packet dropping, and can indicate a failing NIC, improperly configured network options anywhere along the line, signal noise etc. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100942` on your Sourcegraph instance.Maximum active goroutines
A high value here indicates a possible goroutine leak. Refer to the [alerts reference](alerts#executor-go-goroutines) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=101000` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*
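The raw counts come from the Go runtime's standard `go_goroutines` gauge, which can be queried directly (the `job` matcher below is illustrative, not the panel's exact selector):

```promql
# Current goroutine count per executor instance; a steady upward trend
# with no corresponding load suggests a leak.
max by (instance) (go_goroutines{job=~".*executor.*"})
```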
Maximum go garbage collection duration

Refer to the [alerts reference](alerts#executor-go-gc-duration-seconds) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=101001` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*

## Containers

Container usage and provisioning indicators of all services.
To see this dashboard, visit `/-/debug/grafana/d/containers/containers` on your Sourcegraph instance.

### Global Containers Resource Usage: Containers (not available on server)

#### containers: container_memory_usage

Container memory usage of all services
This value indicates the memory usage of all containers. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/containers/containers?viewPanel=100000` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Container cpu usage total (1m average) across all cores by instance
This value indicates the CPU usage of all containers. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/containers/containers?viewPanel=100010` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Container memory usage (5m maximum) of services that exceed 80% memory limit
Containers that exceed 80% memory limit. The value indicates potential underprovisioned resources. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/containers/containers?viewPanel=100100` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Container cpu usage total (5m maximum) across all cores of services that exceed 80% cpu limit
Containers that exceed 80% CPU limit. The value indicates potential underprovisioned resources. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/containers/containers?viewPanel=100110` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Container OOMKILL events total
This value indicates the total number of times the container main process or child processes were terminated by OOM killer. When it occurs frequently, it is an indicator of underprovisioning. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/containers/containers?viewPanel=100120` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Container missing
This value is the number of times a container has not been seen for more than one minute. If you observe this value change independent of deployment events (such as an upgrade), it could indicate pods are being OOM killed or terminated for some other reason. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/containers/containers?viewPanel=100130` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*

## Code Intelligence > Autoindexing

The service at `internal/codeintel/autoindexing`.
To see this dashboard, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing` on your Sourcegraph instance.

### Code Intelligence > Autoindexing: Codeintel: Autoindexing > Summary

#### codeintel-autoindexing:

Auto-index jobs inserted over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100000` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Auto-indexing job scheduler operation error rate over 10m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100001` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Unprocessed executor job queue size
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100010` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Unprocessed executor job queue growth rate over 30m
This value compares the rate of enqueues against the rate of finished jobs for the selected queue.

- A value less than 1 indicates that the process rate exceeds the enqueue rate
- A value equal to 1 indicates that the process rate matches the enqueue rate
- A value greater than 1 indicates that the process rate is lower than the enqueue rate

This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100011` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Unprocessed executor job queue longest time in queue
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100012` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate service operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100100` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate successful service operation duration distribution over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100101` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate service operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100102` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate service operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100103` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Service operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100110` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*99th percentile successful service operation duration over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100111` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Service operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100112` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Service operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100113` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate resolver operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100200` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate successful resolver operation duration distribution over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100201` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate resolver operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100202` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate resolver operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100203` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Resolver operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100210` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*99th percentile successful resolver operation duration over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100211` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Resolver operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100212` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Resolver operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100213` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate store operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100300` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate successful store operation duration distribution over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100301` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate store operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100302` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate store operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100303` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Store operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100310` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*99th percentile successful store operation duration over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100311` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Store operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100312` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Store operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100313` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate background operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100400` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate successful background operation duration distribution over 5m
None of the panels below have related alerts, and each is managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence). To see a panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=<ID>` on your Sourcegraph instance, substituting the panel ID from the table.

| **Panel** | **Description** | **Panel ID** |
|-----------|-----------------|--------------|
| Aggregate successful background operation duration distribution over 5m | | `100401` |
| Aggregate background operation errors every 5m | | `100402` |
| Aggregate background operation error rate over 5m | | `100403` |
| Background operations every 5m | | `100410` |
| 99th percentile successful background operation duration over 5m | | `100411` |
| Background operation errors every 5m | | `100412` |
| Background operation error rate over 5m | | `100413` |
| Aggregate service operations every 5m | | `100500` |
| Aggregate successful service operation duration distribution over 5m | | `100501` |
| Aggregate service operation errors every 5m | | `100502` |
| Aggregate service operation error rate over 5m | | `100503` |
| Service operations every 5m | | `100510` |
| 99th percentile successful service operation duration over 5m | | `100511` |
| Service operation errors every 5m | | `100512` |
| Service operation error rate over 5m | | `100513` |
| Aggregate service operations every 5m | | `100600` |
| Aggregate successful service operation duration distribution over 5m | | `100601` |
| Aggregate service operation errors every 5m | | `100602` |
| Aggregate service operation error rate over 5m | | `100603` |
| Service operations every 5m | | `100610` |
| 99th percentile successful service operation duration over 5m | | `100611` |
| Service operation errors every 5m | | `100612` |
| Service operation error rate over 5m | | `100613` |
| Records scanned every 5m | The number of candidate records considered for cleanup. | `100700` |
| Records altered every 5m | The number of candidate records altered as part of cleanup. | `100701` |
| Job invocation operations every 5m | | `100710` |
| 99th percentile successful job invocation operation duration over 5m | | `100711` |
| Job invocation operation errors every 5m | | `100712` |
| Job invocation operation error rate over 5m | | `100713` |
| Records scanned every 5m | The number of candidate records considered for cleanup. | `100800` |
| Records altered every 5m | The number of candidate records altered as part of cleanup. | `100801` |
| Job invocation operations every 5m | | `100810` |
| 99th percentile successful job invocation operation duration over 5m | | `100811` |
| Job invocation operation errors every 5m | | `100812` |
| Job invocation operation error rate over 5m | | `100813` |
| Records scanned every 5m | The number of candidate records considered for cleanup. | `100900` |
| Records altered every 5m | The number of candidate records altered as part of cleanup. | `100901` |
| Job invocation operations every 5m | | `100910` |
| 99th percentile successful job invocation operation duration over 5m | | `100911` |
| Job invocation operation errors every 5m | | `100912` |
| Job invocation operation error rate over 5m | | `100913` |

The service at `internal/codeintel/codenav`.

To see this dashboard, visit `/-/debug/grafana/d/codeintel-codenav/codeintel-codenav` on your Sourcegraph instance.

### Code Intelligence > Code Nav: Codeintel: CodeNav > Service

#### codeintel-codenav: codeintel_codenav_total
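The panels in the table below follow a standard operations/duration/errors/error-rate pattern driven by Prometheus counters and histograms. As a rough illustration of how an "error rate over 5m" value is derived, here is a minimal sketch that runs a ratio-of-counters query against the Prometheus HTTP API; the Prometheus address and the metric names `src_codeintel_codenav_total` and `src_codeintel_codenav_errors_total` are illustrative assumptions, not the dashboard's exact query.

```python
# Minimal sketch: reproduce an "error rate over 5m" panel with an instant
# PromQL query. The metric names below are illustrative assumptions, not the
# exact series the dashboard uses.
import requests

PROM_URL = "http://localhost:9090"  # assumed Prometheus address


def instant_query(expr: str) -> list:
    """Run an instant query against the Prometheus HTTP API."""
    resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": expr}, timeout=10)
    resp.raise_for_status()
    return resp.json()["data"]["result"]


# Error rate = errors observed in the last 5m divided by all operations
# observed in the last 5m.
EXPR = (
    "sum(increase(src_codeintel_codenav_errors_total[5m]))"
    " / sum(increase(src_codeintel_codenav_total[5m]))"
)

for series in instant_query(EXPR):
    _ts, value = series["value"]
    print(f"error rate over 5m: {float(value):.4f}")
```

The aggregate and per-operation error-rate panels share this query shape; the per-operation variants presumably group by an operation label rather than summing across it.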
None of the panels below have related alerts, and each is managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence). To see a panel, visit `/-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=<ID>` on your Sourcegraph instance, substituting the panel ID from the table.

| **Panel** | **Description** | **Panel ID** |
|-----------|-----------------|--------------|
| Aggregate service operations every 5m | | `100000` |
| Aggregate successful service operation duration distribution over 5m | | `100001` |
| Aggregate service operation errors every 5m | | `100002` |
| Aggregate service operation error rate over 5m | | `100003` |
| Service operations every 5m | | `100010` |
| 99th percentile successful service operation duration over 5m | | `100011` |
| Service operation errors every 5m | | `100012` |
| Service operation error rate over 5m | | `100013` |
| Aggregate store operations every 5m | | `100100` |
| Aggregate successful store operation duration distribution over 5m | | `100101` |
| Aggregate store operation errors every 5m | | `100102` |
| Aggregate store operation error rate over 5m | | `100103` |
| Store operations every 5m | | `100110` |
| 99th percentile successful store operation duration over 5m | | `100111` |
| Store operation errors every 5m | | `100112` |
| Store operation error rate over 5m | | `100113` |
| Aggregate resolver operations every 5m | | `100200` |
| Aggregate successful resolver operation duration distribution over 5m | | `100201` |
| Aggregate resolver operation errors every 5m | | `100202` |
| Aggregate resolver operation error rate over 5m | | `100203` |
| Resolver operations every 5m | | `100210` |
| 99th percentile successful resolver operation duration over 5m | | `100211` |
| Resolver operation errors every 5m | | `100212` |
| Resolver operation error rate over 5m | | `100213` |
| Aggregate store operations every 5m | | `100300` |
| Aggregate successful store operation duration distribution over 5m | | `100301` |
| Aggregate store operation errors every 5m | | `100302` |
| Aggregate store operation error rate over 5m | | `100303` |
| Store operations every 5m | | `100310` |
| 99th percentile successful store operation duration over 5m | | `100311` |
| Store operation errors every 5m | | `100312` |
| Store operation error rate over 5m | | `100313` |

The service at `internal/codeintel/policies`.

To see this dashboard, visit `/-/debug/grafana/d/codeintel-policies/codeintel-policies` on your Sourcegraph instance.

### Code Intelligence > Policies: Codeintel: Policies > Service

#### codeintel-policies: codeintel_policies_total
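The "99th percentile successful ... duration" panels in the table below are derived from Prometheus duration histograms. Here is a minimal sketch of the standard `histogram_quantile` recipe, assuming a bucket metric named `src_codeintel_policies_duration_seconds_bucket` (an illustrative name, not the panel's exact series):

```python
# Minimal sketch: the p99 recipe behind "99th percentile successful operation
# duration over 5m" panels. The bucket metric name is an assumption.
import requests

PROM_URL = "http://localhost:9090"  # assumed Prometheus address

# histogram_quantile interpolates the 99th percentile from per-bucket rates;
# summing by `le` first combines all series into one histogram.
P99 = (
    "histogram_quantile(0.99, "
    "sum by (le)(rate(src_codeintel_policies_duration_seconds_bucket[5m])))"
)

resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": P99}, timeout=10)
resp.raise_for_status()
# Result shape: [{"metric": {}, "value": [<timestamp>, "<seconds>"]}] when data exists.
print(resp.json()["data"]["result"])
```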
None of the panels below have related alerts, and each is managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence). To see a panel, visit `/-/debug/grafana/d/codeintel-policies/codeintel-policies?viewPanel=<ID>` on your Sourcegraph instance, substituting the panel ID from the table.

| **Panel** | **Description** | **Panel ID** |
|-----------|-----------------|--------------|
| Aggregate service operations every 5m | | `100000` |
| Aggregate successful service operation duration distribution over 5m | | `100001` |
| Aggregate service operation errors every 5m | | `100002` |
| Aggregate service operation error rate over 5m | | `100003` |
| Service operations every 5m | | `100010` |
| 99th percentile successful service operation duration over 5m | | `100011` |
| Service operation errors every 5m | | `100012` |
| Service operation error rate over 5m | | `100013` |
| Aggregate store operations every 5m | | `100100` |
| Aggregate successful store operation duration distribution over 5m | | `100101` |
| Aggregate store operation errors every 5m | | `100102` |
| Aggregate store operation error rate over 5m | | `100103` |
| Store operations every 5m | | `100110` |
| 99th percentile successful store operation duration over 5m | | `100111` |
| Store operation errors every 5m | | `100112` |
| Store operation error rate over 5m | | `100113` |
| Aggregate resolver operations every 5m | | `100200` |
| Aggregate successful resolver operation duration distribution over 5m | | `100201` |
| Aggregate resolver operation errors every 5m | | `100202` |
| Aggregate resolver operation error rate over 5m | | `100203` |
| Resolver operations every 5m | | `100210` |
| 99th percentile successful resolver operation duration over 5m | | `100211` |
| Resolver operation errors every 5m | | `100212` |
| Resolver operation error rate over 5m | | `100213` |
| Lsif repository pattern matcher repositories pattern matcher every 5m | Number of configuration policies whose repository membership list was updated. | `100300` |

The service at `internal/codeintel/ranking`.

To see this dashboard, visit `/-/debug/grafana/d/codeintel-ranking/codeintel-ranking` on your Sourcegraph instance.

### Code Intelligence > Ranking: Codeintel: Ranking > Service

#### codeintel-ranking: codeintel_ranking_total
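Many of the ranking panels below come in scanned (or processed) and altered pairs for the background cleanup jobs; a useful derived signal is the fraction of scanned records that are actually altered. Here is a minimal sketch that plots that ratio over the last hour with the Prometheus range-query API; the metric names are illustrative assumptions, not the panels' exact series:

```python
# Minimal sketch: chart cleanup churn (records altered / records scanned) over
# the last hour via the Prometheus range-query API. Metric names are
# illustrative assumptions.
import time

import requests

PROM_URL = "http://localhost:9090"  # assumed Prometheus address

EXPR = (
    "sum(increase(src_codeintel_ranking_records_altered_total[5m]))"
    " / sum(increase(src_codeintel_ranking_records_scanned_total[5m]))"
)

end = time.time()
resp = requests.get(
    f"{PROM_URL}/api/v1/query_range",
    params={"query": EXPR, "start": end - 3600, "end": end, "step": "5m"},
    timeout=10,
)
resp.raise_for_status()
for series in resp.json()["data"]["result"]:
    for ts, value in series["values"]:
        stamp = time.strftime("%H:%M", time.localtime(ts))
        print(f"{stamp}  churn={float(value):.3f}")
```

A churn ratio near zero means the cleanup jobs are scanning many candidates but altering few, which is usually the steady state; a sustained jump may be worth correlating with the corresponding "Records altered" panel.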
None of the panels below have related alerts, and each is managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence). To see a panel, visit `/-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=<ID>` on your Sourcegraph instance, substituting the panel ID from the table.

| **Panel** | **Description** | **Panel ID** |
|-----------|-----------------|--------------|
| Aggregate service operations every 5m | | `100000` |
| Aggregate successful service operation duration distribution over 5m | | `100001` |
| Aggregate service operation errors every 5m | | `100002` |
| Aggregate service operation error rate over 5m | | `100003` |
| Service operations every 5m | | `100010` |
| 99th percentile successful service operation duration over 5m | | `100011` |
| Service operation errors every 5m | | `100012` |
| Service operation error rate over 5m | | `100013` |
| Aggregate store operations every 5m | | `100100` |
| Aggregate successful store operation duration distribution over 5m | | `100101` |
| Aggregate store operation errors every 5m | | `100102` |
| Aggregate store operation error rate over 5m | | `100103` |
| Store operations every 5m | | `100110` |
| 99th percentile successful store operation duration over 5m | | `100111` |
| Store operation errors every 5m | | `100112` |
| Store operation error rate over 5m | | `100113` |
| Aggregate store operations every 5m | | `100200` |
| Aggregate successful store operation duration distribution over 5m | | `100201` |
| Aggregate store operation errors every 5m | | `100202` |
| Aggregate store operation error rate over 5m | | `100203` |
| Store operations every 5m | | `100210` |
| 99th percentile successful store operation duration over 5m | | `100211` |
| Store operation errors every 5m | | `100212` |
| Store operation error rate over 5m | | `100213` |
| Records processed every 5m | The number of candidate records considered for cleanup. | `100300` |
| Records altered every 5m | The number of candidate records altered as part of cleanup. | `100301` |
| Job invocation operations every 5m | | `100310` |
| 99th percentile successful job invocation operation duration over 5m | | `100311` |
| Job invocation operation errors every 5m | | `100312` |
| Job invocation operation error rate over 5m | | `100313` |
| Records processed every 5m | The number of candidate records considered for cleanup. | `100400` |
| Records altered every 5m | The number of candidate records altered as part of cleanup. | `100401` |
| Job invocation operations every 5m | | `100410` |
| 99th percentile successful job invocation operation duration over 5m | | `100411` |
| Job invocation operation errors every 5m | | `100412` |
| Job invocation operation error rate over 5m | | `100413` |
| Records processed every 5m | The number of candidate records considered for cleanup. | `100500` |
| Records altered every 5m | The number of candidate records altered as part of cleanup. | `100501` |
| Job invocation operations every 5m | | `100510` |
| 99th percentile successful job invocation operation duration over 5m | | `100511` |
| Job invocation operation errors every 5m | | `100512` |
| Job invocation operation error rate over 5m | | `100513` |
| Records processed every 5m | The number of candidate records considered for cleanup. | `100600` |
| Records altered every 5m | The number of candidate records altered as part of cleanup. | `100601` |
| Job invocation operations every 5m | | `100610` |
| 99th percentile successful job invocation operation duration over 5m | | `100611` |
| Job invocation operation errors every 5m | | `100612` |
| Job invocation operation error rate over 5m | | `100613` |
| Records scanned every 5m | The number of candidate records considered for cleanup. | `100700` |
| Records altered every 5m | The number of candidate records altered as part of cleanup. | `100701` |
| Job invocation operations every 5m | | `100710` |
| 99th percentile successful job invocation operation duration over 5m | | `100711` |
| Job invocation operation errors every 5m | | `100712` |
| Job invocation operation error rate over 5m | | `100713` |
| Records scanned every 5m | The number of candidate records considered for cleanup. | `100800` |
| Records altered every 5m | The number of candidate records altered as part of cleanup. | `100801` |
| Job invocation operations every 5m | | `100810` |
| 99th percentile successful job invocation operation duration over 5m | | `100811` |
| Job invocation operation errors every 5m | | `100812` |
| Job invocation operation error rate over 5m | | `100813` |
| Records scanned every 5m | The number of candidate records considered for cleanup. | `100900` |
| Records altered every 5m | The number of candidate records altered as part of cleanup. | `100901` |
| Job invocation operations every 5m | | `100910` |
| 99th percentile successful job invocation operation duration over 5m | | `100911` |
| Job invocation operation errors every 5m | | `100912` |
| Job invocation operation error rate over 5m | | `100913` |
| Records scanned every 5m | The number of candidate records considered for cleanup. | `101000` |
| Records altered every 5m | The number of candidate records altered as part of cleanup. | `101001` |
| Job invocation operations every 5m | | `101010` |
| 99th percentile successful job invocation operation duration over 5m | | `101011` |
| Job invocation operation errors every 5m | | `101012` |
| Job invocation operation error rate over 5m | | `101013` |
| Records scanned every 5m | The number of candidate records considered for cleanup. | `101100` |
| Records altered every 5m | The number of candidate records altered as part of cleanup. | `101101` |
| Job invocation operations every 5m | | `101110` |
| 99th percentile successful job invocation operation duration over 5m | | `101111` |
| Job invocation operation errors every 5m | | `101112` |
| Job invocation operation error rate over 5m | | `101113` |
| Records scanned every 5m | The number of candidate records considered for cleanup. | `101200` |
| Records altered every 5m | The number of candidate records altered as part of cleanup. | `101201` |
| Job invocation operations every 5m | | `101210` |
| 99th percentile successful job invocation operation duration over 5m | | `101211` |

**Job invocation operation errors every 5m**
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=101212` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Job invocation operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=101213` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Records scanned every 5m
The number of candidate records considered for cleanup. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=101300` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Records altered every 5m
The number of candidate records altered as part of cleanup. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=101301` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Job invocation operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=101310` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*99th percentile successful job invocation operation duration over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=101311` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Job invocation operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=101312` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Job invocation operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=101313` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*The service at `internal/codeintel/uploads`.
To see this dashboard, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads` on your Sourcegraph instance.

### Code Intelligence > Uploads: Codeintel: Uploads > Service

#### codeintel-uploads: codeintel_uploads_total

Aggregate service operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100000` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate successful service operation duration distribution over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100001` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate service operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100002` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate service operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100003` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Service operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100010` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*99th percentile successful service operation duration over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100011` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Service operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100012` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Service operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100013` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate store operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100100` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate successful store operation duration distribution over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100101` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate store operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100102` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate store operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100103` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Store operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100110` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*99th percentile successful store operation duration over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100111` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Store operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100112` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Store operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100113` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate resolver operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100200` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate successful resolver operation duration distribution over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100201` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate resolver operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100202` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate resolver operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100203` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Resolver operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100210` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*99th percentile successful resolver operation duration over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100211` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Resolver operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100212` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Resolver operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100213` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate http handler operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100300` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate successful http handler operation duration distribution over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100301` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate http handler operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100302` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Aggregate http handler operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100303` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Http handler operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100310` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*99th percentile successful http handler operation duration over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100311` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Http handler operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100312` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Http handler operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100313` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Repository queue size
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100400` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Repository queue growth rate over 30m
This value compares the rate of enqueues against the rate of finished jobs; a short sketch of how to read it follows below.

- A value < 1 indicates that the process rate > enqueue rate
- A value = 1 indicates that the process rate = enqueue rate
- A value > 1 indicates that the process rate < enqueue rate

This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100401` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*
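A minimal Go sketch of that interpretation, assuming the panel's value is the enqueue rate divided by the processing (finish) rate over the same window; `interpretQueueGrowth` is an illustrative helper, not part of Sourcegraph:

```go
package main

import "fmt"

// interpretQueueGrowth classifies the growth-rate value shown in the
// panel above, assuming it is the enqueue rate divided by the
// processing (finish) rate over the same window. Illustrative only;
// this helper does not exist in Sourcegraph's codebase.
func interpretQueueGrowth(enqueueRate, processRate float64) string {
	ratio := enqueueRate / processRate
	switch {
	case ratio < 1:
		return fmt.Sprintf("ratio %.2f: queue is draining (process rate > enqueue rate)", ratio)
	case ratio > 1:
		return fmt.Sprintf("ratio %.2f: queue is growing (process rate < enqueue rate)", ratio)
	default:
		return fmt.Sprintf("ratio %.2f: queue size is holding steady", ratio)
	}
}

func main() {
	fmt.Println(interpretQueueGrowth(50, 100)) // < 1: draining
	fmt.Println(interpretQueueGrowth(100, 50)) // > 1: growing
}
```

Repository queue longest time in queue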
Refer to the [alerts reference](alerts#codeintel-uploads-codeintel-commit-graph-queued-max-age) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100402` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Lsif upload repository scan repositories scanned every 5m
Number of repositories scanned for data retention. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100500` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Lsif upload records scan records scanned every 5m
Number of codeintel upload records scanned for data retention. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100501` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Lsif upload commits scanned commits scanned every 5m
Number of commits reachable from a codeintel upload record scanned for data retention. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100502` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Lsif upload records expired uploads scanned every 5m
Number of codeintel upload records marked as expired. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100503` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Records scanned every 5m
The number of candidate records considered for cleanup. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100600` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Records altered every 5m
The number of candidate records altered as part of cleanup. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100601` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Job invocation operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100610` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*99th percentile successful job invocation operation duration over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100611` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Job invocation operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100612` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Job invocation operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100613` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Records scanned every 5m
The number of candidate records considered for cleanup. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100700` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Records altered every 5m
The number of candidate records altered as part of cleanup. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100701` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Job invocation operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100710` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*99th percentile successful job invocation operation duration over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100711` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Job invocation operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100712` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Job invocation operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100713` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Records scanned every 5m
The number of candidate records considered for cleanup. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100800` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Records altered every 5m
The number of candidate records altered as part of cleanup. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100801` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Job invocation operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100810` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*99th percentile successful job invocation operation duration over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100811` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Job invocation operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100812` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Job invocation operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100813` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Records scanned every 5m
The number of candidate records considered for cleanup. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100900` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Records altered every 5m
The number of candidate records altered as part of cleanup. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100901` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Job invocation operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100910` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*99th percentile successful job invocation operation duration over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100911` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Job invocation operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100912` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Job invocation operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100913` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Records scanned every 5m
The number of candidate records considered for cleanup. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101000` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Records altered every 5m
The number of candidate records altered as part of cleanup. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101001` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Job invocation operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101010` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*99th percentile successful job invocation operation duration over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101011` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Job invocation operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101012` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Job invocation operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101013` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Records scanned every 5m
The number of candidate records considered for cleanup. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101100` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Records altered every 5m
The number of candidate records altered as part of cleanup. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101101` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Job invocation operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101110` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*99th percentile successful job invocation operation duration over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101111` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Job invocation operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101112` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Job invocation operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101113` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Records scanned every 5m
The number of candidate records considered for cleanup. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101200` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Records altered every 5m
The number of candidate records altered as part of cleanup. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101201` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Job invocation operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101210` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*99th percentile successful job invocation operation duration over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101211` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Job invocation operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101212` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Job invocation operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101213` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Records scanned every 5m
The number of candidate records considered for cleanup. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101300` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Records altered every 5m
The number of candidate records altered as part of cleanup. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101301` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Job invocation operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101310` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*99th percentile successful job invocation operation duration over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101311` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Job invocation operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101312` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Job invocation operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101313` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Records scanned every 5m
The number of candidate records considered for cleanup. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101400` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Records altered every 5m
The number of candidate records altered as part of cleanup. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101401` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Job invocation operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101410` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*99th percentile successful job invocation operation duration over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101411` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Job invocation operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101412` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Job invocation operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101413` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Records scanned every 5m
The number of candidate records considered for cleanup. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101500` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Records altered every 5m
The number of candidate records altered as part of cleanup. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101501` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Job invocation operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101510` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*99th percentile successful job invocation operation duration over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101511` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Job invocation operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101512` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Job invocation operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101513` on your Sourcegraph instance. *Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*Monitoring telemetry services in Sourcegraph.
To see this dashboard, visit `/-/debug/grafana/d/telemetry/telemetry` on your Sourcegraph instance.

### Telemetry: Telemetry Gateway Exporter: Export and queue metrics

#### telemetry: telemetry_gateway_exporter_queue_size

Telemetry event payloads pending export
The number of events queued to be exported. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/telemetry/telemetry?viewPanel=100000` on your Sourcegraph instance. *Managed by the [Sourcegraph Data & Analytics team](https://handbook.sourcegraph.com/departments/engineering/teams/data-analytics).*Rate of growth of export queue over 30m
A positive value indicates the queue is growing. Refer to the [alerts reference](alerts#telemetry-telemetry-gateway-exporter-queue-growth) for 2 alerts related to this panel. To see this panel, visit `/-/debug/grafana/d/telemetry/telemetry?viewPanel=100001` on your Sourcegraph instance. *Managed by the [Sourcegraph Data & Analytics team](https://handbook.sourcegraph.com/departments/engineering/teams/data-analytics).*Events exported from queue per hour
The number of events being exported. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/telemetry/telemetry?viewPanel=100010` on your Sourcegraph instance. *Managed by the [Sourcegraph Data & Analytics team](https://handbook.sourcegraph.com/departments/engineering/teams/data-analytics).*Number of events exported per batch over 30m
The number of events exported in each batch. The largest bucket is the maximum number of events exported per batch. If the distribution trends to the maximum bucket, then event export throughput is at or approaching saturation - try increasing `TELEMETRY_GATEWAY_EXPORTER_EXPORT_BATCH_SIZE` or decreasing `TELEMETRY_GATEWAY_EXPORTER_EXPORT_INTERVAL` (see the sketch below). This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/telemetry/telemetry?viewPanel=100011` on your Sourcegraph instance. *Managed by the [Sourcegraph Data & Analytics team](https://handbook.sourcegraph.com/departments/engineering/teams/data-analytics).*
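As a rough guide to that tuning, the two settings bound the exporter at one batch per interval, so the theoretical ceiling is batch size divided by interval. A hedged Go sketch of the trade-off, using placeholder values rather than documented defaults:

```go
package main

import (
	"fmt"
	"time"
)

// maxExportThroughput models the ceiling implied by the two settings
// named above: at most one full batch per export interval.
// Illustrative only; the values below are placeholders, not
// documented defaults.
func maxExportThroughput(batchSize int, interval time.Duration) float64 {
	return float64(batchSize) / interval.Seconds() // events per second
}

func main() {
	fmt.Printf("ceiling: %.1f events/s\n", maxExportThroughput(256, 10*time.Second))
	// A larger batch size or a shorter interval raises the ceiling:
	fmt.Printf("ceiling: %.1f events/s\n", maxExportThroughput(512, 5*time.Second))
}
```

Events exporter operations every 30m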
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/telemetry/telemetry?viewPanel=100100` on your Sourcegraph instance. *Managed by the [Sourcegraph Data & Analytics team](https://handbook.sourcegraph.com/departments/engineering/teams/data-analytics).*Aggregate successful events exporter operation duration distribution over 30m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/telemetry/telemetry?viewPanel=100101` on your Sourcegraph instance. *Managed by the [Sourcegraph Data & Analytics team](https://handbook.sourcegraph.com/departments/engineering/teams/data-analytics).*Events exporter operation errors every 30m
Refer to the [alerts reference](alerts#telemetry-telemetrygatewayexporter-exporter-errors-total) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/telemetry/telemetry?viewPanel=100102` on your Sourcegraph instance. *Managed by the [Sourcegraph Data & Analytics team](https://handbook.sourcegraph.com/departments/engineering/teams/data-analytics).*Events exporter operation error rate over 30m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/telemetry/telemetry?viewPanel=100103` on your Sourcegraph instance. *Managed by the [Sourcegraph Data & Analytics team](https://handbook.sourcegraph.com/departments/engineering/teams/data-analytics).*Export queue cleanup operations every 30m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/telemetry/telemetry?viewPanel=100200` on your Sourcegraph instance. *Managed by the [Sourcegraph Data & Analytics team](https://handbook.sourcegraph.com/departments/engineering/teams/data-analytics).*Aggregate successful export queue cleanup operation duration distribution over 30m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/telemetry/telemetry?viewPanel=100201` on your Sourcegraph instance. *Managed by the [Sourcegraph Data & Analytics team](https://handbook.sourcegraph.com/departments/engineering/teams/data-analytics).*Export queue cleanup operation errors every 30m
Refer to the [alerts reference](alerts#telemetry-telemetrygatewayexporter-queue-cleanup-errors-total) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/telemetry/telemetry?viewPanel=100202` on your Sourcegraph instance. *Managed by the [Sourcegraph Data & Analytics team](https://handbook.sourcegraph.com/departments/engineering/teams/data-analytics).*Export queue cleanup operation error rate over 30m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/telemetry/telemetry?viewPanel=100203` on your Sourcegraph instance. *Managed by the [Sourcegraph Data & Analytics team](https://handbook.sourcegraph.com/departments/engineering/teams/data-analytics).*Export backlog metrics reporting operations every 30m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/telemetry/telemetry?viewPanel=100300` on your Sourcegraph instance. *Managed by the [Sourcegraph Data & Analytics team](https://handbook.sourcegraph.com/departments/engineering/teams/data-analytics).*Aggregate successful export backlog metrics reporting operation duration distribution over 30m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/telemetry/telemetry?viewPanel=100301` on your Sourcegraph instance. *Managed by the [Sourcegraph Data & Analytics team](https://handbook.sourcegraph.com/departments/engineering/teams/data-analytics).*Export backlog metrics reporting operation errors every 30m
Refer to the [alerts reference](alerts#telemetry-telemetrygatewayexporter-queue-metrics-reporter-errors-total) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/telemetry/telemetry?viewPanel=100302` on your Sourcegraph instance. *Managed by the [Sourcegraph Data & Analytics team](https://handbook.sourcegraph.com/departments/engineering/teams/data-analytics).*Export backlog metrics reporting operation error rate over 30m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/telemetry/telemetry?viewPanel=100303` on your Sourcegraph instance. *Managed by the [Sourcegraph Data & Analytics team](https://handbook.sourcegraph.com/departments/engineering/teams/data-analytics).*Aggregate usage data exporter operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/telemetry/telemetry?viewPanel=100400` on your Sourcegraph instance. *Managed by the [Sourcegraph Data & Analytics team](https://handbook.sourcegraph.com/departments/engineering/teams/data-analytics).*Aggregate successful usage data exporter operation duration distribution over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/telemetry/telemetry?viewPanel=100401` on your Sourcegraph instance. *Managed by the [Sourcegraph Data & Analytics team](https://handbook.sourcegraph.com/departments/engineering/teams/data-analytics).*Aggregate usage data exporter operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/telemetry/telemetry?viewPanel=100402` on your Sourcegraph instance. *Managed by the [Sourcegraph Data & Analytics team](https://handbook.sourcegraph.com/departments/engineering/teams/data-analytics).*Aggregate usage data exporter operation error rate over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/telemetry/telemetry?viewPanel=100403` on your Sourcegraph instance. *Managed by the [Sourcegraph Data & Analytics team](https://handbook.sourcegraph.com/departments/engineering/teams/data-analytics).*Usage data exporter operations every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/telemetry/telemetry?viewPanel=100410` on your Sourcegraph instance. *Managed by the [Sourcegraph Data & Analytics team](https://handbook.sourcegraph.com/departments/engineering/teams/data-analytics).*99th percentile successful usage data exporter operation duration over 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/telemetry/telemetry?viewPanel=100411` on your Sourcegraph instance. *Managed by the [Sourcegraph Data & Analytics team](https://handbook.sourcegraph.com/departments/engineering/teams/data-analytics).*Usage data exporter operation errors every 5m
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/telemetry/telemetry?viewPanel=100412` on your Sourcegraph instance. *Managed by the [Sourcegraph Data & Analytics team](https://handbook.sourcegraph.com/departments/engineering/teams/data-analytics).*Usage data exporter operation error rate over 5m
Refer to the [alerts reference](alerts#telemetry-telemetry-job-error-rate) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/telemetry/telemetry?viewPanel=100413` on your Sourcegraph instance. *Managed by the [Sourcegraph Data & Analytics team](https://handbook.sourcegraph.com/departments/engineering/teams/data-analytics).*Event level usage data queue size
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/telemetry/telemetry?viewPanel=100500` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Event level usage data queue growth rate over 30m
This value compares the rate of enqueues against the rate of finished jobs.

- A value < 1 indicates that the process rate > enqueue rate
- A value = 1 indicates that the process rate = enqueue rate
- A value > 1 indicates that the process rate < enqueue rate

This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/telemetry/telemetry?viewPanel=100501` on your Sourcegraph instance. *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*Utilized percentage of maximum throughput
Refer to the [alerts reference](alerts#telemetry-telemetry-job-utilized-throughput) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/telemetry/telemetry?viewPanel=100600` on your Sourcegraph instance. *Managed by the [Sourcegraph Data & Analytics team](https://handbook.sourcegraph.com/departments/engineering/teams/data-analytics).*The OpenTelemetry collector ingests OpenTelemetry data from Sourcegraph and exports it to the configured backends.
To see this dashboard, visit `/-/debug/grafana/d/otel-collector/otel-collector` on your Sourcegraph instance.

### OpenTelemetry Collector: Receivers

#### otel-collector: otel_span_receive_rate

Spans received per receiver per minute
Shows the rate of spans accepted by the configured receiver. A Trace is a collection of spans, and a span represents a unit of work or operation; spans are the building blocks of Traces. The spans have only been accepted by the receiver, which means they still have to move through the configured pipeline to be exported. For more information on tracing and configuration of an OpenTelemetry receiver see https://opentelemetry.io/docs/collector/configuration/#receivers. See the Exporters section for spans that have made it through the pipeline and been exported. Depending on the configured processors, received spans might be dropped and not exported. For more information on configuring processors see https://opentelemetry.io/docs/collector/configuration/#processors. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/otel-collector/otel-collector?viewPanel=100000` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*
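For readers new to tracing, here is a minimal Go sketch of producing the kind of spans this panel counts, using the OpenTelemetry Go API (`go.opentelemetry.io/otel`); the tracer and span names are illustrative, and without a configured SDK exporter these spans are no-ops rather than data sent to a collector receiver:

```go
package main

import (
	"context"

	"go.opentelemetry.io/otel"
)

func main() {
	// Without a registered SDK TracerProvider this tracer is a no-op;
	// a real service would configure one that exports over OTLP to a
	// collector receiver. Names here are illustrative.
	tracer := otel.Tracer("example")

	ctx, parent := tracer.Start(context.Background(), "handle-request") // root span of the trace
	defer parent.End()

	_, child := tracer.Start(ctx, "query-database") // child span: one unit of work within the trace
	child.End()
}
```

Spans refused per receiver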
Refer to the [alerts reference](alerts#otel-collector-otel-span-refused) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/otel-collector/otel-collector?viewPanel=100001` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Spans exported per exporter per minute
Shows the rate of spans being sent by the exporter. A Trace is a collection of spans, and a Span represents a unit of work or operation; spans are the building blocks of Traces. The rate of spans here indicates spans that have made it through the configured pipeline and have been sent to the configured export destination. For more information on configuring an exporter for the OpenTelemetry collector see https://opentelemetry.io/docs/collector/configuration/#exporters. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/otel-collector/otel-collector?viewPanel=100100` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Span export failures by exporter
Shows the rate of spans that failed to be sent by the configured exporter. A number higher than 0 for a long period can indicate a problem with the exporter configuration or with the service being exported to. For more information on configuring an exporter for the OpenTelemetry collector see https://opentelemetry.io/docs/collector/configuration/#exporters. Refer to the [alerts reference](alerts#otel-collector-otel-span-export-failures) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/otel-collector/otel-collector?viewPanel=100101` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Exporter queue capacity
Shows the capacity of the retry queue (in batches). This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/otel-collector/otel-collector?viewPanel=100200` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Exporter queue size
Shows the current size of the retry queue. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/otel-collector/otel-collector?viewPanel=100201` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Exporter enqueue failed spans
Shows the rate of spans that failed to be enqueued by the configured exporter. A number higher than 0 for a long period can indicate a problem with the exporter configuration. Refer to the [alerts reference](alerts#otel-collector-otelcol-exporter-enqueue-failed-spans) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/otel-collector/otel-collector?viewPanel=100202` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Spans dropped per processor per minute
Shows the rate of spans dropped by the configured processor. Refer to the [alerts reference](alerts#otel-collector-otelcol-processor-dropped-spans) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/otel-collector/otel-collector?viewPanel=100300` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Cpu usage of the collector
Shows CPU usage as reported by the OpenTelemetry collector. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/otel-collector/otel-collector?viewPanel=100400` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Memory allocated to the otel collector
Shows the allocated memory Resident Set Size (RSS) as reported by the OpenTelemetry collector. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/otel-collector/otel-collector?viewPanel=100401` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Memory used by the collector
Shows how much memory is being used by the otel collector. High memory usage might indicate that the configured pipeline is keeping a lot of spans in memory for processing. This can happen when: * spans are failing to be sent and the exporter is configured to retry, or * a batch processor is configured with a large batch size. For more information on configuring processors for the OpenTelemetry collector, see https://opentelemetry.io/docs/collector/configuration/#processors. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/otel-collector/otel-collector?viewPanel=100402` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*
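To keep memory bounded, a collector pipeline is commonly configured with a `memory_limiter` processor ahead of the `batch` processor. A minimal sketch with illustrative limits; once the limit is reached, the memory limiter refuses or drops data, which then surfaces in the refused/dropped panels above:

```json
{
  "processors": {
    "memory_limiter": {
      "check_interval": "1s",
      "limit_mib": 400,
      "spike_limit_mib": 100
    },
    "batch": {
      "send_batch_size": 512,
      "timeout": "5s"
    }
  }
}
```

Container missing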
This value is the number of times a container has not been seen for more than one minute. If you observe this value change independent of deployment events (such as an upgrade), it could indicate pods are being OOM killed or terminated for some other reason. - **Kubernetes:** - Determine if the pod was OOM killed using `kubectl describe pod otel-collector` (look for `OOMKilled: true`) and, if so, consider increasing the memory limit in the relevant `Deployment.yaml`. - Check the logs before the container restarted to see if there are `panic:` messages or similar using `kubectl logs -p otel-collector`. - **Docker Compose:** - Determine if the pod was OOM killed using `docker inspect -f '{{json .State}}' otel-collector` (look for `"OOMKilled":true`) and, if so, consider increasing the memory limit of the otel-collector container in `docker-compose.yml`. - Check the logs before the container restarted to see if there are `panic:` messages or similar using `docker logs otel-collector` (note this will include logs from the previous and currently running container). This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/otel-collector/otel-collector?viewPanel=100500` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Container cpu usage total (1m average) across all cores by instance
Refer to the [alerts reference](alerts#otel-collector-container-cpu-usage) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/otel-collector/otel-collector?viewPanel=100501` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Container memory usage by instance
Refer to the [alerts reference](alerts#otel-collector-container-memory-usage) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/otel-collector/otel-collector?viewPanel=100502` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Filesystem reads and writes rate by instance over 1h
This value indicates the number of filesystem read and write operations by containers of this service. When extremely high, this can indicate a resource usage problem, or can cause problems with the service itself, especially if high values or spikes correlate with `otel-collector` issues. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/otel-collector/otel-collector?viewPanel=100503` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Percentage pods available
Refer to the [alerts reference](alerts#otel-collector-pods-available-percentage) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/otel-collector/otel-collector?viewPanel=100600` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Handles embeddings searches.
To see this dashboard, visit `/-/debug/grafana/d/embeddings/embeddings` on your Sourcegraph instance. ### Embeddings: Site configuration client update latency #### embeddings: embeddings_site_configuration_duration_since_last_successful_update_by_instance
Duration since last successful site configuration update (by instance)
The duration since the configuration client used by the "embeddings" service last successfully updated its site configuration. Long durations could indicate issues updating the site configuration. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/embeddings/embeddings?viewPanel=100000` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Maximum duration since last successful site configuration update (all "embeddings" instances)
Refer to the [alerts reference](alerts#embeddings-embeddings-site-configuration-duration-since-last-successful-update-by-instance) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/embeddings/embeddings?viewPanel=100001` on your Sourcegraph instance. *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*Maximum open
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/embeddings/embeddings?viewPanel=100100` on your Sourcegraph instance. *Managed by the [Sourcegraph Cody team](https://handbook.sourcegraph.com/departments/engineering/teams/cody).*Established
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/embeddings/embeddings?viewPanel=100101` on your Sourcegraph instance. *Managed by the [Sourcegraph Cody team](https://handbook.sourcegraph.com/departments/engineering/teams/cody).*Used
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/embeddings/embeddings?viewPanel=100110` on your Sourcegraph instance. *Managed by the [Sourcegraph Cody team](https://handbook.sourcegraph.com/departments/engineering/teams/cody).*Idle
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/embeddings/embeddings?viewPanel=100111` on your Sourcegraph instance. *Managed by the [Sourcegraph Cody team](https://handbook.sourcegraph.com/departments/engineering/teams/cody).*Mean blocked seconds per conn request
Refer to the [alerts reference](alerts#embeddings-mean-blocked-seconds-per-conn-request) for 2 alerts related to this panel. To see this panel, visit `/-/debug/grafana/d/embeddings/embeddings?viewPanel=100120` on your Sourcegraph instance. *Managed by the [Sourcegraph Cody team](https://handbook.sourcegraph.com/departments/engineering/teams/cody).*Closed by SetMaxIdleConns
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/embeddings/embeddings?viewPanel=100130` on your Sourcegraph instance. *Managed by the [Sourcegraph Cody team](https://handbook.sourcegraph.com/departments/engineering/teams/cody).*Closed by SetConnMaxLifetime
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/embeddings/embeddings?viewPanel=100131` on your Sourcegraph instance. *Managed by the [Sourcegraph Cody team](https://handbook.sourcegraph.com/departments/engineering/teams/cody).*Closed by SetConnMaxIdleTime
This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/embeddings/embeddings?viewPanel=100132` on your Sourcegraph instance. *Managed by the [Sourcegraph Cody team](https://handbook.sourcegraph.com/departments/engineering/teams/cody).*Container missing
This value is the number of times a container has not been seen for more than one minute. If you observe this value change independent of deployment events (such as an upgrade), it could indicate pods are being OOM killed or terminated for some other reason. - **Kubernetes:** - Determine if the pod was OOM killed using `kubectl describe pod embeddings` (look for `OOMKilled: true`) and, if so, consider increasing the memory limit in the relevant `Deployment.yaml`. - Check the logs before the container restarted to see if there are `panic:` messages or similar using `kubectl logs -p embeddings`. - **Docker Compose:** - Determine if the pod was OOM killed using `docker inspect -f '{{json .State}}' embeddings` (look for `"OOMKilled":true`) and, if so, consider increasing the memory limit of the embeddings container in `docker-compose.yml`. - Check the logs before the container restarted to see if there are `panic:` messages or similar using `docker logs embeddings` (note this will include logs from the previous and currently running container). This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/embeddings/embeddings?viewPanel=100200` on your Sourcegraph instance. *Managed by the [Sourcegraph Cody team](https://handbook.sourcegraph.com/departments/engineering/teams/cody).*Container cpu usage total (1m average) across all cores by instance
Refer to the [alerts reference](alerts#embeddings-container-cpu-usage) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/embeddings/embeddings?viewPanel=100201` on your Sourcegraph instance. *Managed by the [Sourcegraph Cody team](https://handbook.sourcegraph.com/departments/engineering/teams/cody).*Container memory usage by instance
Refer to the [alerts reference](alerts#embeddings-container-memory-usage) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/embeddings/embeddings?viewPanel=100202` on your Sourcegraph instance. *Managed by the [Sourcegraph Cody team](https://handbook.sourcegraph.com/departments/engineering/teams/cody).*Filesystem reads and writes rate by instance over 1h
This value indicates the number of filesystem read and write operations by containers of this service. When extremely high, this can indicate a resource usage problem, or can cause problems with the service itself, especially if high values or spikes correlate with `embeddings` issues. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/embeddings/embeddings?viewPanel=100203` on your Sourcegraph instance. *Managed by the [Sourcegraph Cody team](https://handbook.sourcegraph.com/departments/engineering/teams/cody).*Container cpu usage total (90th percentile over 1d) across all cores by instance
Refer to the [alerts reference](alerts#embeddings-provisioning-container-cpu-usage-long-term) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/embeddings/embeddings?viewPanel=100300` on your Sourcegraph instance. *Managed by the [Sourcegraph Cody team](https://handbook.sourcegraph.com/departments/engineering/teams/cody).*Container memory usage (1d maximum) by instance
Refer to the [alerts reference](alerts#embeddings-provisioning-container-memory-usage-long-term) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/embeddings/embeddings?viewPanel=100301` on your Sourcegraph instance. *Managed by the [Sourcegraph Cody team](https://handbook.sourcegraph.com/departments/engineering/teams/cody).*Container cpu usage total (5m maximum) across all cores by instance
Refer to the [alerts reference](alerts#embeddings-provisioning-container-cpu-usage-short-term) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/embeddings/embeddings?viewPanel=100310` on your Sourcegraph instance. *Managed by the [Sourcegraph Cody team](https://handbook.sourcegraph.com/departments/engineering/teams/cody).*Container memory usage (5m maximum) by instance
Refer to the [alerts reference](alerts#embeddings-provisioning-container-memory-usage-short-term) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/embeddings/embeddings?viewPanel=100311` on your Sourcegraph instance. *Managed by the [Sourcegraph Cody team](https://handbook.sourcegraph.com/departments/engineering/teams/cody).*Container OOMKILL events total by instance
This value indicates the total number of times the container main process or child processes were terminated by the OOM killer. When it occurs frequently, it is an indicator of underprovisioning. Refer to the [alerts reference](alerts#embeddings-container-oomkill-events-total) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/embeddings/embeddings?viewPanel=100312` on your Sourcegraph instance. *Managed by the [Sourcegraph Cody team](https://handbook.sourcegraph.com/departments/engineering/teams/cody).*Maximum active goroutines
A high value here indicates a possible goroutine leak. Refer to the [alerts reference](alerts#embeddings-go-goroutines) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/embeddings/embeddings?viewPanel=100400` on your Sourcegraph instance. *Managed by the [Sourcegraph Cody team](https://handbook.sourcegraph.com/departments/engineering/teams/cody).*Maximum go garbage collection duration
Refer to the [alerts reference](alerts#embeddings-go-gc-duration-seconds) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/embeddings/embeddings?viewPanel=100401` on your Sourcegraph instance. *Managed by the [Sourcegraph Cody team](https://handbook.sourcegraph.com/departments/engineering/teams/cody).*Percentage pods available
Refer to the [alerts reference](alerts#embeddings-pods-available-percentage) for 1 alert related to this panel. To see this panel, visit `/-/debug/grafana/d/embeddings/embeddings?viewPanel=100500` on your Sourcegraph instance. *Managed by the [Sourcegraph Cody team](https://handbook.sourcegraph.com/departments/engineering/teams/cody).*Hit ratio of the embeddings cache
A low hit rate indicates your cache is not well utilized. Consider increasing the cache size. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/embeddings/embeddings?viewPanel=100600` on your Sourcegraph instance.Bytes fetched due to a cache miss
A high volume of misses indicates that many searches are not hitting the cache. Consider increasing the cache size. This panel has no related alerts. To see this panel, visit `/-/debug/grafana/d/embeddings/embeddings?viewPanel=100601` on your Sourcegraph instance.99th percentile successful search request duration over 5m
**Descriptions** - warning frontend: 20s+ 99th percentile successful search request duration over 5m **Next steps** - **Get details on the exact queries that are slow** by configuring `"observability.logSlowSearches": 20,` in the site configuration and looking for `frontend` warning logs prefixed with `slow search request` for additional details. - **Check that most repositories are indexed** by visiting https://sourcegraph.example.com/site-admin/repositories?filter=needs-index (it should show few or no results.) - **Kubernetes:** Check CPU usage of zoekt-webserver in the indexed-search pod, consider increasing CPU limits in the `indexed-search.Deployment.yaml` if regularly hitting max CPU utilization. - **Docker Compose:** Check CPU usage on the Zoekt Web Server dashboard, consider increasing `cpus:` of the zoekt-webserver container in `docker-compose.yml` if regularly hitting max CPU utilization. - Learn more about the related dashboard panel in the [dashboards reference](dashboards#frontend-99th-percentile-search-request-duration). - **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert: ```json "observability.silenceAlerts": [ "warning_frontend_99th_percentile_search_request_duration" ] ``` *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*
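For reference, enabling the slow-search logging mentioned above is a single site configuration entry; the threshold of 20 (seconds) mirrors this alert's warning level and can be adjusted. The same setting applies to the similar search-duration alerts below:

```json
{
  "observability.logSlowSearches": 20
}
```

90th percentile successful search request duration over 5m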
**Descriptions** - warning frontend: 15s+ 90th percentile successful search request duration over 5m **Next steps** - **Get details on the exact queries that are slow** by configuring `"observability.logSlowSearches": 15,` in the site configuration and looking for `frontend` warning logs prefixed with `slow search request` for additional details. - **Check that most repositories are indexed** by visiting https://sourcegraph.example.com/site-admin/repositories?filter=needs-index (it should show few or no results.) - **Kubernetes:** Check CPU usage of zoekt-webserver in the indexed-search pod, consider increasing CPU limits in the `indexed-search.Deployment.yaml` if regularly hitting max CPU utilization. - **Docker Compose:** Check CPU usage on the Zoekt Web Server dashboard, consider increasing `cpus:` of the zoekt-webserver container in `docker-compose.yml` if regularly hitting max CPU utilization. - Learn more about the related dashboard panel in the [dashboards reference](dashboards#frontend-90th-percentile-search-request-duration). - **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert: ```json "observability.silenceAlerts": [ "warning_frontend_90th_percentile_search_request_duration" ] ``` *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*hard timeout search responses every 5m
**Descriptions** - warning frontend: 2%+ hard timeout search responses every 5m for 15m0s **Next steps** - Learn more about the related dashboard panel in the [dashboards reference](dashboards#frontend-hard-timeout-search-responses). - **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert: ```json "observability.silenceAlerts": [ "warning_frontend_hard_timeout_search_responses" ] ``` *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*hard error search responses every 5m
**Descriptions** - warning frontend: 2%+ hard error search responses every 5m for 15m0s **Next steps** - Learn more about the related dashboard panel in the [dashboards reference](dashboards#frontend-hard-error-search-responses). - **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert: ```json "observability.silenceAlerts": [ "warning_frontend_hard_error_search_responses" ] ``` *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*partial timeout search responses every 5m
**Descriptions** - warning frontend: 5%+ partial timeout search responses every 5m for 15m0s **Next steps** - Learn more about the related dashboard panel in the [dashboards reference](dashboards#frontend-partial-timeout-search-responses). - **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert: ```json "observability.silenceAlerts": [ "warning_frontend_partial_timeout_search_responses" ] ``` *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*search alert user suggestions shown every 5m
**Descriptions** - warning frontend: 5%+ search alert user suggestions shown every 5m for 15m0s **Next steps** - This indicates your users are making syntax errors or similar user errors. - Learn more about the related dashboard panel in the [dashboards reference](dashboards#frontend-search-alert-user-suggestions). - **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert: ```json "observability.silenceAlerts": [ "warning_frontend_search_alert_user_suggestions" ] ``` *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*90th percentile page load latency over all routes over 10m
**Descriptions** - warning frontend: 2s+ 90th percentile page load latency over all routes over 10m **Next steps** - Confirm that the Sourcegraph frontend has enough CPU/memory using the provisioning panels. - Investigate potential sources of latency by selecting Explore and modifying the `sum by(le)` section to include additional labels: for example, `sum by(le, job)` or `sum by (le, instance)`. - Trace a request to see what the slowest part is: https://sourcegraph.com/docs/admin/observability/tracing - Learn more about the related dashboard panel in the [dashboards reference](dashboards#frontend-page-load-latency). - **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert: ```json "observability.silenceAlerts": [ "warning_frontend_page_load_latency" ] ``` *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*99th percentile code-intel successful search request duration over 5m
**Descriptions** - warning frontend: 20s+ 99th percentile code-intel successful search request duration over 5m **Next steps** - **Get details on the exact queries that are slow** by configuring `"observability.logSlowSearches": 20,` in the site configuration and looking for `frontend` warning logs prefixed with `slow search request` for additional details. - **Check that most repositories are indexed** by visiting https://sourcegraph.example.com/site-admin/repositories?filter=needs-index (it should show few or no results.) - **Kubernetes:** Check CPU usage of zoekt-webserver in the indexed-search pod, consider increasing CPU limits in the `indexed-search.Deployment.yaml` if regularly hitting max CPU utilization. - **Docker Compose:** Check CPU usage on the Zoekt Web Server dashboard, consider increasing `cpus:` of the zoekt-webserver container in `docker-compose.yml` if regularly hitting max CPU utilization. - This alert may indicate that your instance is struggling to process symbols queries on a monorepo, [learn more here](../how-to/monorepo-issues). - Learn more about the related dashboard panel in the [dashboards reference](dashboards#frontend-99th-percentile-search-codeintel-request-duration). - **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert: ```json "observability.silenceAlerts": [ "warning_frontend_99th_percentile_search_codeintel_request_duration" ] ``` *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*90th percentile code-intel successful search request duration over 5m
**Descriptions** - warning frontend: 15s+ 90th percentile code-intel successful search request duration over 5m **Next steps** - **Get details on the exact queries that are slow** by configuring `"observability.logSlowSearches": 15,` in the site configuration and looking for `frontend` warning logs prefixed with `slow search request` for additional details. - **Check that most repositories are indexed** by visiting https://sourcegraph.example.com/site-admin/repositories?filter=needs-index (it should show few or no results.) - **Kubernetes:** Check CPU usage of zoekt-webserver in the indexed-search pod, consider increasing CPU limits in the `indexed-search.Deployment.yaml` if regularly hitting max CPU utilization. - **Docker Compose:** Check CPU usage on the Zoekt Web Server dashboard, consider increasing `cpus:` of the zoekt-webserver container in `docker-compose.yml` if regularly hitting max CPU utilization. - This alert may indicate that your instance is struggling to process symbols queries on a monorepo, [learn more here](../how-to/monorepo-issues). - Learn more about the related dashboard panel in the [dashboards reference](dashboards#frontend-90th-percentile-search-codeintel-request-duration). - **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert: ```json "observability.silenceAlerts": [ "warning_frontend_90th_percentile_search_codeintel_request_duration" ] ``` *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*hard timeout search code-intel responses every 5m
**Descriptions** - warning frontend: 2%+ hard timeout search code-intel responses every 5m for 15m0s **Next steps** - Learn more about the related dashboard panel in the [dashboards reference](dashboards#frontend-hard-timeout-search-codeintel-responses). - **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert: ```json "observability.silenceAlerts": [ "warning_frontend_hard_timeout_search_codeintel_responses" ] ``` *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*hard error search code-intel responses every 5m
**Descriptions** - warning frontend: 2%+ hard error search code-intel responses every 5m for 15m0s **Next steps** - Learn more about the related dashboard panel in the [dashboards reference](dashboards#frontend-hard-error-search-codeintel-responses). - **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert: ```json "observability.silenceAlerts": [ "warning_frontend_hard_error_search_codeintel_responses" ] ``` *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*partial timeout search code-intel responses every 5m
**Descriptions** - warning frontend: 5%+ partial timeout search code-intel responses every 5m for 15m0s **Next steps** - Learn more about the related dashboard panel in the [dashboards reference](dashboards#frontend-partial-timeout-search-codeintel-responses). - **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert: ```json "observability.silenceAlerts": [ "warning_frontend_partial_timeout_search_codeintel_responses" ] ``` *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*search code-intel alert user suggestions shown every 5m
**Descriptions** - warning frontend: 5%+ search code-intel alert user suggestions shown every 5m for 15m0s **Next steps** - This indicates a bug in Sourcegraph, please [open an issue](https://github.com/sourcegraph/sourcegraph/issues/new/choose). - Learn more about the related dashboard panel in the [dashboards reference](dashboards#frontend-search-codeintel-alert-user-suggestions). - **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert: ```json "observability.silenceAlerts": [ "warning_frontend_search_codeintel_alert_user_suggestions" ] ``` *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*99th percentile successful search API request duration over 5m
**Descriptions** - warning frontend: 50s+ 99th percentile successful search API request duration over 5m **Next steps** - **Get details on the exact queries that are slow** by configuring `"observability.logSlowSearches": 20,` in the site configuration and looking for `frontend` warning logs prefixed with `slow search request` for additional details. - **Check that most repositories are indexed** by visiting https://sourcegraph.example.com/site-admin/repositories?filter=needs-index (it should show few or no results.) - **Kubernetes:** Check CPU usage of zoekt-webserver in the indexed-search pod, consider increasing CPU limits in the `indexed-search.Deployment.yaml` if regularly hitting max CPU utilization. - **Docker Compose:** Check CPU usage on the Zoekt Web Server dashboard, consider increasing `cpus:` of the zoekt-webserver container in `docker-compose.yml` if regularly hitting max CPU utilization. - Learn more about the related dashboard panel in the [dashboards reference](dashboards#frontend-99th-percentile-search-api-request-duration). - **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert: ```json "observability.silenceAlerts": [ "warning_frontend_99th_percentile_search_api_request_duration" ] ``` *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*90th percentile successful search API request duration over 5m
**Descriptions** - warning frontend: 40s+ 90th percentile successful search API request duration over 5m **Next steps** - **Get details on the exact queries that are slow** by configuring `"observability.logSlowSearches": 15,` in the site configuration and looking for `frontend` warning logs prefixed with `slow search request` for additional details. - **Check that most repositories are indexed** by visiting https://sourcegraph.example.com/site-admin/repositories?filter=needs-index (it should show few or no results.) - **Kubernetes:** Check CPU usage of zoekt-webserver in the indexed-search pod, consider increasing CPU limits in the `indexed-search.Deployment.yaml` if regularly hitting max CPU utilization. - **Docker Compose:** Check CPU usage on the Zoekt Web Server dashboard, consider increasing `cpus:` of the zoekt-webserver container in `docker-compose.yml` if regularly hitting max CPU utilization. - Learn more about the related dashboard panel in the [dashboards reference](dashboards#frontend-90th-percentile-search-api-request-duration). - **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert: ```json "observability.silenceAlerts": [ "warning_frontend_90th_percentile_search_api_request_duration" ] ``` *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*hard error search API responses every 5m
**Descriptions** - warning frontend: 2%+ hard error search API responses every 5m for 15m0s **Next steps** - Learn more about the related dashboard panel in the [dashboards reference](dashboards#frontend-hard-error-search-api-responses). - **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert: ```json "observability.silenceAlerts": [ "warning_frontend_hard_error_search_api_responses" ] ``` *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*partial timeout search API responses every 5m
**Descriptions** - warning frontend: 5%+ partial timeout search API responses every 5m for 15m0s **Next steps** - Learn more about the related dashboard panel in the [dashboards reference](dashboards#frontend-partial-timeout-search-api-responses). - **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert: ```json "observability.silenceAlerts": [ "warning_frontend_partial_timeout_search_api_responses" ] ``` *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*search API alert user suggestions shown every 5m
**Descriptions** - warning frontend: 5%+ search API alert user suggestions shown every 5m **Next steps** - This indicates your users' search API requests have syntax errors or a similar user error. Check the responses the API sends back for an explanation. - Learn more about the related dashboard panel in the [dashboards reference](dashboards#frontend-search-api-alert-user-suggestions). - **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert: ```json "observability.silenceAlerts": [ "warning_frontend_search_api_alert_user_suggestions" ] ``` *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*maximum duration since last successful site configuration update (all "frontend" instances)
**Descriptions** - critical frontend: 300s+ maximum duration since last successful site configuration update (all "frontend" instances) **Next steps** - This indicates that one or more "frontend" instances have not successfully updated the site configuration in over 5 minutes. This could be due to networking issues between services or problems with the site configuration service itself. - Check the "frontend" logs for relevant errors. - Learn more about the related dashboard panel in the [dashboards reference](dashboards#frontend-frontend-site-configuration-duration-since-last-successful-update-by-instance). - **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert: ```json "observability.silenceAlerts": [ "critical_frontend_frontend_site_configuration_duration_since_last_successful_update_by_instance" ] ``` *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*internal indexed search error responses every 5m
**Descriptions** - warning frontend: 5%+ internal indexed search error responses every 5m for 15m0s **Next steps** - Check the Zoekt Web Server dashboard for indications it might be unhealthy. - Learn more about the related dashboard panel in the [dashboards reference](dashboards#frontend-internal-indexed-search-error-responses). - **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert: ```json "observability.silenceAlerts": [ "warning_frontend_internal_indexed_search_error_responses" ] ``` *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*internal unindexed search error responses every 5m
**Descriptions** - warning frontend: 5%+ internal unindexed search error responses every 5m for 15m0s **Next steps** - Check the Searcher dashboard for indications it might be unhealthy. - Learn more about the related dashboard panel in the [dashboards reference](dashboards#frontend-internal-unindexed-search-error-responses). - **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert: ```json "observability.silenceAlerts": [ "warning_frontend_internal_unindexed_search_error_responses" ] ``` *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*99th percentile successful gitserver query duration over 5m
**Descriptions** - warning frontend: 20s+ 99th percentile successful gitserver query duration over 5m **Next steps** - Learn more about the related dashboard panel in the [dashboards reference](dashboards#frontend-99th-percentile-gitserver-duration). - **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert: ```json "observability.silenceAlerts": [ "warning_frontend_99th_percentile_gitserver_duration" ] ``` *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*gitserver error responses every 5m
**Descriptions** - warning frontend: 5%+ gitserver error responses every 5m for 15m0s **Next steps** - Learn more about the related dashboard panel in the [dashboards reference](dashboards#frontend-gitserver-error-responses). - **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert: ```json "observability.silenceAlerts": [ "warning_frontend_gitserver_error_responses" ] ``` *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*warning test alert metric
**Descriptions** - warning frontend: 1+ warning test alert metric **Next steps** - This alert is triggered via the `triggerObservabilityTestAlert` GraphQL endpoint, and will automatically resolve itself. - Learn more about the related dashboard panel in the [dashboards reference](dashboards#frontend-observability-test-alert-warning). - **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert: ```json "observability.silenceAlerts": [ "warning_frontend_observability_test_alert_warning" ] ``` *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*critical test alert metric
**Descriptions** - critical frontend: 1+ critical test alert metric **Next steps** - This alert is triggered via the `triggerObservabilityTestAlert` GraphQL endpoint, and will automatically resolve itself. - Learn more about the related dashboard panel in the [dashboards reference](dashboards#frontend-observability-test-alert-critical). - **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert: ```json "observability.silenceAlerts": [ "critical_frontend_observability_test_alert_critical" ] ``` *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*cryptographic requests to Cloud KMS every 1m
**Descriptions** - warning frontend: 15000+ cryptographic requests to Cloud KMS every 1m for 5m0s - critical frontend: 30000+ cryptographic requests to Cloud KMS every 1m for 5m0s **Next steps** - Revert recent commits that cause extensive listing from "external_services" and/or "user_external_accounts" tables. - Learn more about the related dashboard panel in the [dashboards reference](dashboards#frontend-cloudkms-cryptographic-requests). - **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert: ```json "observability.silenceAlerts": [ "warning_frontend_cloudkms_cryptographic_requests", "critical_frontend_cloudkms_cryptographic_requests" ] ``` *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*mean blocked seconds per conn request
**Descriptions** - warning frontend: 0.1s+ mean blocked seconds per conn request for 10m0s - critical frontend: 0.5s+ mean blocked seconds per conn request for 10m0s **Next steps** - Increase `SRC_PGSQL_MAX_OPEN`, and give the database more memory if needed - Scale up Postgres memory/cpus - [see our scaling guide](https://sourcegraph.com/docs/admin/config/postgres-conf) - If using GCP Cloud SQL, check for high lock waits or CPU usage in query insights - Learn more about the related dashboard panel in the [dashboards reference](dashboards#frontend-mean-blocked-seconds-per-conn-request). - **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert: ```json "observability.silenceAlerts": [ "warning_frontend_mean_blocked_seconds_per_conn_request", "critical_frontend_mean_blocked_seconds_per_conn_request" ] ``` *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*container cpu usage total (1m average) across all cores by instance
**Descriptions** - warning frontend: 99%+ container cpu usage total (1m average) across all cores by instance **Next steps** - **Kubernetes:** Consider increasing CPU limits in the relevant `Deployment.yaml`. - **Docker Compose:** Consider increasing `cpus:` of the (frontend|sourcegraph-frontend) container in `docker-compose.yml`. - Learn more about the related dashboard panel in the [dashboards reference](dashboards#frontend-container-cpu-usage). - **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert: ```json "observability.silenceAlerts": [ "warning_frontend_container_cpu_usage" ] ``` *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*container memory usage by instance
**Descriptions** - warning frontend: 99%+ container memory usage by instance **Next steps** - **Kubernetes:** Consider increasing memory limit in relevant `Deployment.yaml`. - **Docker Compose:** Consider increasing `memory:` of (frontend|sourcegraph-frontend) container in `docker-compose.yml`. - Learn more about the related dashboard panel in the [dashboards reference](dashboards#frontend-container-memory-usage). - **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert: ```json "observability.silenceAlerts": [ "warning_frontend_container_memory_usage" ] ``` *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*container cpu usage total (90th percentile over 1d) across all cores by instance
**Descriptions** - warning frontend: 80%+ container cpu usage total (90th percentile over 1d) across all cores by instance for 336h0m0s **Next steps** - **Kubernetes:** Consider increasing CPU limits in the `Deployment.yaml` for the (frontend|sourcegraph-frontend) service. - **Docker Compose:** Consider increasing `cpus:` of the (frontend|sourcegraph-frontend) container in `docker-compose.yml`. - Learn more about the related dashboard panel in the [dashboards reference](dashboards#frontend-provisioning-container-cpu-usage-long-term). - **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert: ```json "observability.silenceAlerts": [ "warning_frontend_provisioning_container_cpu_usage_long_term" ] ``` *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*container memory usage (1d maximum) by instance
**Descriptions** - warning frontend: 80%+ container memory usage (1d maximum) by instance for 336h0m0s **Next steps** - **Kubernetes:** Consider increasing memory limits in the `Deployment.yaml` for the (frontend|sourcegraph-frontend) service. - **Docker Compose:** Consider increasing `memory:` of the (frontend|sourcegraph-frontend) container in `docker-compose.yml`. - Learn more about the related dashboard panel in the [dashboards reference](dashboards#frontend-provisioning-container-memory-usage-long-term). - **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert: ```json "observability.silenceAlerts": [ "warning_frontend_provisioning_container_memory_usage_long_term" ] ``` *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*container cpu usage total (5m maximum) across all cores by instance
**Descriptions** - warning frontend: 90%+ container cpu usage total (5m maximum) across all cores by instance for 30m0s **Next steps** - **Kubernetes:** Consider increasing CPU limits in the relevant `Deployment.yaml`. - **Docker Compose:** Consider increasing `cpus:` of the (frontend|sourcegraph-frontend) container in `docker-compose.yml`. - Learn more about the related dashboard panel in the [dashboards reference](dashboards#frontend-provisioning-container-cpu-usage-short-term). - **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert: ```json "observability.silenceAlerts": [ "warning_frontend_provisioning_container_cpu_usage_short_term" ] ``` *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*container memory usage (5m maximum) by instance
**Descriptions** - warning frontend: 90%+ container memory usage (5m maximum) by instance **Next steps** - **Kubernetes:** Consider increasing memory limit in relevant `Deployment.yaml`. - **Docker Compose:** Consider increasing `memory:` of (frontend|sourcegraph-frontend) container in `docker-compose.yml`. - Learn more about the related dashboard panel in the [dashboards reference](dashboards#frontend-provisioning-container-memory-usage-short-term). - **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert: ```json "observability.silenceAlerts": [ "warning_frontend_provisioning_container_memory_usage_short_term" ] ``` *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*container OOMKILL events total by instance
**Descriptions** - warning frontend: 1+ container OOMKILL events total by instance **Next steps** - **Kubernetes:** Consider increasing memory limit in relevant `Deployment.yaml`. - **Docker Compose:** Consider increasing `memory:` of (frontend|sourcegraph-frontend) container in `docker-compose.yml`. - More help interpreting this metric is available in the [dashboards reference](dashboards#frontend-container-oomkill-events-total). - **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert: ```json "observability.silenceAlerts": [ "warning_frontend_container_oomkill_events_total" ] ``` *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*maximum active goroutines
**Descriptions** - warning frontend: 10000+ maximum active goroutines for 10m0s **Next steps** - More help interpreting this metric is available in the [dashboards reference](dashboards#frontend-go-goroutines). - **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert: ```json "observability.silenceAlerts": [ "warning_frontend_go_goroutines" ] ``` *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*maximum go garbage collection duration
**Descriptions** - warning frontend: 2s+ maximum go garbage collection duration **Next steps** - Learn more about the related dashboard panel in the [dashboards reference](dashboards#frontend-go-gc-duration-seconds). - **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert: ```json "observability.silenceAlerts": [ "warning_frontend_go_gc_duration_seconds" ] ``` *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*percentage pods available
**Descriptions** - critical frontend: less than 90% percentage pods available for 10m0s **Next steps** - Determine if the pod was OOM killed using `kubectl describe pod (frontend|sourcegraph-frontend)` (look for `OOMKilled: true`) and, if so, consider increasing the memory limit in the relevant `Deployment.yaml`. - Check the logs before the container restarted to see if there are `panic:` messages or similar using `kubectl logs -p (frontend|sourcegraph-frontend)`. - Learn more about the related dashboard panel in the [dashboards reference](dashboards#frontend-pods-available-percentage). - **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert: ```json "observability.silenceAlerts": [ "critical_frontend_pods_available_percentage" ] ``` *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*email delivery failure rate over 30 minutes
**Descriptions** - warning frontend: 0%+ email delivery failure rate over 30 minutes - critical frontend: 10%+ email delivery failure rate over 30 minutes **Next steps** - Check your SMTP configuration in site configuration. - Check `sourcegraph-frontend` logs for more detailed error messages. - Check your SMTP provider for more detailed error messages. - Use `sum(increase(src_email_send{success="false"}[30m]))` to check the raw count of delivery failures. - Learn more about the related dashboard panel in the [dashboards reference](dashboards#frontend-email-delivery-failures). - **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert: ```json "observability.silenceAlerts": [ "warning_frontend_email_delivery_failures", "critical_frontend_email_delivery_failures" ] ``` *Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*
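When reviewing the SMTP configuration, the relevant site configuration fragment has roughly the following shape; all values here are placeholders, and the email configuration documentation remains the authoritative schema:

```json
{
  "email.address": "sourcegraph@example.com",
  "email.smtp": {
    "host": "smtp.example.com",
    "port": 587,
    "authentication": "PLAIN",
    "username": "smtp-user",
    "password": "smtp-password"
  }
}
```

mean successful sentinel search duration over 2h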
**Descriptions** - warning frontend: 5s+ mean successful sentinel search duration over 2h for 15m0s - critical frontend: 8s+ mean successful sentinel search duration over 2h for 30m0s **Next steps** - Look at the breakdown by query to determine if a specific query type is being affected - Check for high CPU usage on zoekt-webserver - Check Honeycomb for unusual activity - More help interpreting this metric is available in the [dashboards reference](dashboards#frontend-mean-successful-sentinel-duration-over-2h). - **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert: ```json "observability.silenceAlerts": [ "warning_frontend_mean_successful_sentinel_duration_over_2h", "critical_frontend_mean_successful_sentinel_duration_over_2h" ] ``` *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*mean successful sentinel stream latency over 2h
**Descriptions** - warning frontend: 2s+ mean successful sentinel stream latency over 2h for 15m0s - critical frontend: 3s+ mean successful sentinel stream latency over 2h for 30m0s **Next steps** - Look at the breakdown by query to determine if a specific query type is being affected - Check for high CPU usage on zoekt-webserver - Check Honeycomb for unusual activity - More help interpreting this metric is available in the [dashboards reference](dashboards#frontend-mean-sentinel-stream-latency-over-2h). - **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert: ```json "observability.silenceAlerts": [ "warning_frontend_mean_sentinel_stream_latency_over_2h", "critical_frontend_mean_sentinel_stream_latency_over_2h" ] ``` *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*90th percentile successful sentinel search duration over 2h
**Descriptions** - warning frontend: 5s+ 90th percentile successful sentinel search duration over 2h for 15m0s - critical frontend: 10s+ 90th percentile successful sentinel search duration over 2h for 3h30m0s **Next steps** - Look at the breakdown by query to determine if a specific query type is being affected - Check for high CPU usage on zoekt-webserver - Check Honeycomb for unusual activity - More help interpreting this metric is available in the [dashboards reference](dashboards#frontend-90th-percentile-successful-sentinel-duration-over-2h). - **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert: ```json "observability.silenceAlerts": [ "warning_frontend_90th_percentile_successful_sentinel_duration_over_2h", "critical_frontend_90th_percentile_successful_sentinel_duration_over_2h" ] ``` *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*90th percentile successful sentinel stream latency over 2h
**Descriptions** - warning frontend: 4s+ 90th percentile successful sentinel stream latency over 2h for 15m0s - critical frontend: 6s+ 90th percentile successful sentinel stream latency over 2h for 3h30m0s **Next steps** - Look at the breakdown by query to determine if a specific query type is being affected - Check for high CPU usage on zoekt-webserver - Check Honeycomb for unusual activity - More help interpreting this metric is available in the [dashboards reference](dashboards#frontend-90th-percentile-sentinel-stream-latency-over-2h). - **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert: ```json "observability.silenceAlerts": [ "warning_frontend_90th_percentile_sentinel_stream_latency_over_2h", "critical_frontend_90th_percentile_sentinel_stream_latency_over_2h" ] ``` *Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*container CPU throttling time %
**Descriptions** - warning gitserver: 75%+ container CPU throttling time % for 2m0s - critical gitserver: 90%+ container CPU throttling time % for 5m0s **Next steps** - Consider increasing the CPU limit for the container. - More help interpreting this metric is available in the [dashboards reference](dashboards#gitserver-cpu-throttling-time). - **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert: ```json "observability.silenceAlerts": [ "warning_gitserver_cpu_throttling_time", "critical_gitserver_cpu_throttling_time" ] ``` *Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*disk space remaining
## disk space remaining

**Descriptions**

- warning gitserver: less than 15% disk space remaining
- critical gitserver: less than 10% disk space remaining for 10m0s

**Next steps**

- On a warning alert, you may want to provision more disk space (see the sketch below): Disk pressure may result in decreased performance, users having to wait for repositories to clone, etc.
- On a critical alert, you need to provision more disk space. Running out of disk space will result in decreased performance, or complete service outage.
- More help interpreting this metric is available in the [dashboards reference](dashboards#gitserver-disk-space-remaining).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_gitserver_disk_space_remaining",
  "critical_gitserver_disk_space_remaining"
]
```

*Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*
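On Kubernetes, provisioning more disk space is typically a PersistentVolumeClaim resize. A hedged sketch follows; the claim name and size are assumptions, and online expansion requires a StorageClass with `allowVolumeExpansion: true`.

```yaml
# Illustrative PVC excerpt -- not the literal claim used by your deployment.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: repos         # hypothetical claim backing gitserver's repository storage
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Gi  # increase from the current size to relieve disk pressure
```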
## git commands running on each gitserver instance

**Descriptions**

- warning gitserver: 50+ git commands running on each gitserver instance for 2m0s
- critical gitserver: 100+ git commands running on each gitserver instance for 5m0s

**Next steps**

- **Check if the problem may be an intermittent and temporary peak** using the "Container monitoring" section at the bottom of the Git Server dashboard.
- **Single container deployments:** Consider upgrading to a [Docker Compose deployment](../deploy/docker-compose/migrate) which offers better scalability and resource isolation.
- **Kubernetes and Docker Compose:** Check that you are running a similar number of git server replicas and that their CPU/memory limits are allocated according to what is shown in the [Sourcegraph resource estimator](../deploy/resource_estimator).
- More help interpreting this metric is available in the [dashboards reference](dashboards#gitserver-running-git-commands).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_gitserver_running_git_commands",
  "critical_gitserver_running_git_commands"
]
```

*Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*
## echo test command duration

**Descriptions**

- warning gitserver: 0.02s+ echo test command duration for 30s
- critical gitserver: 1s+ echo test command duration for 1m0s

**Next steps**

- **Single container deployments:** Upgrade to a [Docker Compose deployment](../deploy/docker-compose/migrate) which offers better scalability and resource isolation.
- **Kubernetes and Docker Compose:** Check that you are running a similar number of git server replicas and that their CPU/memory limits are allocated according to what is shown in the [Sourcegraph resource estimator](../deploy/resource_estimator).
- If your persistent volume is slow, you may want to provision more IOPS, usually by increasing the volume size.
- More help interpreting this metric is available in the [dashboards reference](dashboards#gitserver-echo-command-duration-test).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_gitserver_echo_command_duration_test",
  "critical_gitserver_echo_command_duration_test"
]
```

*Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*
## number of times a repo corruption has been identified

**Descriptions**

- critical gitserver: 0+ number of times a repo corruption has been identified

**Next steps**

- Check the corruption logs for details; `gitserver_repos.corruption_logs` contains more information.
- More help interpreting this metric is available in the [dashboards reference](dashboards#gitserver-repo-corrupted).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "critical_gitserver_repo_corrupted"
]
```

*Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*
## repository clone queue size

**Descriptions**

- warning gitserver: 25+ repository clone queue size

**Next steps**

- **If you just added several repositories**, the warning may be expected.
- **Check which repositories need cloning**, by visiting e.g. https://sourcegraph.example.com/site-admin/repositories?filter=not-cloned
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#gitserver-repository-clone-queue-size).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_gitserver_repository_clone_queue_size"
]
```

*Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*
## maximum duration since last successful site configuration update (all "gitserver" instances)

**Descriptions**

- critical gitserver: 300s+ maximum duration since last successful site configuration update (all "gitserver" instances)

**Next steps**

- This indicates that one or more "gitserver" instances have not successfully updated the site configuration in over 5 minutes. This could be due to networking issues between services or problems with the site configuration service itself.
- Check for relevant errors in the "gitserver" logs, as well as the frontend's logs.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#gitserver-gitserver-site-configuration-duration-since-last-successful-update-by-instance).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "critical_gitserver_gitserver_site_configuration_duration_since_last_successful_update_by_instance"
]
```

*Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*
## mean blocked seconds per conn request

**Descriptions**

- warning gitserver: 0.1s+ mean blocked seconds per conn request for 10m0s
- critical gitserver: 0.5s+ mean blocked seconds per conn request for 10m0s

**Next steps**

- Increase `SRC_PGSQL_MAX_OPEN`, together with giving more memory to the database if needed (see the sketch below)
- Scale up Postgres memory/cpus - [see our scaling guide](https://sourcegraph.com/docs/admin/config/postgres-conf)
- If using GCP Cloud SQL, check for high lock waits or CPU usage in query insights
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#gitserver-mean-blocked-seconds-per-conn-request).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_gitserver_mean_blocked_seconds_per_conn_request",
  "critical_gitserver_mean_blocked_seconds_per_conn_request"
]
```

*Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*
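As a rough sketch of the first step: `SRC_PGSQL_MAX_OPEN` is an environment variable on the affected container. The container spec shape and the value below are assumptions; size it together with the database's `max_connections` and available memory.

```yaml
# Hypothetical Kubernetes container spec excerpt for gitserver.
containers:
  - name: gitserver
    env:
      - name: SRC_PGSQL_MAX_OPEN
        value: "30"   # assumed value; the right ceiling depends on your database
```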
## container cpu usage total (1m average) across all cores by instance

**Descriptions**

- warning gitserver: 99%+ container cpu usage total (1m average) across all cores by instance

**Next steps**

- **Kubernetes:** Consider increasing CPU limits in the relevant `Deployment.yaml`.
- **Docker Compose:** Consider increasing `cpus:` of the gitserver container in `docker-compose.yml`.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#gitserver-container-cpu-usage).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_gitserver_container_cpu_usage"
]
```

*Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*
## container memory usage by instance

**Descriptions**

- warning gitserver: 99%+ container memory usage by instance

**Next steps**

- **Kubernetes:** Consider increasing memory limit in relevant `Deployment.yaml`.
- **Docker Compose:** Consider increasing `memory:` of gitserver container in `docker-compose.yml` (see the sketch below).
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#gitserver-container-memory-usage).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_gitserver_container_memory_usage"
]
```

*Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*
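A minimal Docker Compose sketch of the second step, assuming Compose v3 syntax and a service named `gitserver-0`; both the service name and the values are assumptions, and older Compose files may use top-level `cpus:` / `mem_limit:` keys instead.

```yaml
# Illustrative docker-compose.yml excerpt; values are assumptions.
services:
  gitserver-0:
    deploy:
      resources:
        limits:
          cpus: '8'
          memory: 16g   # raise when container memory usage runs near 99%
```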
## container cpu usage total (90th percentile over 1d) across all cores by instance

**Descriptions**

- warning gitserver: 80%+ container cpu usage total (90th percentile over 1d) across all cores by instance for 336h0m0s

**Next steps**

- **Kubernetes:** Consider increasing CPU limits in the `Deployment.yaml` for the gitserver service.
- **Docker Compose:** Consider increasing `cpus:` of the gitserver container in `docker-compose.yml`.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#gitserver-provisioning-container-cpu-usage-long-term).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_gitserver_provisioning_container_cpu_usage_long_term"
]
```

*Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*
## container cpu usage total (5m maximum) across all cores by instance

**Descriptions**

- warning gitserver: 90%+ container cpu usage total (5m maximum) across all cores by instance for 30m0s

**Next steps**

- **Kubernetes:** Consider increasing CPU limits in the relevant `Deployment.yaml`.
- **Docker Compose:** Consider increasing `cpus:` of the gitserver container in `docker-compose.yml`.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#gitserver-provisioning-container-cpu-usage-short-term).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_gitserver_provisioning_container_cpu_usage_short_term"
]
```

*Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*
## container OOMKILL events total by instance

**Descriptions**

- warning gitserver: 1+ container OOMKILL events total by instance

**Next steps**

- **Kubernetes:** Consider increasing memory limit in relevant `Deployment.yaml`.
- **Docker Compose:** Consider increasing `memory:` of gitserver container in `docker-compose.yml`.
- More help interpreting this metric is available in the [dashboards reference](dashboards#gitserver-container-oomkill-events-total).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_gitserver_container_oomkill_events_total"
]
```

*Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*
## maximum active goroutines

**Descriptions**

- warning gitserver: 10000+ maximum active goroutines for 10m0s

**Next steps**

- More help interpreting this metric is available in the [dashboards reference](dashboards#gitserver-go-goroutines).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_gitserver_go_goroutines"
]
```

*Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*
## maximum go garbage collection duration

**Descriptions**

- warning gitserver: 2s+ maximum go garbage collection duration

**Next steps**

- Learn more about the related dashboard panel in the [dashboards reference](dashboards#gitserver-go-gc-duration-seconds).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_gitserver_go_gc_duration_seconds"
]
```

*Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*
## percentage pods available

**Descriptions**

- critical gitserver: less than 90% percentage pods available for 10m0s

**Next steps**

- Determine if the pod was OOM killed using `kubectl describe pod gitserver` (look for `OOMKilled: true`) and, if so, consider increasing the memory limit in the relevant `Deployment.yaml`.
- Check the logs before the container restarted to see if there are `panic:` messages or similar using `kubectl logs -p gitserver`.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#gitserver-pods-available-percentage).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "critical_gitserver_pods_available_percentage"
]
```

*Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*
## active connections

**Descriptions**

- warning postgres: less than 5 active connections for 5m0s

**Next steps**

- Learn more about the related dashboard panel in the [dashboards reference](dashboards#postgres-connections).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_postgres_connections"
]
```

*Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*
## connection in use

**Descriptions**

- warning postgres: 80%+ connection in use for 5m0s
- critical postgres: 100%+ connection in use for 5m0s

**Next steps**

- Consider increasing [max_connections](https://www.postgresql.org/docs/current/runtime-config-connection.html#GUC-MAX-CONNECTIONS) of the database instance (see the sketch below), [learn more](https://sourcegraph.com/docs/admin/config/postgres-conf)
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#postgres-usage-connections-percentage).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_postgres_usage_connections_percentage",
  "critical_postgres_usage_connections_percentage"
]
```

*Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*
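When the database runs under Docker Compose, one common pattern for raising `max_connections` is to pass it on the postgres command line. This is a sketch only: the service name, the assumption that overriding `command:` is compatible with your image's entrypoint, and the value are all unverified here; the linked Postgres configuration guide is authoritative.

```yaml
# Illustrative docker-compose.yml excerpt; size max_connections together
# with the memory available to the database.
services:
  pgsql:
    command: ['postgres', '-c', 'max_connections=200']
```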
## maximum transaction durations

**Descriptions**

- warning postgres: 0.3s+ maximum transaction durations for 5m0s

**Next steps**

- Learn more about the related dashboard panel in the [dashboards reference](dashboards#postgres-transaction-durations).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_postgres_transaction_durations"
]
```

*Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*
## database availability

**Descriptions**

- critical postgres: less than 0 database availability for 5m0s

**Next steps**

- **Kubernetes:**
  - Determine if the pod was OOM killed using `kubectl describe pod (pgsql\|codeintel-db\|codeinsights)` (look for `OOMKilled: true`) and, if so, consider increasing the memory limit in the relevant `Deployment.yaml`.
  - Check the logs before the container restarted to see if there are `panic:` messages or similar using `kubectl logs -p (pgsql\|codeintel-db\|codeinsights)`.
  - Check if there is any OOMKILL event using the provisioning panels
  - Check kernel logs using `dmesg` for OOMKILL events on worker nodes
- **Docker Compose:**
  - Determine if the container was OOM killed using `docker inspect -f '\{\{json .State\}\}' (pgsql\|codeintel-db\|codeinsights)` (look for `"OOMKilled":true`) and, if so, consider increasing the memory limit of the (pgsql|codeintel-db|codeinsights) container in `docker-compose.yml`.
  - Check the logs before the container restarted to see if there are `panic:` messages or similar using `docker logs (pgsql\|codeintel-db\|codeinsights)` (note this will include logs from the previous and currently running container).
  - Check if there is any OOMKILL event using the provisioning panels
  - Check kernel logs using `dmesg` for OOMKILL events
- More help interpreting this metric is available in the [dashboards reference](dashboards#postgres-postgres-up).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "critical_postgres_postgres_up"
]
```

*Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*
## invalid indexes (unusable by the query planner)

**Descriptions**

- critical postgres: 1+ invalid indexes (unusable by the query planner)

**Next steps**

- Drop and re-create the invalid index - please contact Sourcegraph to supply the index definition.
- More help interpreting this metric is available in the [dashboards reference](dashboards#postgres-invalid-indexes).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "critical_postgres_invalid_indexes"
]
```

*Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*
## errors scraping postgres exporter

**Descriptions**

- warning postgres: 1+ errors scraping postgres exporter for 5m0s

**Next steps**

- Ensure the Postgres exporter can access the Postgres database. Also, check the Postgres exporter logs for errors.
- More help interpreting this metric is available in the [dashboards reference](dashboards#postgres-pg-exporter-err).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_postgres_pg_exporter_err"
]
```

*Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*
## active schema migration

**Descriptions**

- critical postgres: 1+ active schema migration for 5m0s

**Next steps**

- The database migration has been in progress for 5 or more minutes - please contact Sourcegraph if this persists.
- More help interpreting this metric is available in the [dashboards reference](dashboards#postgres-migration-in-progress).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "critical_postgres_migration_in_progress"
]
```

*Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*
## container cpu usage total (90th percentile over 1d) across all cores by instance

**Descriptions**

- warning postgres: 80%+ container cpu usage total (90th percentile over 1d) across all cores by instance for 336h0m0s

**Next steps**

- **Kubernetes:** Consider increasing CPU limits in the `Deployment.yaml` for the (pgsql|codeintel-db|codeinsights) service.
- **Docker Compose:** Consider increasing `cpus:` of the (pgsql|codeintel-db|codeinsights) container in `docker-compose.yml`.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#postgres-provisioning-container-cpu-usage-long-term).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_postgres_provisioning_container_cpu_usage_long_term"
]
```

*Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*
## container memory usage (1d maximum) by instance

**Descriptions**

- warning postgres: 80%+ container memory usage (1d maximum) by instance for 336h0m0s

**Next steps**

- **Kubernetes:** Consider increasing memory limits in the `Deployment.yaml` for the (pgsql|codeintel-db|codeinsights) service.
- **Docker Compose:** Consider increasing `memory:` of the (pgsql|codeintel-db|codeinsights) container in `docker-compose.yml`.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#postgres-provisioning-container-memory-usage-long-term).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_postgres_provisioning_container_memory_usage_long_term"
]
```

*Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*
## container cpu usage total (5m maximum) across all cores by instance

**Descriptions**

- warning postgres: 90%+ container cpu usage total (5m maximum) across all cores by instance for 30m0s

**Next steps**

- **Kubernetes:** Consider increasing CPU limits in the relevant `Deployment.yaml`.
- **Docker Compose:** Consider increasing `cpus:` of the (pgsql|codeintel-db|codeinsights) container in `docker-compose.yml`.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#postgres-provisioning-container-cpu-usage-short-term).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_postgres_provisioning_container_cpu_usage_short_term"
]
```

*Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*
## container memory usage (5m maximum) by instance

**Descriptions**

- warning postgres: 90%+ container memory usage (5m maximum) by instance

**Next steps**

- **Kubernetes:** Consider increasing memory limit in relevant `Deployment.yaml`.
- **Docker Compose:** Consider increasing `memory:` of (pgsql|codeintel-db|codeinsights) container in `docker-compose.yml`.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#postgres-provisioning-container-memory-usage-short-term).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_postgres_provisioning_container_memory_usage_short_term"
]
```

*Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*
## container OOMKILL events total by instance

**Descriptions**

- warning postgres: 1+ container OOMKILL events total by instance

**Next steps**

- **Kubernetes:** Consider increasing memory limit in relevant `Deployment.yaml`.
- **Docker Compose:** Consider increasing `memory:` of (pgsql|codeintel-db|codeinsights) container in `docker-compose.yml`.
- More help interpreting this metric is available in the [dashboards reference](dashboards#postgres-container-oomkill-events-total).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_postgres_container_oomkill_events_total"
]
```

*Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*
## percentage pods available

**Descriptions**

- critical postgres: less than 90% percentage pods available for 10m0s

**Next steps**

- Determine if the pod was OOM killed using `kubectl describe pod (pgsql\|codeintel-db\|codeinsights)` (look for `OOMKilled: true`) and, if so, consider increasing the memory limit in the relevant `Deployment.yaml`.
- Check the logs before the container restarted to see if there are `panic:` messages or similar using `kubectl logs -p (pgsql\|codeintel-db\|codeinsights)`.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#postgres-pods-available-percentage).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "critical_postgres_pods_available_percentage"
]
```

*Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*
## unprocessed upload record queue longest time in queue

**Descriptions**

- warning precise-code-intel-worker: 18000s+ unprocessed upload record queue longest time in queue

**Next steps**

- An alert here could be indicative of a few things: an upload surfacing a pathological performance characteristic, precise-code-intel-worker being underprovisioned for the required upload processing throughput, or a higher replica count being required for the volume of uploads (see the sketch below).
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#precise-code-intel-worker-codeintel-upload-queued-max-age).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_precise-code-intel-worker_codeintel_upload_queued_max_age"
]
```

*Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*
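If the bottleneck is throughput rather than one pathological upload, raising the replica count is one remedy. A minimal Kubernetes sketch; the replica count is an assumption to be tuned against your upload volume.

```yaml
# Hypothetical Deployment.yaml excerpt for precise-code-intel-worker.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: precise-code-intel-worker
spec:
  replicas: 2   # illustrative; scale alongside CPU/memory if the queue keeps growing
```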
## mean blocked seconds per conn request

**Descriptions**

- warning precise-code-intel-worker: 0.1s+ mean blocked seconds per conn request for 10m0s
- critical precise-code-intel-worker: 0.5s+ mean blocked seconds per conn request for 10m0s

**Next steps**

- Increase `SRC_PGSQL_MAX_OPEN`, together with giving more memory to the database if needed
- Scale up Postgres memory/cpus - [see our scaling guide](https://sourcegraph.com/docs/admin/config/postgres-conf)
- If using GCP Cloud SQL, check for high lock waits or CPU usage in query insights
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#precise-code-intel-worker-mean-blocked-seconds-per-conn-request).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_precise-code-intel-worker_mean_blocked_seconds_per_conn_request",
  "critical_precise-code-intel-worker_mean_blocked_seconds_per_conn_request"
]
```

*Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*
## container cpu usage total (1m average) across all cores by instance

**Descriptions**

- warning precise-code-intel-worker: 99%+ container cpu usage total (1m average) across all cores by instance

**Next steps**

- **Kubernetes:** Consider increasing CPU limits in the relevant `Deployment.yaml`.
- **Docker Compose:** Consider increasing `cpus:` of the precise-code-intel-worker container in `docker-compose.yml`.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#precise-code-intel-worker-container-cpu-usage).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_precise-code-intel-worker_container_cpu_usage"
]
```

*Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*
## container memory usage by instance

**Descriptions**

- warning precise-code-intel-worker: 99%+ container memory usage by instance

**Next steps**

- **Kubernetes:** Consider increasing memory limit in relevant `Deployment.yaml`.
- **Docker Compose:** Consider increasing `memory:` of precise-code-intel-worker container in `docker-compose.yml`.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#precise-code-intel-worker-container-memory-usage).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_precise-code-intel-worker_container_memory_usage"
]
```

*Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*
## container cpu usage total (90th percentile over 1d) across all cores by instance

**Descriptions**

- warning precise-code-intel-worker: 80%+ container cpu usage total (90th percentile over 1d) across all cores by instance for 336h0m0s

**Next steps**

- **Kubernetes:** Consider increasing CPU limits in the `Deployment.yaml` for the precise-code-intel-worker service.
- **Docker Compose:** Consider increasing `cpus:` of the precise-code-intel-worker container in `docker-compose.yml`.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#precise-code-intel-worker-provisioning-container-cpu-usage-long-term).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_precise-code-intel-worker_provisioning_container_cpu_usage_long_term"
]
```

*Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*
## container memory usage (1d maximum) by instance

**Descriptions**

- warning precise-code-intel-worker: 80%+ container memory usage (1d maximum) by instance for 336h0m0s

**Next steps**

- **Kubernetes:** Consider increasing memory limits in the `Deployment.yaml` for the precise-code-intel-worker service.
- **Docker Compose:** Consider increasing `memory:` of the precise-code-intel-worker container in `docker-compose.yml`.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#precise-code-intel-worker-provisioning-container-memory-usage-long-term).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_precise-code-intel-worker_provisioning_container_memory_usage_long_term"
]
```

*Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*
## container cpu usage total (5m maximum) across all cores by instance

**Descriptions**

- warning precise-code-intel-worker: 90%+ container cpu usage total (5m maximum) across all cores by instance for 30m0s

**Next steps**

- **Kubernetes:** Consider increasing CPU limits in the relevant `Deployment.yaml`.
- **Docker Compose:** Consider increasing `cpus:` of the precise-code-intel-worker container in `docker-compose.yml`.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#precise-code-intel-worker-provisioning-container-cpu-usage-short-term).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_precise-code-intel-worker_provisioning_container_cpu_usage_short_term"
]
```

*Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*
## container memory usage (5m maximum) by instance

**Descriptions**

- warning precise-code-intel-worker: 90%+ container memory usage (5m maximum) by instance

**Next steps**

- **Kubernetes:** Consider increasing memory limit in relevant `Deployment.yaml`.
- **Docker Compose:** Consider increasing `memory:` of precise-code-intel-worker container in `docker-compose.yml`.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#precise-code-intel-worker-provisioning-container-memory-usage-short-term).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_precise-code-intel-worker_provisioning_container_memory_usage_short_term"
]
```

*Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*
## container OOMKILL events total by instance

**Descriptions**

- warning precise-code-intel-worker: 1+ container OOMKILL events total by instance

**Next steps**

- **Kubernetes:** Consider increasing memory limit in relevant `Deployment.yaml`.
- **Docker Compose:** Consider increasing `memory:` of precise-code-intel-worker container in `docker-compose.yml`.
- More help interpreting this metric is available in the [dashboards reference](dashboards#precise-code-intel-worker-container-oomkill-events-total).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_precise-code-intel-worker_container_oomkill_events_total"
]
```

*Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*
## maximum active goroutines

**Descriptions**

- warning precise-code-intel-worker: 10000+ maximum active goroutines for 10m0s

**Next steps**

- More help interpreting this metric is available in the [dashboards reference](dashboards#precise-code-intel-worker-go-goroutines).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_precise-code-intel-worker_go_goroutines"
]
```

*Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*
## maximum go garbage collection duration

**Descriptions**

- warning precise-code-intel-worker: 2s+ maximum go garbage collection duration

**Next steps**

- Learn more about the related dashboard panel in the [dashboards reference](dashboards#precise-code-intel-worker-go-gc-duration-seconds).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_precise-code-intel-worker_go_gc_duration_seconds"
]
```

*Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*
## percentage pods available

**Descriptions**

- critical precise-code-intel-worker: less than 90% percentage pods available for 10m0s

**Next steps**

- Determine if the pod was OOM killed using `kubectl describe pod precise-code-intel-worker` (look for `OOMKilled: true`) and, if so, consider increasing the memory limit in the relevant `Deployment.yaml`.
- Check the logs before the container restarted to see if there are `panic:` messages or similar using `kubectl logs -p precise-code-intel-worker`.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#precise-code-intel-worker-pods-available-percentage).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "critical_precise-code-intel-worker_pods_available_percentage"
]
```

*Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*
## redis-store availability

**Descriptions**

- critical redis: less than 1 redis-store availability for 10s

**Next steps**

- Ensure redis-store is running
- More help interpreting this metric is available in the [dashboards reference](dashboards#redis-redis-store-up).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "critical_redis_redis-store_up"
]
```

*Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*
## redis-cache availability

**Descriptions**

- critical redis: less than 1 redis-cache availability for 10s

**Next steps**

- Ensure redis-cache is running
- More help interpreting this metric is available in the [dashboards reference](dashboards#redis-redis-cache-up).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "critical_redis_redis-cache_up"
]
```

*Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*
## container cpu usage total (90th percentile over 1d) across all cores by instance

**Descriptions**

- warning redis: 80%+ container cpu usage total (90th percentile over 1d) across all cores by instance for 336h0m0s

**Next steps**

- **Kubernetes:** Consider increasing CPU limits in the `Deployment.yaml` for the redis-cache service.
- **Docker Compose:** Consider increasing `cpus:` of the redis-cache container in `docker-compose.yml`.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#redis-provisioning-container-cpu-usage-long-term).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_redis_provisioning_container_cpu_usage_long_term"
]
```

*Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*
## container memory usage (1d maximum) by instance

**Descriptions**

- warning redis: 80%+ container memory usage (1d maximum) by instance for 336h0m0s

**Next steps**

- **Kubernetes:** Consider increasing memory limits in the `Deployment.yaml` for the redis-cache service.
- **Docker Compose:** Consider increasing `memory:` of the redis-cache container in `docker-compose.yml`.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#redis-provisioning-container-memory-usage-long-term).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_redis_provisioning_container_memory_usage_long_term"
]
```

*Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*
## container cpu usage total (5m maximum) across all cores by instance

**Descriptions**

- warning redis: 90%+ container cpu usage total (5m maximum) across all cores by instance for 30m0s

**Next steps**

- **Kubernetes:** Consider increasing CPU limits in the relevant `Deployment.yaml`.
- **Docker Compose:** Consider increasing `cpus:` of the redis-cache container in `docker-compose.yml`.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#redis-provisioning-container-cpu-usage-short-term).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_redis_provisioning_container_cpu_usage_short_term"
]
```

*Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*
## container memory usage (5m maximum) by instance

**Descriptions**

- warning redis: 90%+ container memory usage (5m maximum) by instance

**Next steps**

- **Kubernetes:** Consider increasing memory limit in relevant `Deployment.yaml`.
- **Docker Compose:** Consider increasing `memory:` of redis-cache container in `docker-compose.yml`.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#redis-provisioning-container-memory-usage-short-term).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_redis_provisioning_container_memory_usage_short_term"
]
```

*Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*
## container OOMKILL events total by instance

**Descriptions**

- warning redis: 1+ container OOMKILL events total by instance

**Next steps**

- **Kubernetes:** Consider increasing memory limit in relevant `Deployment.yaml`.
- **Docker Compose:** Consider increasing `memory:` of redis-cache container in `docker-compose.yml`.
- More help interpreting this metric is available in the [dashboards reference](dashboards#redis-container-oomkill-events-total).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_redis_container_oomkill_events_total"
]
```

*Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*
## container cpu usage total (90th percentile over 1d) across all cores by instance

**Descriptions**

- warning redis: 80%+ container cpu usage total (90th percentile over 1d) across all cores by instance for 336h0m0s

**Next steps**

- **Kubernetes:** Consider increasing CPU limits in the `Deployment.yaml` for the redis-store service.
- **Docker Compose:** Consider increasing `cpus:` of the redis-store container in `docker-compose.yml`.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#redis-provisioning-container-cpu-usage-long-term).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_redis_provisioning_container_cpu_usage_long_term"
]
```

*Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*
## container memory usage (1d maximum) by instance

**Descriptions**

- warning redis: 80%+ container memory usage (1d maximum) by instance for 336h0m0s

**Next steps**

- **Kubernetes:** Consider increasing memory limits in the `Deployment.yaml` for the redis-store service.
- **Docker Compose:** Consider increasing `memory:` of the redis-store container in `docker-compose.yml`.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#redis-provisioning-container-memory-usage-long-term).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_redis_provisioning_container_memory_usage_long_term"
]
```

*Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*
## container cpu usage total (5m maximum) across all cores by instance

**Descriptions**

- warning redis: 90%+ container cpu usage total (5m maximum) across all cores by instance for 30m0s

**Next steps**

- **Kubernetes:** Consider increasing CPU limits in the relevant `Deployment.yaml`.
- **Docker Compose:** Consider increasing `cpus:` of the redis-store container in `docker-compose.yml`.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#redis-provisioning-container-cpu-usage-short-term).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_redis_provisioning_container_cpu_usage_short_term"
]
```

*Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*
## container memory usage (5m maximum) by instance

**Descriptions**

- warning redis: 90%+ container memory usage (5m maximum) by instance

**Next steps**

- **Kubernetes:** Consider increasing memory limit in relevant `Deployment.yaml`.
- **Docker Compose:** Consider increasing `memory:` of redis-store container in `docker-compose.yml`.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#redis-provisioning-container-memory-usage-short-term).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_redis_provisioning_container_memory_usage_short_term"
]
```

*Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*
## container OOMKILL events total by instance

**Descriptions**

- warning redis: 1+ container OOMKILL events total by instance

**Next steps**

- **Kubernetes:** Consider increasing memory limit in relevant `Deployment.yaml`.
- **Docker Compose:** Consider increasing `memory:` of redis-store container in `docker-compose.yml`.
- More help interpreting this metric is available in the [dashboards reference](dashboards#redis-container-oomkill-events-total).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_redis_container_oomkill_events_total"
]
```

*Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*
## percentage pods available

**Descriptions**

- critical redis: less than 90% percentage pods available for 10m0s

**Next steps**

- Determine if the pod was OOM killed using `kubectl describe pod redis-cache` (look for `OOMKilled: true`) and, if so, consider increasing the memory limit in the relevant `Deployment.yaml`.
- Check the logs before the container restarted to see if there are `panic:` messages or similar using `kubectl logs -p redis-cache`.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#redis-pods-available-percentage).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "critical_redis_pods_available_percentage"
]
```

*Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*
## percentage pods available

**Descriptions**

- critical redis: less than 90% percentage pods available for 10m0s

**Next steps**

- Determine if the pod was OOM killed using `kubectl describe pod redis-store` (look for `OOMKilled: true`) and, if so, consider increasing the memory limit in the relevant `Deployment.yaml`.
- Check the logs before the container restarted to see if there are `panic:` messages or similar using `kubectl logs -p redis-store`.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#redis-pods-available-percentage).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "critical_redis_pods_available_percentage"
]
```

*Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*
## number of worker instances running the codeintel-upload-janitor job

**Descriptions**

- warning worker: less than 1 number of worker instances running the codeintel-upload-janitor job for 1m0s
- critical worker: less than 1 number of worker instances running the codeintel-upload-janitor job for 5m0s

**Next steps**

- Ensure your instance defines a worker container such that (see the sketch after this list):
  - `WORKER_JOB_ALLOWLIST` contains "codeintel-upload-janitor" (or "all"), and
  - `WORKER_JOB_BLOCKLIST` does not contain "codeintel-upload-janitor"
- Ensure that such a container is not failing to start or stay active
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#worker-worker-job-codeintel-upload-janitor-count).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_worker_worker_job_codeintel-upload-janitor_count",
  "critical_worker_worker_job_codeintel-upload-janitor_count"
]
```

*Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*
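A minimal sketch of the env wiring described above, assuming a Kubernetes worker container; the spec shape is an assumption, and the values shown are the permissive case (run all jobs, block none), not settings taken from this page.

```yaml
# Hypothetical worker container spec excerpt: the allowlist must include
# "codeintel-upload-janitor" (or "all"), and the blocklist must not mention it.
containers:
  - name: worker
    env:
      - name: WORKER_JOB_ALLOWLIST
        value: "all"
      - name: WORKER_JOB_BLOCKLIST
        value: ""
```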
## number of worker instances running the codeintel-commitgraph-updater job

**Descriptions**

- warning worker: less than 1 number of worker instances running the codeintel-commitgraph-updater job for 1m0s
- critical worker: less than 1 number of worker instances running the codeintel-commitgraph-updater job for 5m0s

**Next steps**

- Ensure your instance defines a worker container such that:
  - `WORKER_JOB_ALLOWLIST` contains "codeintel-commitgraph-updater" (or "all"), and
  - `WORKER_JOB_BLOCKLIST` does not contain "codeintel-commitgraph-updater"
- Ensure that such a container is not failing to start or stay active
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#worker-worker-job-codeintel-commitgraph-updater-count).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_worker_worker_job_codeintel-commitgraph-updater_count",
  "critical_worker_worker_job_codeintel-commitgraph-updater_count"
]
```

*Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*
## number of worker instances running the codeintel-autoindexing-scheduler job

**Descriptions**

- warning worker: less than 1 number of worker instances running the codeintel-autoindexing-scheduler job for 1m0s
- critical worker: less than 1 number of worker instances running the codeintel-autoindexing-scheduler job for 5m0s

**Next steps**

- Ensure your instance defines a worker container such that:
  - `WORKER_JOB_ALLOWLIST` contains "codeintel-autoindexing-scheduler" (or "all"), and
  - `WORKER_JOB_BLOCKLIST` does not contain "codeintel-autoindexing-scheduler"
- Ensure that such a container is not failing to start or stay active
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#worker-worker-job-codeintel-autoindexing-scheduler-count).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_worker_worker_job_codeintel-autoindexing-scheduler_count",
  "critical_worker_worker_job_codeintel-autoindexing-scheduler_count"
]
```

*Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*
## repository queue longest time in queue

**Descriptions**

- warning worker: 3600s+ repository queue longest time in queue

**Next steps**

- An alert here is generally indicative of an underprovisioned worker instance (or instances), an underprovisioned main Postgres instance, or both.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#worker-codeintel-commit-graph-queued-max-age).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_worker_codeintel_commit_graph_queued_max_age"
]
```

*Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*
## number of entities with outdated permissions

**Descriptions**

- warning worker: 100+ number of entities with outdated permissions for 5m0s

**Next steps**

- **Enabled permissions for the first time:** Wait for a few minutes and see if the number goes down.
- **Otherwise:** Increase the API rate limit to [GitHub](https://sourcegraph.com/docs/admin/code_hosts/github#github-com-rate-limits), [GitLab](https://sourcegraph.com/docs/admin/code_hosts/gitlab#internal-rate-limits) or [Bitbucket Server](https://sourcegraph.com/docs/admin/code_hosts/bitbucket_server#internal-rate-limits).
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#worker-perms-syncer-outdated-perms).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_worker_perms_syncer_outdated_perms"
]
```

*Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*
## 95th permissions sync duration

**Descriptions**

- warning worker: 30s+ 95th permissions sync duration for 5m0s

**Next steps**

- Check that the network latency between Sourcegraph and the code host is reasonable (<50ms).
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#worker-perms-syncer-sync-duration).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_worker_perms_syncer_sync_duration"
]
```

*Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*

**permissions sync error rate**

**Descriptions**

- critical worker: 1+ permissions sync error rate for 1m0s

**Next steps**

- Check the network connectivity between Sourcegraph and the code host.
- Check if the API rate limit quota is exhausted on the code host.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#worker-perms-syncer-sync-errors).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "critical_worker_perms_syncer_sync_errors"
]
```

*Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*

**insights queue size that is not utilized (not processing)**

**Descriptions**

- warning worker: 0+ insights queue size that is not utilized (not processing) for 30m0s

**Next steps**

- Verify that the code insights worker job has successfully started. Restart the worker service and monitor startup logs, looking for worker panics.
- More help interpreting this metric is available in the [dashboards reference](dashboards#worker-insights-queue-unutilized-size).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_worker_insights_queue_unutilized_size"
]
```

*Managed by the [Sourcegraph Code Search team](https://handbook.sourcegraph.com/departments/engineering/teams/code-search).*

**mean blocked seconds per conn request**

**Descriptions**

- warning worker: 0.1s+ mean blocked seconds per conn request for 10m0s
- critical worker: 0.5s+ mean blocked seconds per conn request for 10m0s

**Next steps**

- Increase `SRC_PGSQL_MAX_OPEN` (see the sketch below), and give the database more memory if needed
- Scale up Postgres memory/cpus - [see our scaling guide](https://sourcegraph.com/docs/admin/config/postgres-conf)
- If using GCP Cloud SQL, check for high lock waits or CPU usage in query insights
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#worker-mean-blocked-seconds-per-conn-request).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_worker_mean_blocked_seconds_per_conn_request",
  "critical_worker_mean_blocked_seconds_per_conn_request"
]
```

*Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*
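
A minimal sketch of what raising the connection cap can look like for the worker container, again as a Kubernetes container fragment in JSON form (the value `50` is illustrative, not a recommendation; the same knob applies to the analogous mean-blocked-seconds alerts for repo-updater, searcher, and symbols below):

```json
{
  "name": "worker",
  "env": [
    { "name": "SRC_PGSQL_MAX_OPEN", "value": "50" }
  ]
}
```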

**container cpu usage total (1m average) across all cores by instance**

**Descriptions**

- warning worker: 99%+ container cpu usage total (1m average) across all cores by instance

**Next steps**

- **Kubernetes:** Consider increasing CPU limits in the relevant `Deployment.yaml`.
- **Docker Compose:** Consider increasing `cpus:` of the worker container in `docker-compose.yml`.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#worker-container-cpu-usage).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_worker_container_cpu_usage"
]
```

*Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*

**container memory usage by instance**

**Descriptions**

- warning worker: 99%+ container memory usage by instance

**Next steps**

- **Kubernetes:** Consider increasing memory limit in relevant `Deployment.yaml`.
- **Docker Compose:** Consider increasing `memory:` of worker container in `docker-compose.yml`.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#worker-container-memory-usage).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_worker_container_memory_usage"
]
```

*Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*

**container cpu usage total (90th percentile over 1d) across all cores by instance**

**Descriptions**

- warning worker: 80%+ container cpu usage total (90th percentile over 1d) across all cores by instance for 336h0m0s

**Next steps**

- **Kubernetes:** Consider increasing CPU limits in the `Deployment.yaml` for the worker service.
- **Docker Compose:** Consider increasing `cpus:` of the worker container in `docker-compose.yml`.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#worker-provisioning-container-cpu-usage-long-term).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_worker_provisioning_container_cpu_usage_long_term"
]
```

*Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*

**container memory usage (1d maximum) by instance**

**Descriptions**

- warning worker: 80%+ container memory usage (1d maximum) by instance for 336h0m0s

**Next steps**

- **Kubernetes:** Consider increasing memory limits in the `Deployment.yaml` for the worker service.
- **Docker Compose:** Consider increasing `memory:` of the worker container in `docker-compose.yml`.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#worker-provisioning-container-memory-usage-long-term).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_worker_provisioning_container_memory_usage_long_term"
]
```

*Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*

**container cpu usage total (5m maximum) across all cores by instance**

**Descriptions**

- warning worker: 90%+ container cpu usage total (5m maximum) across all cores by instance for 30m0s

**Next steps**

- **Kubernetes:** Consider increasing CPU limits in the relevant `Deployment.yaml`.
- **Docker Compose:** Consider increasing `cpus:` of the worker container in `docker-compose.yml`.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#worker-provisioning-container-cpu-usage-short-term).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_worker_provisioning_container_cpu_usage_short_term"
]
```

*Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*

**container memory usage (5m maximum) by instance**

**Descriptions**

- warning worker: 90%+ container memory usage (5m maximum) by instance

**Next steps**

- **Kubernetes:** Consider increasing memory limit in relevant `Deployment.yaml`.
- **Docker Compose:** Consider increasing `memory:` of worker container in `docker-compose.yml`.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#worker-provisioning-container-memory-usage-short-term).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_worker_provisioning_container_memory_usage_short_term"
]
```

*Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*

**container OOMKILL events total by instance**

**Descriptions**

- warning worker: 1+ container OOMKILL events total by instance

**Next steps**

- **Kubernetes:** Consider increasing memory limit in relevant `Deployment.yaml`.
- **Docker Compose:** Consider increasing `memory:` of worker container in `docker-compose.yml`.
- More help interpreting this metric is available in the [dashboards reference](dashboards#worker-container-oomkill-events-total).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_worker_container_oomkill_events_total"
]
```

*Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*

**maximum active goroutines**

**Descriptions**

- warning worker: 10000+ maximum active goroutines for 10m0s

**Next steps**

- More help interpreting this metric is available in the [dashboards reference](dashboards#worker-go-goroutines).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_worker_go_goroutines"
]
```

*Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*

**maximum go garbage collection duration**

**Descriptions**

- warning worker: 2s+ maximum go garbage collection duration

**Next steps**

- Learn more about the related dashboard panel in the [dashboards reference](dashboards#worker-go-gc-duration-seconds).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_worker_go_gc_duration_seconds"
]
```

*Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*

**percentage pods available**

**Descriptions**

- critical worker: less than 90% percentage pods available for 10m0s

**Next steps**

- Determine if the pod was OOM killed using `kubectl describe pod worker` (look for `OOMKilled: true`) and, if so, consider increasing the memory limit in the relevant `Deployment.yaml`.
- Check the logs before the container restarted to see if there are `panic:` messages or similar using `kubectl logs -p worker`.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#worker-pods-available-percentage).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "critical_worker_pods_available_percentage"
]
```

*Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*

**maximum duration since last successful site configuration update (all "worker" instances)**

**Descriptions**

- critical worker: 300s+ maximum duration since last successful site configuration update (all "worker" instances)

**Next steps**

- This indicates that one or more "worker" instances have not successfully updated the site configuration in over 5 minutes. This could be due to networking issues between services or problems with the site configuration service itself.
- Check for relevant errors in the "worker" logs, as well as the frontend's logs.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#worker-worker-site-configuration-duration-since-last-successful-update-by-instance).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "critical_worker_worker_site_configuration_duration_since_last_successful_update_by_instance"
]
```

*Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*

**time since oldest sync**

**Descriptions**

- critical repo-updater: 32400s+ time since oldest sync for 10m0s

**Next steps**

- An alert here indicates that no code host connections have synced in at least 9h0m0s. This indicates that there could be a configuration issue with your code hosts connections or networking issues affecting communication with your code hosts.
- Check the code host status indicator (cloud icon in top right of Sourcegraph homepage) for errors.
- Make sure external services do not have invalid tokens by navigating to them in the web UI and clicking save. If there are no errors, they are valid.
- Check the repo-updater logs for errors about syncing.
- Confirm that outbound network connections are allowed where repo-updater is deployed.
- Check back in an hour to see if the issue has resolved itself.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#repo-updater-src-repoupdater-max-sync-backoff).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "critical_repo-updater_src_repoupdater_max_sync_backoff"
]
```

*Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*

**site level external service sync error rate**

**Descriptions**

- warning repo-updater: 0.5+ site level external service sync error rate for 10m0s
- critical repo-updater: 1+ site level external service sync error rate for 10m0s

**Next steps**

- An alert here indicates errors syncing site level repo metadata with code hosts. This indicates that there could be a configuration issue with your code hosts connections or networking issues affecting communication with your code hosts.
- Check the code host status indicator (cloud icon in top right of Sourcegraph homepage) for errors.
- Make sure external services do not have invalid tokens by navigating to them in the web UI and clicking save. If there are no errors, they are valid.
- Check the repo-updater logs for errors about syncing.
- Confirm that outbound network connections are allowed where repo-updater is deployed.
- Check back in an hour to see if the issue has resolved itself.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#repo-updater-src-repoupdater-syncer-sync-errors-total).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_repo-updater_src_repoupdater_syncer_sync_errors_total",
  "critical_repo-updater_src_repoupdater_syncer_sync_errors_total"
]
```

*Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*

**repo metadata sync was started**

**Descriptions**

- warning repo-updater: less than 0 repo metadata sync was started for 9h0m0s

**Next steps**

- Check repo-updater logs for errors.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#repo-updater-syncer-sync-start).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_repo-updater_syncer_sync_start"
]
```

*Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*

**95th repositories sync duration**

**Descriptions**

- warning repo-updater: 30s+ 95th repositories sync duration for 5m0s

**Next steps**

- Check that the network latency between Sourcegraph and the code host is reasonable (<50ms).
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#repo-updater-syncer-sync-duration).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_repo-updater_syncer_sync_duration"
]
```

*Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*

**95th repositories source duration**

**Descriptions**

- warning repo-updater: 30s+ 95th repositories source duration for 5m0s

**Next steps**

- Check that the network latency between Sourcegraph and the code host is reasonable (<50ms).
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#repo-updater-source-duration).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_repo-updater_source_duration"
]
```

*Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*

**repositories synced**

**Descriptions**

- warning repo-updater: less than 0 repositories synced for 9h0m0s

**Next steps**

- Check network connectivity to code hosts.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#repo-updater-syncer-synced-repos).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_repo-updater_syncer_synced_repos"
]
```

*Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*

**repositories sourced**

**Descriptions**

- warning repo-updater: less than 0 repositories sourced for 9h0m0s

**Next steps**

- Check network connectivity to code hosts.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#repo-updater-sourced-repos).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_repo-updater_sourced_repos"
]
```

*Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*

**repositories purge failed**

**Descriptions**

- warning repo-updater: 0+ repositories purge failed for 5m0s

**Next steps**

- Check repo-updater's connectivity with gitserver, and check the gitserver logs.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#repo-updater-purge-failed).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_repo-updater_purge_failed"
]
```

*Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*

**repositories scheduled due to hitting a deadline**

**Descriptions**

- warning repo-updater: less than 0 repositories scheduled due to hitting a deadline for 9h0m0s

**Next steps**

- Check repo-updater logs.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#repo-updater-sched-auto-fetch).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_repo-updater_sched_auto_fetch"
]
```

*Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*

**repositories managed by the scheduler**

**Descriptions**

- warning repo-updater: less than 0 repositories managed by the scheduler for 10m0s

**Next steps**

- Check repo-updater logs. This alert is expected to fire if there are no user-added code hosts.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#repo-updater-sched-known-repos).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_repo-updater_sched_known_repos"
]
```

*Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*

**rate of growth of update queue length over 5 minutes**

**Descriptions**

- critical repo-updater: 0+ rate of growth of update queue length over 5 minutes for 2h0m0s

**Next steps**

- Check repo-updater logs for indications that the queue is not being processed. The queue length should trend downwards over time as items are sent to gitserver.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#repo-updater-sched-update-queue-length).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "critical_repo-updater_sched_update_queue_length"
]
```

*Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*

**scheduler loops**

**Descriptions**

- warning repo-updater: less than 0 scheduler loops for 9h0m0s

**Next steps**

- Check repo-updater logs for errors. This alert is expected to fire if there are no user-added code hosts.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#repo-updater-sched-loops).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_repo-updater_sched_loops"
]
```

*Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*

**repos that haven't been fetched in more than 8 hours**

**Descriptions**

- warning repo-updater: 1+ repos that haven't been fetched in more than 8 hours for 25m0s

**Next steps**

- Check repo-updater logs for errors. Check for rows in `gitserver_repos` where `LastError` is not an empty string.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#repo-updater-src-repoupdater-stale-repos).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_repo-updater_src_repoupdater_stale_repos"
]
```

*Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*

**repositories schedule error rate**

**Descriptions**

- critical repo-updater: 1+ repositories schedule error rate for 25m0s

**Next steps**

- Check repo-updater logs for errors.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#repo-updater-sched-error).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "critical_repo-updater_sched_error"
]
```

*Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*

**the total number of external services**

**Descriptions**

- critical repo-updater: 20000+ the total number of external services for 1h0m0s

**Next steps**

- Check for spikes in external services; this could indicate abuse.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#repo-updater-src-repoupdater-external-services-total).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "critical_repo-updater_src_repoupdater_external_services_total"
]
```

*Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*

**the total number of queued sync jobs**

**Descriptions**

- warning repo-updater: 100+ the total number of queued sync jobs for 1h0m0s

**Next steps**

- **Check if jobs are failing to sync:** `SELECT * FROM external_service_sync_jobs WHERE state = 'errored';`
- **Increase the number of workers** using the `repoConcurrentExternalServiceSyncers` site config (see the example below).
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#repo-updater-repoupdater-queued-sync-jobs-total).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_repo-updater_repoupdater_queued_sync_jobs_total"
]
```

*Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*
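
For example, raising the number of concurrent external service syncers in site configuration might look like this (the value `10` is purely illustrative; the key name comes from the step above):

```json
"repoConcurrentExternalServiceSyncers": 10
```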

**the total number of completed sync jobs**

**Descriptions**

- warning repo-updater: 100000+ the total number of completed sync jobs for 1h0m0s

**Next steps**

- Check repo-updater logs. Jobs older than 1 day should have been removed.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#repo-updater-repoupdater-completed-sync-jobs-total).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_repo-updater_repoupdater_completed_sync_jobs_total"
]
```

*Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*

**the percentage of external services that have failed their most recent sync**

**Descriptions**

- warning repo-updater: 10%+ the percentage of external services that have failed their most recent sync for 1h0m0s

**Next steps**

- Check repo-updater logs. Check code host connectivity.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#repo-updater-repoupdater-errored-sync-jobs-percentage).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_repo-updater_repoupdater_errored_sync_jobs_percentage"
]
```

*Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*

**remaining calls to GitHub graphql API before hitting the rate limit**

**Descriptions**

- warning repo-updater: less than 250 remaining calls to GitHub graphql API before hitting the rate limit

**Next steps**

- Consider creating a new token for the indicated resource (the `name` label for series below the threshold in the dashboard) under a dedicated machine user to reduce rate limit pressure.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#repo-updater-github-graphql-rate-limit-remaining).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_repo-updater_github_graphql_rate_limit_remaining"
]
```

*Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*

**remaining calls to GitHub rest API before hitting the rate limit**

**Descriptions**

- warning repo-updater: less than 250 remaining calls to GitHub rest API before hitting the rate limit

**Next steps**

- Consider creating a new token for the indicated resource (the `name` label for series below the threshold in the dashboard) under a dedicated machine user to reduce rate limit pressure.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#repo-updater-github-rest-rate-limit-remaining).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_repo-updater_github_rest_rate_limit_remaining"
]
```

*Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*

**remaining calls to GitHub search API before hitting the rate limit**

**Descriptions**

- warning repo-updater: less than 5 remaining calls to GitHub search API before hitting the rate limit

**Next steps**

- Consider creating a new token for the indicated resource (the `name` label for series below the threshold in the dashboard) under a dedicated machine user to reduce rate limit pressure.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#repo-updater-github-search-rate-limit-remaining).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_repo-updater_github_search_rate_limit_remaining"
]
```

*Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*

**remaining calls to GitLab rest API before hitting the rate limit**

**Descriptions**

- critical repo-updater: less than 30 remaining calls to GitLab rest API before hitting the rate limit

**Next steps**

- Try restarting the pod to get a different public IP.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#repo-updater-gitlab-rest-rate-limit-remaining).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "critical_repo-updater_gitlab_rest_rate_limit_remaining"
]
```

*Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*

**maximum duration since last successful site configuration update (all "repo_updater" instances)**

**Descriptions**

- critical repo-updater: 300s+ maximum duration since last successful site configuration update (all "repo_updater" instances)

**Next steps**

- This indicates that one or more "repo_updater" instances have not successfully updated the site configuration in over 5 minutes. This could be due to networking issues between services or problems with the site configuration service itself.
- Check for relevant errors in the "repo_updater" logs, as well as the frontend's logs.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#repo-updater-repo-updater-site-configuration-duration-since-last-successful-update-by-instance).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "critical_repo-updater_repo_updater_site_configuration_duration_since_last_successful_update_by_instance"
]
```

*Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*

**mean blocked seconds per conn request**

**Descriptions**

- warning repo-updater: 0.1s+ mean blocked seconds per conn request for 10m0s
- critical repo-updater: 0.5s+ mean blocked seconds per conn request for 10m0s

**Next steps**

- Increase `SRC_PGSQL_MAX_OPEN`, and give the database more memory if needed
- Scale up Postgres memory/cpus - [see our scaling guide](https://sourcegraph.com/docs/admin/config/postgres-conf)
- If using GCP Cloud SQL, check for high lock waits or CPU usage in query insights
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#repo-updater-mean-blocked-seconds-per-conn-request).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_repo-updater_mean_blocked_seconds_per_conn_request",
  "critical_repo-updater_mean_blocked_seconds_per_conn_request"
]
```

*Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*

**container cpu usage total (1m average) across all cores by instance**

**Descriptions**

- warning repo-updater: 99%+ container cpu usage total (1m average) across all cores by instance

**Next steps**

- **Kubernetes:** Consider increasing CPU limits in the relevant `Deployment.yaml`.
- **Docker Compose:** Consider increasing `cpus:` of the repo-updater container in `docker-compose.yml`.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#repo-updater-container-cpu-usage).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_repo-updater_container_cpu_usage"
]
```

*Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*

**container memory usage by instance**

**Descriptions**

- critical repo-updater: 90%+ container memory usage by instance for 10m0s

**Next steps**

- **Kubernetes:** Consider increasing memory limit in relevant `Deployment.yaml`.
- **Docker Compose:** Consider increasing `memory:` of repo-updater container in `docker-compose.yml`.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#repo-updater-container-memory-usage).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "critical_repo-updater_container_memory_usage"
]
```

*Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*

**container cpu usage total (90th percentile over 1d) across all cores by instance**

**Descriptions**

- warning repo-updater: 80%+ container cpu usage total (90th percentile over 1d) across all cores by instance for 336h0m0s

**Next steps**

- **Kubernetes:** Consider increasing CPU limits in the `Deployment.yaml` for the repo-updater service.
- **Docker Compose:** Consider increasing `cpus:` of the repo-updater container in `docker-compose.yml`.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#repo-updater-provisioning-container-cpu-usage-long-term).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_repo-updater_provisioning_container_cpu_usage_long_term"
]
```

*Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*

**container memory usage (1d maximum) by instance**

**Descriptions**

- warning repo-updater: 80%+ container memory usage (1d maximum) by instance for 336h0m0s

**Next steps**

- **Kubernetes:** Consider increasing memory limits in the `Deployment.yaml` for the repo-updater service.
- **Docker Compose:** Consider increasing `memory:` of the repo-updater container in `docker-compose.yml`.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#repo-updater-provisioning-container-memory-usage-long-term).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_repo-updater_provisioning_container_memory_usage_long_term"
]
```

*Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*

**container cpu usage total (5m maximum) across all cores by instance**

**Descriptions**

- warning repo-updater: 90%+ container cpu usage total (5m maximum) across all cores by instance for 30m0s

**Next steps**

- **Kubernetes:** Consider increasing CPU limits in the relevant `Deployment.yaml`.
- **Docker Compose:** Consider increasing `cpus:` of the repo-updater container in `docker-compose.yml`.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#repo-updater-provisioning-container-cpu-usage-short-term).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_repo-updater_provisioning_container_cpu_usage_short_term"
]
```

*Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*

**container memory usage (5m maximum) by instance**

**Descriptions**

- warning repo-updater: 90%+ container memory usage (5m maximum) by instance

**Next steps**

- **Kubernetes:** Consider increasing memory limit in relevant `Deployment.yaml`.
- **Docker Compose:** Consider increasing `memory:` of repo-updater container in `docker-compose.yml`.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#repo-updater-provisioning-container-memory-usage-short-term).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_repo-updater_provisioning_container_memory_usage_short_term"
]
```

*Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*

**container OOMKILL events total by instance**

**Descriptions**

- warning repo-updater: 1+ container OOMKILL events total by instance

**Next steps**

- **Kubernetes:** Consider increasing memory limit in relevant `Deployment.yaml`.
- **Docker Compose:** Consider increasing `memory:` of repo-updater container in `docker-compose.yml`.
- More help interpreting this metric is available in the [dashboards reference](dashboards#repo-updater-container-oomkill-events-total).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_repo-updater_container_oomkill_events_total"
]
```

*Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*

**maximum active goroutines**

**Descriptions**

- warning repo-updater: 10000+ maximum active goroutines for 10m0s

**Next steps**

- More help interpreting this metric is available in the [dashboards reference](dashboards#repo-updater-go-goroutines).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_repo-updater_go_goroutines"
]
```

*Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*

**maximum go garbage collection duration**

**Descriptions**

- warning repo-updater: 2s+ maximum go garbage collection duration

**Next steps**

- Learn more about the related dashboard panel in the [dashboards reference](dashboards#repo-updater-go-gc-duration-seconds).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_repo-updater_go_gc_duration_seconds"
]
```

*Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*

**percentage pods available**

**Descriptions**

- critical repo-updater: less than 90% percentage pods available for 10m0s

**Next steps**

- Determine if the pod was OOM killed using `kubectl describe pod repo-updater` (look for `OOMKilled: true`) and, if so, consider increasing the memory limit in the relevant `Deployment.yaml`.
- Check the logs before the container restarted to see if there are `panic:` messages or similar using `kubectl logs -p repo-updater`.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#repo-updater-pods-available-percentage).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "critical_repo-updater_pods_available_percentage"
]
```

*Managed by the [Sourcegraph Source team](https://handbook.sourcegraph.com/departments/engineering/teams/source).*

**requests per second per replica over 10m**

**Descriptions**

- warning searcher: 5+ requests per second per replica over 10m

**Next steps**

- More help interpreting this metric is available in the [dashboards reference](dashboards#searcher-replica-traffic).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_searcher_replica_traffic"
]
```

*Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*

**unindexed search request errors every 5m by code**

**Descriptions**

- warning searcher: 5%+ unindexed search request errors every 5m by code for 5m0s

**Next steps**

- Learn more about the related dashboard panel in the [dashboards reference](dashboards#searcher-unindexed-search-request-errors).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_searcher_unindexed_search_request_errors"
]
```

*Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*

**maximum duration since last successful site configuration update (all "searcher" instances)**

**Descriptions**

- critical searcher: 300s+ maximum duration since last successful site configuration update (all "searcher" instances)

**Next steps**

- This indicates that one or more "searcher" instances have not successfully updated the site configuration in over 5 minutes. This could be due to networking issues between services or problems with the site configuration service itself.
- Check for relevant errors in the "searcher" logs, as well as the frontend's logs.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#searcher-searcher-site-configuration-duration-since-last-successful-update-by-instance).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "critical_searcher_searcher_site_configuration_duration_since_last_successful_update_by_instance"
]
```

*Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*

**mean blocked seconds per conn request**

**Descriptions**

- warning searcher: 0.1s+ mean blocked seconds per conn request for 10m0s
- critical searcher: 0.5s+ mean blocked seconds per conn request for 10m0s

**Next steps**

- Increase `SRC_PGSQL_MAX_OPEN`, and give the database more memory if needed
- Scale up Postgres memory/cpus - [see our scaling guide](https://sourcegraph.com/docs/admin/config/postgres-conf)
- If using GCP Cloud SQL, check for high lock waits or CPU usage in query insights
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#searcher-mean-blocked-seconds-per-conn-request).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_searcher_mean_blocked_seconds_per_conn_request",
  "critical_searcher_mean_blocked_seconds_per_conn_request"
]
```

*Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*

**container cpu usage total (1m average) across all cores by instance**

**Descriptions**

- warning searcher: 99%+ container cpu usage total (1m average) across all cores by instance

**Next steps**

- **Kubernetes:** Consider increasing CPU limits in the relevant `Deployment.yaml`.
- **Docker Compose:** Consider increasing `cpus:` of the searcher container in `docker-compose.yml`.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#searcher-container-cpu-usage).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_searcher_container_cpu_usage"
]
```

*Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*

**container memory usage by instance**

**Descriptions**

- warning searcher: 99%+ container memory usage by instance

**Next steps**

- **Kubernetes:** Consider increasing memory limit in relevant `Deployment.yaml`.
- **Docker Compose:** Consider increasing `memory:` of searcher container in `docker-compose.yml`.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#searcher-container-memory-usage).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_searcher_container_memory_usage"
]
```

*Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*

**container cpu usage total (90th percentile over 1d) across all cores by instance**

**Descriptions**

- warning searcher: 80%+ container cpu usage total (90th percentile over 1d) across all cores by instance for 336h0m0s

**Next steps**

- **Kubernetes:** Consider increasing CPU limits in the `Deployment.yaml` for the searcher service.
- **Docker Compose:** Consider increasing `cpus:` of the searcher container in `docker-compose.yml`.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#searcher-provisioning-container-cpu-usage-long-term).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_searcher_provisioning_container_cpu_usage_long_term"
]
```

*Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*

**container memory usage (1d maximum) by instance**

**Descriptions**

- warning searcher: 80%+ container memory usage (1d maximum) by instance for 336h0m0s

**Next steps**

- **Kubernetes:** Consider increasing memory limits in the `Deployment.yaml` for the searcher service.
- **Docker Compose:** Consider increasing `memory:` of the searcher container in `docker-compose.yml`.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#searcher-provisioning-container-memory-usage-long-term).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_searcher_provisioning_container_memory_usage_long_term"
]
```

*Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*

**container cpu usage total (5m maximum) across all cores by instance**

**Descriptions**

- warning searcher: 90%+ container cpu usage total (5m maximum) across all cores by instance for 30m0s

**Next steps**

- **Kubernetes:** Consider increasing CPU limits in the relevant `Deployment.yaml`.
- **Docker Compose:** Consider increasing `cpus:` of the searcher container in `docker-compose.yml`.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#searcher-provisioning-container-cpu-usage-short-term).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_searcher_provisioning_container_cpu_usage_short_term"
]
```

*Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*

**container memory usage (5m maximum) by instance**

**Descriptions**

- warning searcher: 90%+ container memory usage (5m maximum) by instance

**Next steps**

- **Kubernetes:** Consider increasing memory limit in relevant `Deployment.yaml`.
- **Docker Compose:** Consider increasing `memory:` of searcher container in `docker-compose.yml`.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#searcher-provisioning-container-memory-usage-short-term).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_searcher_provisioning_container_memory_usage_short_term"
]
```

*Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*

**container OOMKILL events total by instance**

**Descriptions**

- warning searcher: 1+ container OOMKILL events total by instance

**Next steps**

- **Kubernetes:** Consider increasing memory limit in relevant `Deployment.yaml`.
- **Docker Compose:** Consider increasing `memory:` of searcher container in `docker-compose.yml`.
- More help interpreting this metric is available in the [dashboards reference](dashboards#searcher-container-oomkill-events-total).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_searcher_container_oomkill_events_total"
]
```

*Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*

**maximum active goroutines**

**Descriptions**

- warning searcher: 10000+ maximum active goroutines for 10m0s

**Next steps**

- More help interpreting this metric is available in the [dashboards reference](dashboards#searcher-go-goroutines).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_searcher_go_goroutines"
]
```

*Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*

**maximum go garbage collection duration**

**Descriptions**

- warning searcher: 2s+ maximum go garbage collection duration

**Next steps**

- Learn more about the related dashboard panel in the [dashboards reference](dashboards#searcher-go-gc-duration-seconds).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_searcher_go_gc_duration_seconds"
]
```

*Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*

**percentage pods available**

**Descriptions**

- critical searcher: less than 90% percentage pods available for 10m0s

**Next steps**

- Determine if the pod was OOM killed using `kubectl describe pod searcher` (look for `OOMKilled: true`) and, if so, consider increasing the memory limit in the relevant `Deployment.yaml`.
- Check the logs before the container restarted to see if there are `panic:` messages or similar using `kubectl logs -p searcher`.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#searcher-pods-available-percentage).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "critical_searcher_pods_available_percentage"
]
```

*Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*

**maximum duration since last successful site configuration update (all "symbols" instances)**

**Descriptions**

- critical symbols: 300s+ maximum duration since last successful site configuration update (all "symbols" instances)

**Next steps**

- This indicates that one or more "symbols" instances have not successfully updated the site configuration in over 5 minutes. This could be due to networking issues between services or problems with the site configuration service itself.
- Check for relevant errors in the "symbols" logs, as well as the frontend's logs.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#symbols-symbols-site-configuration-duration-since-last-successful-update-by-instance).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "critical_symbols_symbols_site_configuration_duration_since_last_successful_update_by_instance"
]
```

*Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*

**mean blocked seconds per conn request**

**Descriptions**

- warning symbols: 0.1s+ mean blocked seconds per conn request for 10m0s
- critical symbols: 0.5s+ mean blocked seconds per conn request for 10m0s

**Next steps**

- Increase `SRC_PGSQL_MAX_OPEN`, and give the database more memory if needed
- Scale up Postgres memory/cpus - [see our scaling guide](https://sourcegraph.com/docs/admin/config/postgres-conf)
- If using GCP Cloud SQL, check for high lock waits or CPU usage in query insights
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#symbols-mean-blocked-seconds-per-conn-request).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_symbols_mean_blocked_seconds_per_conn_request",
  "critical_symbols_mean_blocked_seconds_per_conn_request"
]
```

*Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*
**container cpu usage total (1m average) across all cores by instance**

**Descriptions**

- warning symbols: 99%+ container cpu usage total (1m average) across all cores by instance

**Next steps**

- **Kubernetes:** Consider increasing CPU limits in the relevant `Deployment.yaml`.
- **Docker Compose:** Consider increasing `cpus:` of the symbols container in `docker-compose.yml`.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#symbols-container-cpu-usage).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_symbols_container_cpu_usage"
]
```

*Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*
**container memory usage by instance**

**Descriptions**

- warning symbols: 99%+ container memory usage by instance

**Next steps**

- **Kubernetes:** Consider increasing the memory limit in the relevant `Deployment.yaml`.
- **Docker Compose:** Consider increasing `memory:` of the symbols container in `docker-compose.yml`.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#symbols-container-memory-usage).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_symbols_container_memory_usage"
]
```

*Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*
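For either of the resource bumps above, Kubernetes also allows an imperative change; a minimal sketch with placeholder limit values (choose values suited to your workload, and mirror them in `Deployment.yaml` so a redeploy does not undo them):

```sh
# Raise CPU and memory limits on the symbols deployment; the values are
# placeholders, not recommendations.
kubectl set resources deployment symbols --limits=cpu=4,memory=8Gi

# Confirm the new limits.
kubectl describe deployment symbols | grep -A3 "Limits"
```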
**container cpu usage total (90th percentile over 1d) across all cores by instance**

**Descriptions**

- warning symbols: 80%+ container cpu usage total (90th percentile over 1d) across all cores by instance for 336h0m0s

**Next steps**

- **Kubernetes:** Consider increasing CPU limits in the `Deployment.yaml` for the symbols service.
- **Docker Compose:** Consider increasing `cpus:` of the symbols container in `docker-compose.yml`.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#symbols-provisioning-container-cpu-usage-long-term).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_symbols_provisioning_container_cpu_usage_long_term"
]
```

*Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*
**container memory usage (1d maximum) by instance**

**Descriptions**

- warning symbols: 80%+ container memory usage (1d maximum) by instance for 336h0m0s

**Next steps**

- **Kubernetes:** Consider increasing memory limits in the `Deployment.yaml` for the symbols service.
- **Docker Compose:** Consider increasing `memory:` of the symbols container in `docker-compose.yml`.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#symbols-provisioning-container-memory-usage-long-term).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_symbols_provisioning_container_memory_usage_long_term"
]
```

*Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*
**container cpu usage total (5m maximum) across all cores by instance**

**Descriptions**

- warning symbols: 90%+ container cpu usage total (5m maximum) across all cores by instance for 30m0s

**Next steps**

- **Kubernetes:** Consider increasing CPU limits in the relevant `Deployment.yaml`.
- **Docker Compose:** Consider increasing `cpus:` of the symbols container in `docker-compose.yml`.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#symbols-provisioning-container-cpu-usage-short-term).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_symbols_provisioning_container_cpu_usage_short_term"
]
```

*Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*
**container memory usage (5m maximum) by instance**

**Descriptions**

- warning symbols: 90%+ container memory usage (5m maximum) by instance

**Next steps**

- **Kubernetes:** Consider increasing the memory limit in the relevant `Deployment.yaml`.
- **Docker Compose:** Consider increasing `memory:` of the symbols container in `docker-compose.yml`.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#symbols-provisioning-container-memory-usage-short-term).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_symbols_provisioning_container_memory_usage_short_term"
]
```

*Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*
**container OOMKILL events total by instance**

**Descriptions**

- warning symbols: 1+ container OOMKILL events total by instance

**Next steps**

- **Kubernetes:** Consider increasing the memory limit in the relevant `Deployment.yaml`.
- **Docker Compose:** Consider increasing `memory:` of the symbols container in `docker-compose.yml`.
- More help interpreting this metric is available in the [dashboards reference](dashboards#symbols-container-oomkill-events-total).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_symbols_container_oomkill_events_total"
]
```

*Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*
**maximum active goroutines**

**Descriptions**

- warning symbols: 10000+ maximum active goroutines for 10m0s

**Next steps**

- More help interpreting this metric is available in the [dashboards reference](dashboards#symbols-go-goroutines).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_symbols_go_goroutines"
]
```

*Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*
**maximum go garbage collection duration**

**Descriptions**

- warning symbols: 2s+ maximum go garbage collection duration

**Next steps**

- Learn more about the related dashboard panel in the [dashboards reference](dashboards#symbols-go-gc-duration-seconds).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_symbols_go_gc_duration_seconds"
]
```

*Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*
**percentage pods available**

**Descriptions**

- critical symbols: less than 90% percentage pods available for 10m0s

**Next steps**

- Determine if the pod was OOM killed using `kubectl describe pod symbols` (look for `OOMKilled: true`) and, if so, consider increasing the memory limit in the relevant `Deployment.yaml`.
- Check the logs before the container restarted to see if there are `panic:` messages or similar using `kubectl logs -p symbols`.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#symbols-pods-available-percentage).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "critical_symbols_pods_available_percentage"
]
```

*Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*
**container cpu usage total (1m average) across all cores by instance**

**Descriptions**

- warning syntect-server: 99%+ container cpu usage total (1m average) across all cores by instance

**Next steps**

- **Kubernetes:** Consider increasing CPU limits in the relevant `Deployment.yaml`.
- **Docker Compose:** Consider increasing `cpus:` of the syntect-server container in `docker-compose.yml`.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#syntect-server-container-cpu-usage).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_syntect-server_container_cpu_usage"
]
```

*Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*
**container memory usage by instance**

**Descriptions**

- warning syntect-server: 99%+ container memory usage by instance

**Next steps**

- **Kubernetes:** Consider increasing the memory limit in the relevant `Deployment.yaml`.
- **Docker Compose:** Consider increasing `memory:` of the syntect-server container in `docker-compose.yml`.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#syntect-server-container-memory-usage).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_syntect-server_container_memory_usage"
]
```

*Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*
**container cpu usage total (90th percentile over 1d) across all cores by instance**

**Descriptions**

- warning syntect-server: 80%+ container cpu usage total (90th percentile over 1d) across all cores by instance for 336h0m0s

**Next steps**

- **Kubernetes:** Consider increasing CPU limits in the `Deployment.yaml` for the syntect-server service.
- **Docker Compose:** Consider increasing `cpus:` of the syntect-server container in `docker-compose.yml`.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#syntect-server-provisioning-container-cpu-usage-long-term).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_syntect-server_provisioning_container_cpu_usage_long_term"
]
```

*Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*
**container memory usage (1d maximum) by instance**

**Descriptions**

- warning syntect-server: 80%+ container memory usage (1d maximum) by instance for 336h0m0s

**Next steps**

- **Kubernetes:** Consider increasing memory limits in the `Deployment.yaml` for the syntect-server service.
- **Docker Compose:** Consider increasing `memory:` of the syntect-server container in `docker-compose.yml`.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#syntect-server-provisioning-container-memory-usage-long-term).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_syntect-server_provisioning_container_memory_usage_long_term"
]
```

*Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*
**container cpu usage total (5m maximum) across all cores by instance**

**Descriptions**

- warning syntect-server: 90%+ container cpu usage total (5m maximum) across all cores by instance for 30m0s

**Next steps**

- **Kubernetes:** Consider increasing CPU limits in the relevant `Deployment.yaml`.
- **Docker Compose:** Consider increasing `cpus:` of the syntect-server container in `docker-compose.yml`.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#syntect-server-provisioning-container-cpu-usage-short-term).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_syntect-server_provisioning_container_cpu_usage_short_term"
]
```

*Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*
**container memory usage (5m maximum) by instance**

**Descriptions**

- warning syntect-server: 90%+ container memory usage (5m maximum) by instance

**Next steps**

- **Kubernetes:** Consider increasing the memory limit in the relevant `Deployment.yaml`.
- **Docker Compose:** Consider increasing `memory:` of the syntect-server container in `docker-compose.yml`.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#syntect-server-provisioning-container-memory-usage-short-term).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_syntect-server_provisioning_container_memory_usage_short_term"
]
```

*Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*
**container OOMKILL events total by instance**

**Descriptions**

- warning syntect-server: 1+ container OOMKILL events total by instance

**Next steps**

- **Kubernetes:** Consider increasing the memory limit in the relevant `Deployment.yaml`.
- **Docker Compose:** Consider increasing `memory:` of the syntect-server container in `docker-compose.yml`.
- More help interpreting this metric is available in the [dashboards reference](dashboards#syntect-server-container-oomkill-events-total).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_syntect-server_container_oomkill_events_total"
]
```

*Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*
**percentage pods available**

**Descriptions**

- critical syntect-server: less than 90% percentage pods available for 10m0s

**Next steps**

- Determine if the pod was OOM killed using `kubectl describe pod syntect-server` (look for `OOMKilled: true`) and, if so, consider increasing the memory limit in the relevant `Deployment.yaml`.
- Check the logs before the container restarted to see if there are `panic:` messages or similar using `kubectl logs -p syntect-server`.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#syntect-server-pods-available-percentage).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "critical_syntect-server_pods_available_percentage"
]
```

*Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*
**average resolve revision duration over 5m**

**Descriptions**

- warning zoekt: 15s+ average resolve revision duration over 5m

**Next steps**

- Learn more about the related dashboard panel in the [dashboards reference](dashboards#zoekt-average-resolve-revision-duration).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_zoekt_average_resolve_revision_duration"
]
```

*Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*
**the number of repositories we failed to get indexing options over 5m**

**Descriptions**

- warning zoekt: 100+ the number of repositories we failed to get indexing options over 5m for 5m0s
- critical zoekt: 100+ the number of repositories we failed to get indexing options over 5m for 35m0s

**Next steps**

- View error rates on gitserver and frontend to identify the root cause.
- Roll back the frontend/gitserver deployment if the errors are due to a bad code change.
- More help interpreting this metric is available in the [dashboards reference](dashboards#zoekt-get-index-options-error-increase).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_zoekt_get_index_options_error_increase",
  "critical_zoekt_get_index_options_error_increase"
]
```

*Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*
**indexed search request errors every 5m by code**

**Descriptions**

- warning zoekt: 5%+ indexed search request errors every 5m by code for 5m0s

**Next steps**

- Learn more about the related dashboard panel in the [dashboards reference](dashboards#zoekt-indexed-search-request-errors).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_zoekt_indexed_search_request_errors"
]
```

*Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*
**process memory map areas percentage used (per instance)**

**Descriptions**

- warning zoekt: 60%+ process memory map areas percentage used (per instance)
- critical zoekt: 80%+ process memory map areas percentage used (per instance)

**Next steps**

- If you are running out of memory map areas, you could resolve this by:
  - Enabling shard merging for Zoekt: Set `SRC_ENABLE_SHARD_MERGING="1"` for zoekt-indexserver. Use this option if your corpus of repositories has a high percentage of small, rarely updated repositories. See the [documentation](https://sourcegraph.com/docs/code-search/features#shard-merging).
  - Creating additional Zoekt replicas: This spreads all the shards out amongst more replicas, which means that each _individual_ replica will have fewer shards. This, in turn, decreases the number of memory map areas that a _single_ replica has to create (in order to load the shards into memory).
  - Increasing the virtual memory subsystem's `max_map_count` parameter, which defines the upper limit of memory areas a process can use. The default value of `max_map_count` is usually 65536. We recommend setting this value to 2x the number of repos to be indexed per Zoekt instance. This means that if you want to index 240k repositories with 3 Zoekt instances, you should set `max_map_count` to (240000 / 3) * 2 = 160000. The exact instructions for tuning this parameter can differ depending on your environment; see https://kernel.org/doc/Documentation/sysctl/vm.txt for more information, and the example below this alert's silence configuration.
- More help interpreting this metric is available in the [dashboards reference](dashboards#zoekt-memory-map-areas-percentage-used).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_zoekt_memory_map_areas_percentage_used",
  "critical_zoekt_memory_map_areas_percentage_used"
]
```

*Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*
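A minimal sketch of the `max_map_count` tuning described above, using the example figures of 240k repositories across 3 Zoekt instances; run this on each Linux host serving Zoekt (on Kubernetes the parameter must be applied to the node, for example via a privileged init container, rather than inside the Zoekt pod):

```sh
# Show the current limit (the kernel default is usually 65536).
sysctl vm.max_map_count

# (240000 repos / 3 instances) * 2x headroom = 160000
sudo sysctl -w vm.max_map_count=160000

# Persist the setting across reboots.
echo "vm.max_map_count=160000" | sudo tee /etc/sysctl.d/99-zoekt.conf
```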
**container cpu usage total (1m average) across all cores by instance**

**Descriptions**

- warning zoekt: 99%+ container cpu usage total (1m average) across all cores by instance

**Next steps**

- **Kubernetes:** Consider increasing CPU limits in the relevant `Deployment.yaml`.
- **Docker Compose:** Consider increasing `cpus:` of the zoekt-indexserver container in `docker-compose.yml`.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#zoekt-container-cpu-usage).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_zoekt_container_cpu_usage"
]
```

*Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*
**container memory usage by instance**

**Descriptions**

- warning zoekt: 99%+ container memory usage by instance

**Next steps**

- **Kubernetes:** Consider increasing the memory limit in the relevant `Deployment.yaml`.
- **Docker Compose:** Consider increasing `memory:` of the zoekt-indexserver container in `docker-compose.yml`.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#zoekt-container-memory-usage).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_zoekt_container_memory_usage"
]
```

*Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*
**container cpu usage total (1m average) across all cores by instance**

**Descriptions**

- warning zoekt: 99%+ container cpu usage total (1m average) across all cores by instance

**Next steps**

- **Kubernetes:** Consider increasing CPU limits in the relevant `Deployment.yaml`.
- **Docker Compose:** Consider increasing `cpus:` of the zoekt-webserver container in `docker-compose.yml`.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#zoekt-container-cpu-usage).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_zoekt_container_cpu_usage"
]
```

*Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*
**container memory usage by instance**

**Descriptions**

- warning zoekt: 99%+ container memory usage by instance

**Next steps**

- **Kubernetes:** Consider increasing the memory limit in the relevant `Deployment.yaml`.
- **Docker Compose:** Consider increasing `memory:` of the zoekt-webserver container in `docker-compose.yml`.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#zoekt-container-memory-usage).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_zoekt_container_memory_usage"
]
```

*Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*
**container cpu usage total (90th percentile over 1d) across all cores by instance**

**Descriptions**

- warning zoekt: 80%+ container cpu usage total (90th percentile over 1d) across all cores by instance for 336h0m0s

**Next steps**

- **Kubernetes:** Consider increasing CPU limits in the `Deployment.yaml` for the zoekt-indexserver service.
- **Docker Compose:** Consider increasing `cpus:` of the zoekt-indexserver container in `docker-compose.yml`.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#zoekt-provisioning-container-cpu-usage-long-term).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_zoekt_provisioning_container_cpu_usage_long_term"
]
```

*Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*
**container memory usage (1d maximum) by instance**

**Descriptions**

- warning zoekt: 80%+ container memory usage (1d maximum) by instance for 336h0m0s

**Next steps**

- **Kubernetes:** Consider increasing memory limits in the `Deployment.yaml` for the zoekt-indexserver service.
- **Docker Compose:** Consider increasing `memory:` of the zoekt-indexserver container in `docker-compose.yml`.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#zoekt-provisioning-container-memory-usage-long-term).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_zoekt_provisioning_container_memory_usage_long_term"
]
```

*Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*
**container cpu usage total (5m maximum) across all cores by instance**

**Descriptions**

- warning zoekt: 90%+ container cpu usage total (5m maximum) across all cores by instance for 30m0s

**Next steps**

- **Kubernetes:** Consider increasing CPU limits in the relevant `Deployment.yaml`.
- **Docker Compose:** Consider increasing `cpus:` of the zoekt-indexserver container in `docker-compose.yml`.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#zoekt-provisioning-container-cpu-usage-short-term).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_zoekt_provisioning_container_cpu_usage_short_term"
]
```

*Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*
**container memory usage (5m maximum) by instance**

**Descriptions**

- warning zoekt: 90%+ container memory usage (5m maximum) by instance

**Next steps**

- **Kubernetes:** Consider increasing the memory limit in the relevant `Deployment.yaml`.
- **Docker Compose:** Consider increasing `memory:` of the zoekt-indexserver container in `docker-compose.yml`.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#zoekt-provisioning-container-memory-usage-short-term).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_zoekt_provisioning_container_memory_usage_short_term"
]
```

*Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*
**container OOMKILL events total by instance**

**Descriptions**

- warning zoekt: 1+ container OOMKILL events total by instance

**Next steps**

- **Kubernetes:** Consider increasing the memory limit in the relevant `Deployment.yaml`.
- **Docker Compose:** Consider increasing `memory:` of the zoekt-indexserver container in `docker-compose.yml`.
- More help interpreting this metric is available in the [dashboards reference](dashboards#zoekt-container-oomkill-events-total).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_zoekt_container_oomkill_events_total"
]
```

*Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*
**container cpu usage total (90th percentile over 1d) across all cores by instance**

**Descriptions**

- warning zoekt: 80%+ container cpu usage total (90th percentile over 1d) across all cores by instance for 336h0m0s

**Next steps**

- **Kubernetes:** Consider increasing CPU limits in the `Deployment.yaml` for the zoekt-webserver service.
- **Docker Compose:** Consider increasing `cpus:` of the zoekt-webserver container in `docker-compose.yml`.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#zoekt-provisioning-container-cpu-usage-long-term).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_zoekt_provisioning_container_cpu_usage_long_term"
]
```

*Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*
**container memory usage (1d maximum) by instance**

**Descriptions**

- warning zoekt: 80%+ container memory usage (1d maximum) by instance for 336h0m0s

**Next steps**

- **Kubernetes:** Consider increasing memory limits in the `Deployment.yaml` for the zoekt-webserver service.
- **Docker Compose:** Consider increasing `memory:` of the zoekt-webserver container in `docker-compose.yml`.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#zoekt-provisioning-container-memory-usage-long-term).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_zoekt_provisioning_container_memory_usage_long_term"
]
```

*Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*
**container cpu usage total (5m maximum) across all cores by instance**

**Descriptions**

- warning zoekt: 90%+ container cpu usage total (5m maximum) across all cores by instance for 30m0s

**Next steps**

- **Kubernetes:** Consider increasing CPU limits in the relevant `Deployment.yaml`.
- **Docker Compose:** Consider increasing `cpus:` of the zoekt-webserver container in `docker-compose.yml`.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#zoekt-provisioning-container-cpu-usage-short-term).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_zoekt_provisioning_container_cpu_usage_short_term"
]
```

*Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*
**container memory usage (5m maximum) by instance**

**Descriptions**

- warning zoekt: 90%+ container memory usage (5m maximum) by instance

**Next steps**

- **Kubernetes:** Consider increasing the memory limit in the relevant `Deployment.yaml`.
- **Docker Compose:** Consider increasing `memory:` of the zoekt-webserver container in `docker-compose.yml`.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#zoekt-provisioning-container-memory-usage-short-term).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_zoekt_provisioning_container_memory_usage_short_term"
]
```

*Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*
**container OOMKILL events total by instance**

**Descriptions**

- warning zoekt: 1+ container OOMKILL events total by instance

**Next steps**

- **Kubernetes:** Consider increasing the memory limit in the relevant `Deployment.yaml`.
- **Docker Compose:** Consider increasing `memory:` of the zoekt-webserver container in `docker-compose.yml`.
- More help interpreting this metric is available in the [dashboards reference](dashboards#zoekt-container-oomkill-events-total).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_zoekt_container_oomkill_events_total"
]
```

*Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*
**percentage pods available**

**Descriptions**

- critical zoekt: less than 90% percentage pods available for 10m0s

**Next steps**

- Determine if the pod was OOM killed using `kubectl describe pod indexed-search` (look for `OOMKilled: true`) and, if so, consider increasing the memory limit in the relevant `Deployment.yaml`.
- Check the logs before the container restarted to see if there are `panic:` messages or similar using `kubectl logs -p indexed-search`.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#zoekt-pods-available-percentage).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "critical_zoekt_pods_available_percentage"
]
```

*Managed by the [Sourcegraph Search Platform team](https://handbook.sourcegraph.com/departments/engineering/teams/search/core).*
**average prometheus rule group evaluation duration over 10m by rule group**

**Descriptions**

- warning prometheus: 30s+ average prometheus rule group evaluation duration over 10m by rule group

**Next steps**

- Check the Container monitoring (not available on server) panels and try increasing resources for Prometheus if necessary.
- If the rule group taking a long time to evaluate belongs to `/sg_prometheus_addons`, try reducing the complexity of any custom Prometheus rules provided.
- If the rule group taking a long time to evaluate belongs to `/sg_config_prometheus`, please [open an issue](https://github.com/sourcegraph/sourcegraph/issues/new?assignees=&labels=&template=bug_report.md&title=).
- More help interpreting this metric is available in the [dashboards reference](dashboards#prometheus-prometheus-rule-eval-duration).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_prometheus_prometheus_rule_eval_duration"
]
```

*Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*
**failed prometheus rule evaluations over 5m by rule group**

**Descriptions**

- warning prometheus: 0+ failed prometheus rule evaluations over 5m by rule group

**Next steps**

- Check Prometheus logs for messages related to rule group evaluation (generally with log field `component="rule manager"`).
- If the rule group failing to evaluate belongs to `/sg_prometheus_addons`, ensure any custom Prometheus configuration provided is valid.
- If the rule group failing to evaluate belongs to `/sg_config_prometheus`, please [open an issue](https://github.com/sourcegraph/sourcegraph/issues/new?assignees=&labels=&template=bug_report.md&title=).
- More help interpreting this metric is available in the [dashboards reference](dashboards#prometheus-prometheus-rule-eval-failures).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_prometheus_prometheus_rule_eval_failures"
]
```

*Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*
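A minimal sketch of the log check above, assuming a Kubernetes deployment named `prometheus` as in the standard manifests:

```sh
# Surface rule-manager messages, which include rule group evaluation failures.
kubectl logs deployment/prometheus --tail=1000 | grep 'component="rule manager"'
```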
**alertmanager notification latency over 1m by integration**

**Descriptions**

- warning prometheus: 1s+ alertmanager notification latency over 1m by integration

**Next steps**

- Check the Container monitoring (not available on server) panels and try increasing resources for Prometheus if necessary.
- Ensure that your [`observability.alerts` configuration](https://sourcegraph.com/docs/admin/observability/alerting#setting-up-alerting) (in site configuration) is valid.
- Check if the relevant alert integration service is experiencing downtime or issues.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#prometheus-alertmanager-notification-latency).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_prometheus_alertmanager_notification_latency"
]
```

*Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*
**failed alertmanager notifications over 1m by integration**

**Descriptions**

- warning prometheus: 0+ failed alertmanager notifications over 1m by integration

**Next steps**

- Ensure that your [`observability.alerts` configuration](https://sourcegraph.com/docs/admin/observability/alerting#setting-up-alerting) (in site configuration) is valid.
- Check if the relevant alert integration service is experiencing downtime or issues.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#prometheus-alertmanager-notification-failures).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_prometheus_alertmanager_notification_failures"
]
```

*Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*
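For reference when validating the configuration, a minimal `observability.alerts` site configuration entry has the following shape; the Slack webhook URL is a placeholder, and the full notifier schema is in the alerting documentation linked above:

```json
"observability.alerts": [
  {
    "level": "critical",
    "notifier": {
      "type": "slack",
      "url": "https://hooks.slack.com/services/..."
    }
  }
]
```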
**prometheus configuration reload status**

**Descriptions**

- warning prometheus: less than 1 prometheus configuration reload status

**Next steps**

- Check Prometheus logs for messages related to configuration loading.
- Ensure any [custom configuration you have provided Prometheus](https://sourcegraph.com/docs/admin/observability/metrics#prometheus-configuration) is valid.
- More help interpreting this metric is available in the [dashboards reference](dashboards#prometheus-prometheus-config-status).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_prometheus_prometheus_config_status"
]
```

*Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*
**alertmanager configuration reload status**

**Descriptions**

- warning prometheus: less than 1 alertmanager configuration reload status

**Next steps**

- Ensure that your [`observability.alerts` configuration](https://sourcegraph.com/docs/admin/observability/alerting#setting-up-alerting) (in site configuration) is valid.
- More help interpreting this metric is available in the [dashboards reference](dashboards#prometheus-alertmanager-config-status).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_prometheus_alertmanager_config_status"
]
```

*Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*
**prometheus tsdb failures by operation over 1m by operation**

**Descriptions**

- warning prometheus: 0+ prometheus tsdb failures by operation over 1m by operation

**Next steps**

- Check Prometheus logs for messages related to the failing operation.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#prometheus-prometheus-tsdb-op-failure).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_prometheus_prometheus_tsdb_op_failure"
]
```

*Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*
**prometheus scrapes that exceed the sample limit over 10m**

**Descriptions**

- warning prometheus: 0+ prometheus scrapes that exceed the sample limit over 10m

**Next steps**

- Check Prometheus logs for messages related to target scrape failures.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#prometheus-prometheus-target-sample-exceeded).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_prometheus_prometheus_target_sample_exceeded"
]
```

*Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*
**prometheus scrapes rejected due to duplicate timestamps over 10m**

**Descriptions**

- warning prometheus: 0+ prometheus scrapes rejected due to duplicate timestamps over 10m

**Next steps**

- Check Prometheus logs for messages related to target scrape failures.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#prometheus-prometheus-target-sample-duplicate).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_prometheus_prometheus_target_sample_duplicate"
]
```

*Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*
**container cpu usage total (1m average) across all cores by instance**

**Descriptions**

- warning prometheus: 99%+ container cpu usage total (1m average) across all cores by instance

**Next steps**

- **Kubernetes:** Consider increasing CPU limits in the relevant `Deployment.yaml`.
- **Docker Compose:** Consider increasing `cpus:` of the prometheus container in `docker-compose.yml`.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#prometheus-container-cpu-usage).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_prometheus_container_cpu_usage"
]
```

*Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*
**container memory usage by instance**

**Descriptions**

- warning prometheus: 99%+ container memory usage by instance

**Next steps**

- **Kubernetes:** Consider increasing the memory limit in the relevant `Deployment.yaml`.
- **Docker Compose:** Consider increasing `memory:` of the prometheus container in `docker-compose.yml`.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#prometheus-container-memory-usage).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_prometheus_container_memory_usage"
]
```

*Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*
**container cpu usage total (90th percentile over 1d) across all cores by instance**

**Descriptions**

- warning prometheus: 80%+ container cpu usage total (90th percentile over 1d) across all cores by instance for 336h0m0s

**Next steps**

- **Kubernetes:** Consider increasing CPU limits in the `Deployment.yaml` for the prometheus service.
- **Docker Compose:** Consider increasing `cpus:` of the prometheus container in `docker-compose.yml`.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#prometheus-provisioning-container-cpu-usage-long-term).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_prometheus_provisioning_container_cpu_usage_long_term"
]
```

*Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*
**container memory usage (1d maximum) by instance**

**Descriptions**

- warning prometheus: 80%+ container memory usage (1d maximum) by instance for 336h0m0s

**Next steps**

- **Kubernetes:** Consider increasing memory limits in the `Deployment.yaml` for the prometheus service.
- **Docker Compose:** Consider increasing `memory:` of the prometheus container in `docker-compose.yml`.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#prometheus-provisioning-container-memory-usage-long-term).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_prometheus_provisioning_container_memory_usage_long_term"
]
```

*Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*
**container cpu usage total (5m maximum) across all cores by instance**

**Descriptions**

- warning prometheus: 90%+ container cpu usage total (5m maximum) across all cores by instance for 30m0s

**Next steps**

- **Kubernetes:** Consider increasing CPU limits in the relevant `Deployment.yaml`.
- **Docker Compose:** Consider increasing `cpus:` of the prometheus container in `docker-compose.yml`.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#prometheus-provisioning-container-cpu-usage-short-term).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_prometheus_provisioning_container_cpu_usage_short_term"
]
```

*Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*
**container memory usage (5m maximum) by instance**

**Descriptions**

- warning prometheus: 90%+ container memory usage (5m maximum) by instance

**Next steps**

- **Kubernetes:** Consider increasing the memory limit in the relevant `Deployment.yaml`.
- **Docker Compose:** Consider increasing `memory:` of the prometheus container in `docker-compose.yml`.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#prometheus-provisioning-container-memory-usage-short-term).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_prometheus_provisioning_container_memory_usage_short_term"
]
```

*Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*
**container OOMKILL events total by instance**

**Descriptions**

- warning prometheus: 1+ container OOMKILL events total by instance

**Next steps**

- **Kubernetes:** Consider increasing the memory limit in the relevant `Deployment.yaml`.
- **Docker Compose:** Consider increasing `memory:` of the prometheus container in `docker-compose.yml`.
- More help interpreting this metric is available in the [dashboards reference](dashboards#prometheus-container-oomkill-events-total).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_prometheus_container_oomkill_events_total"
]
```

*Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*
**percentage pods available**

**Descriptions**

- critical prometheus: less than 90% percentage pods available for 10m0s

**Next steps**

- Determine if the pod was OOM killed using `kubectl describe pod prometheus` (look for `OOMKilled: true`) and, if so, consider increasing the memory limit in the relevant `Deployment.yaml`.
- Check the logs before the container restarted to see if there are `panic:` messages or similar using `kubectl logs -p prometheus`.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#prometheus-pods-available-percentage).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "critical_prometheus_pods_available_percentage"
]
```

*Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*
**executor active handlers**

**Descriptions**

- critical executor: 0 active executor handlers and > 0 queue size for 5m0s

**Next steps**

- Check the state of any compute VMs; they may be taking longer than expected to boot.
- Make sure the executors appear under Site Admin > Executors.
- Check the Grafana dashboard section for APIClient; it should show frequent requests to Dequeue and Heartbeat, and those requests must not fail.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#executor-executor-handlers).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "critical_executor_executor_handlers"
]
```

*Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*
**executor operation error rate over 5m**

**Descriptions**

- warning executor: 100%+ executor operation error rate over 5m for 1h0m0s

**Next steps**

- Determine the cause of failure from the auto-indexing job logs in the site-admin page.
- This alert fires if all executor jobs have been failing for the past hour. The alert will continue for up to 5 hours until the error rate is no longer 100%, even if there are no running jobs in that time, as the problem is not known to be resolved until jobs start succeeding again.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#executor-executor-processor-error-rate).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_executor_executor_processor_error_rate"
]
```

*Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*
**maximum active goroutines**

**Descriptions**

- warning executor: 10000+ maximum active goroutines for 10m0s

**Next steps**

- More help interpreting this metric is available in the [dashboards reference](dashboards#executor-go-goroutines).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_executor_go_goroutines"
]
```

*Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*
**maximum go garbage collection duration**

**Descriptions**

- warning executor: 2s+ maximum go garbage collection duration

**Next steps**

- Learn more about the related dashboard panel in the [dashboards reference](dashboards#executor-go-gc-duration-seconds).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_executor_go_gc_duration_seconds"
]
```

*Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*
**repository queue longest time in queue**

**Descriptions**

- warning codeintel-uploads: 3600s+ repository queue longest time in queue

**Next steps**

- An alert here generally indicates underprovisioned worker instance(s), an underprovisioned main Postgres instance, or both.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#codeintel-uploads-codeintel-commit-graph-queued-max-age).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_codeintel-uploads_codeintel_commit_graph_queued_max_age"
]
```

*Managed by the [Sourcegraph Code intelligence team](https://handbook.sourcegraph.com/departments/engineering/teams/code-intelligence).*
**rate of growth of export queue over 30m**

**Descriptions**

- warning telemetry: 1+ rate of growth of export queue over 30m for 1h0m0s
- critical telemetry: 1+ rate of growth of export queue over 30m for 36h0m0s

**Next steps**

- Check the "number of events exported per batch over 30m" dashboard panel to see if export throughput is at saturation.
- Increase `TELEMETRY_GATEWAY_EXPORTER_EXPORT_BATCH_SIZE` to export more events per batch.
- Reduce `TELEMETRY_GATEWAY_EXPORTER_EXPORT_INTERVAL` to schedule more export jobs.
- See worker logs in the `worker.telemetrygateway-exporter` log scope for more details to see if any export errors are occurring - if logs only indicate that exports failed, reach out to Sourcegraph with relevant log entries, as this may be an issue in Sourcegraph's Telemetry Gateway service.
- More help interpreting this metric is available in the [dashboards reference](dashboards#telemetry-telemetry-gateway-exporter-queue-growth).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_telemetry_telemetry_gateway_exporter_queue_growth",
  "critical_telemetry_telemetry_gateway_exporter_queue_growth"
]
```

*Managed by the [Sourcegraph Data & Analytics team](https://handbook.sourcegraph.com/departments/engineering/teams/data-analytics).*
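Both knobs above are environment variables on the worker service; a minimal sketch for Kubernetes, where the deployment name `worker` and the values are illustrative assumptions to adapt:

```sh
# Export more events per batch and schedule export jobs more frequently;
# the values here are examples, not recommendations.
kubectl set env deployment/worker \
  TELEMETRY_GATEWAY_EXPORTER_EXPORT_BATCH_SIZE=5000 \
  TELEMETRY_GATEWAY_EXPORTER_EXPORT_INTERVAL=2m
```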
**events exporter operation errors every 30m**

**Descriptions**

- warning telemetry: 0+ events exporter operation errors every 30m

**Next steps**

- Failures indicate that exporting of telemetry events from Sourcegraph is failing. This may affect the performance of the database as the backlog grows.
- See worker logs in the `worker.telemetrygateway-exporter` log scope for more details. If logs only indicate that exports failed, reach out to Sourcegraph with relevant log entries, as this may be an issue in Sourcegraph's Telemetry Gateway service.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#telemetry-telemetrygatewayexporter-exporter-errors-total).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_telemetry_telemetrygatewayexporter_exporter_errors_total"
]
```

*Managed by the [Sourcegraph Data & Analytics team](https://handbook.sourcegraph.com/departments/engineering/teams/data-analytics).*
## telemetry: telemetrygatewayexporter_queue_cleanup_errors_total

export queue cleanup operation errors every 30m

**Descriptions**

- warning telemetry: 0+ export queue cleanup operation errors every 30m

**Next steps**

- Failures indicate that pruning of already-exported telemetry events from the database is failing. This may affect the performance of the database as the export queue table grows.
- See worker logs in the `worker.telemetrygateway-exporter` log scope for more details.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#telemetry-telemetrygatewayexporter-queue-cleanup-errors-total).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_telemetry_telemetrygatewayexporter_queue_cleanup_errors_total"
]
```

*Managed by the [Sourcegraph Data & Analytics team](https://handbook.sourcegraph.com/departments/engineering/teams/data-analytics).*
## telemetry: telemetrygatewayexporter_queue_metrics_reporter_errors_total

export backlog metrics reporting operation errors every 30m

**Descriptions**

- warning telemetry: 0+ export backlog metrics reporting operation errors every 30m

**Next steps**

- Failures indicate that reporting of telemetry events metrics is failing. This may affect the reliability of telemetry events export metrics.
- See worker logs in the `worker.telemetrygateway-exporter` log scope for more details.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#telemetry-telemetrygatewayexporter-queue-metrics-reporter-errors-total).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_telemetry_telemetrygatewayexporter_queue_metrics_reporter_errors_total"
]
```

*Managed by the [Sourcegraph Data & Analytics team](https://handbook.sourcegraph.com/departments/engineering/teams/data-analytics).*
## telemetry: telemetry_job_error_rate

usage data exporter operation error rate over 5m

**Descriptions**

- warning telemetry: 0%+ usage data exporter operation error rate over 5m for 30m0s

**Next steps**

- Involve the Cloud team to inspect logs of the managed instance to determine error sources.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#telemetry-telemetry-job-error-rate).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_telemetry_telemetry_job_error_rate"
]
```

*Managed by the [Sourcegraph Data & Analytics team](https://handbook.sourcegraph.com/departments/engineering/teams/data-analytics).*
## telemetry: telemetry_job_utilized_throughput

utilized percentage of maximum throughput

**Descriptions**

- warning telemetry: 90%+ utilized percentage of maximum throughput for 30m0s

**Next steps**

- Throughput utilization is high. This could be a signal that this instance is producing too many events for the export job to keep up. Configure more throughput using the `maxBatchSize` option.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#telemetry-telemetry-job-utilized-throughput).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_telemetry_telemetry_job_utilized_throughput"
]
```

*Managed by the [Sourcegraph Data & Analytics team](https://handbook.sourcegraph.com/departments/engineering/teams/data-analytics).*
## otel-collector: otel_span_refused

spans refused per receiver

**Descriptions**

- warning otel-collector: 1+ spans refused per receiver for 5m0s

**Next steps**

- Check the logs of the collector and the configuration of the receiver.
- More help interpreting this metric is available in the [dashboards reference](dashboards#otel-collector-otel-span-refused).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_otel-collector_otel_span_refused"
]
```

*Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*
## otel-collector: otel_span_export_failures

span export failures by exporter

**Descriptions**

- warning otel-collector: 1+ span export failures by exporter for 5m0s

**Next steps**

- Check the configuration of the exporter and confirm that the service being exported to is up.
- More help interpreting this metric is available in the [dashboards reference](dashboards#otel-collector-otel-span-export-failures).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_otel-collector_otel_span_export_failures"
]
```

*Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*
## otel-collector: otelcol_exporter_enqueue_failed_spans

exporter enqueue failed spans

**Descriptions**

- warning otel-collector: 0+ exporter enqueue failed spans for 5m0s

**Next steps**

- Check the configuration of the exporter and confirm that the service being exported to is up. This may be caused by a queue full of unsettled elements, so you may need to decrease your sending rate or horizontally scale collectors (see the sketch after this entry).
- More help interpreting this metric is available in the [dashboards reference](dashboards#otel-collector-otelcol-exporter-enqueue-failed-spans).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_otel-collector_otelcol_exporter_enqueue_failed_spans"
]
```

*Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*
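The capacity of the exporter's sending queue is configurable in the collector configuration. A minimal sketch, assuming an OTLP exporter; `sending_queue` is a standard collector exporter option, but the endpoint and values here are illustrative only:

```yaml
# Hypothetical otel-collector exporter configuration fragment.
exporters:
  otlp:
    endpoint: otel-backend:4317   # example endpoint, not a real default
    sending_queue:
      enabled: true
      num_consumers: 10   # consumers draining the queue in parallel
      queue_size: 5000    # raise this if enqueue failures persist
```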
## otel-collector: otelcol_processor_dropped_spans

spans dropped per processor per minute

**Descriptions**

- warning otel-collector: 0+ spans dropped per processor per minute for 5m0s

**Next steps**

- Check the configuration of the processor.
- More help interpreting this metric is available in the [dashboards reference](dashboards#otel-collector-otelcol-processor-dropped-spans).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_otel-collector_otelcol_processor_dropped_spans"
]
```

*Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*
## otel-collector: container_cpu_usage

container cpu usage total (1m average) across all cores by instance

**Descriptions**

- warning otel-collector: 99%+ container cpu usage total (1m average) across all cores by instance

**Next steps**

- **Kubernetes:** Consider increasing CPU limits in the relevant `Deployment.yaml` (an illustrative fragment follows this entry).
- **Docker Compose:** Consider increasing `cpus:` of the otel-collector container in `docker-compose.yml`.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#otel-collector-container-cpu-usage).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_otel-collector_container_cpu_usage"
]
```

*Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*
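As a concrete illustration of the Kubernetes remediation, a sketch of a raised CPU limit in the otel-collector `Deployment.yaml`; the figures are placeholders, not sizing guidance:

```yaml
# Hypothetical resources stanza for the otel-collector container;
# adjust to observed usage rather than copying these values.
spec:
  template:
    spec:
      containers:
        - name: otel-collector
          resources:
            requests:
              cpu: "1"
            limits:
              cpu: "2"   # raise if the container is consistently CPU-bound
```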
## otel-collector: container_memory_usage

container memory usage by instance

**Descriptions**

- warning otel-collector: 99%+ container memory usage by instance

**Next steps**

- **Kubernetes:** Consider increasing the memory limit in the relevant `Deployment.yaml`.
- **Docker Compose:** Consider increasing `memory:` of the otel-collector container in `docker-compose.yml` (an illustrative fragment follows this entry).
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#otel-collector-container-memory-usage).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_otel-collector_container_memory_usage"
]
```

*Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*
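And the Docker Compose equivalent, a minimal sketch assuming a compose file that declares limits under `deploy.resources`; the limit shown is an example only:

```yaml
# Hypothetical docker-compose.yml fragment for the otel-collector service.
services:
  otel-collector:
    deploy:
      resources:
        limits:
          memory: 4g   # raise if memory usage stays near 100%
```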
## otel-collector: pods_available_percentage

percentage pods available

**Descriptions**

- critical otel-collector: less than 90% percentage pods available for 10m0s

**Next steps**

- Determine if the pod was OOM killed using `kubectl describe pod otel-collector` (look for `OOMKilled: true`) and, if so, consider increasing the memory limit in the relevant `Deployment.yaml`.
- Check the logs before the container restarted to see if there are `panic:` messages or similar using `kubectl logs -p otel-collector`.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#otel-collector-pods-available-percentage).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "critical_otel-collector_pods_available_percentage"
]
```

*Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*
## embeddings: embeddings_site_configuration_duration_since_last_successful_update_by_instance

maximum duration since last successful site configuration update (all "embeddings" instances)

**Descriptions**

- critical embeddings: 300s+ maximum duration since last successful site configuration update (all "embeddings" instances)

**Next steps**

- This indicates that one or more "embeddings" instances have not successfully updated the site configuration in over 5 minutes. This could be due to networking issues between services or problems with the site configuration service itself.
- Check for relevant errors in the "embeddings" logs, as well as the frontend's logs.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#embeddings-embeddings-site-configuration-duration-since-last-successful-update-by-instance).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "critical_embeddings_embeddings_site_configuration_duration_since_last_successful_update_by_instance"
]
```

*Managed by the [Sourcegraph Infrastructure Org team](https://handbook.sourcegraph.com/departments/engineering/infrastructure).*
## embeddings: mean_blocked_seconds_per_conn_request

mean blocked seconds per conn request

**Descriptions**

- warning embeddings: 0.1s+ mean blocked seconds per conn request for 10m0s
- critical embeddings: 0.5s+ mean blocked seconds per conn request for 10m0s

**Next steps**

- Increase `SRC_PGSQL_MAX_OPEN`, together with giving the database more memory if needed (an illustrative fragment follows this entry).
- Scale up Postgres memory/cpus - [see our scaling guide](https://sourcegraph.com/docs/admin/config/postgres-conf).
- If using GCP Cloud SQL, check for high lock waits or CPU usage in query insights.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#embeddings-mean-blocked-seconds-per-conn-request).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_embeddings_mean_blocked_seconds_per_conn_request",
  "critical_embeddings_mean_blocked_seconds_per_conn_request"
]
```

*Managed by the [Sourcegraph Cody team](https://handbook.sourcegraph.com/departments/engineering/teams/cody).*
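`SRC_PGSQL_MAX_OPEN` is an environment variable on the service. A minimal sketch of raising it on a Kubernetes deployment, assuming the standard embeddings Deployment; the value is an example and should stay within Postgres's `max_connections`:

```yaml
# Hypothetical fragment of the embeddings Deployment.yaml.
spec:
  template:
    spec:
      containers:
        - name: embeddings
          env:
            - name: SRC_PGSQL_MAX_OPEN
              value: "30"   # example pool ceiling; size to your database
```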
## embeddings: container_cpu_usage

container cpu usage total (1m average) across all cores by instance

**Descriptions**

- warning embeddings: 99%+ container cpu usage total (1m average) across all cores by instance

**Next steps**

- **Kubernetes:** Consider increasing CPU limits in the relevant `Deployment.yaml`.
- **Docker Compose:** Consider increasing `cpus:` of the embeddings container in `docker-compose.yml`.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#embeddings-container-cpu-usage).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_embeddings_container_cpu_usage"
]
```

*Managed by the [Sourcegraph Cody team](https://handbook.sourcegraph.com/departments/engineering/teams/cody).*
## embeddings: container_memory_usage

container memory usage by instance

**Descriptions**

- warning embeddings: 99%+ container memory usage by instance

**Next steps**

- **Kubernetes:** Consider increasing the memory limit in the relevant `Deployment.yaml`.
- **Docker Compose:** Consider increasing `memory:` of the embeddings container in `docker-compose.yml`.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#embeddings-container-memory-usage).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_embeddings_container_memory_usage"
]
```

*Managed by the [Sourcegraph Cody team](https://handbook.sourcegraph.com/departments/engineering/teams/cody).*
## embeddings: provisioning_container_cpu_usage_long_term

container cpu usage total (90th percentile over 1d) across all cores by instance

**Descriptions**

- warning embeddings: 80%+ container cpu usage total (90th percentile over 1d) across all cores by instance for 336h0m0s

**Next steps**

- **Kubernetes:** Consider increasing CPU limits in the `Deployment.yaml` for the embeddings service.
- **Docker Compose:** Consider increasing `cpus:` of the embeddings container in `docker-compose.yml`.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#embeddings-provisioning-container-cpu-usage-long-term).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_embeddings_provisioning_container_cpu_usage_long_term"
]
```

*Managed by the [Sourcegraph Cody team](https://handbook.sourcegraph.com/departments/engineering/teams/cody).*
## embeddings: provisioning_container_memory_usage_long_term

container memory usage (1d maximum) by instance

**Descriptions**

- warning embeddings: 80%+ container memory usage (1d maximum) by instance for 336h0m0s

**Next steps**

- **Kubernetes:** Consider increasing memory limits in the `Deployment.yaml` for the embeddings service.
- **Docker Compose:** Consider increasing `memory:` of the embeddings container in `docker-compose.yml`.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#embeddings-provisioning-container-memory-usage-long-term).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_embeddings_provisioning_container_memory_usage_long_term"
]
```

*Managed by the [Sourcegraph Cody team](https://handbook.sourcegraph.com/departments/engineering/teams/cody).*
## embeddings: provisioning_container_cpu_usage_short_term

container cpu usage total (5m maximum) across all cores by instance

**Descriptions**

- warning embeddings: 90%+ container cpu usage total (5m maximum) across all cores by instance for 30m0s

**Next steps**

- **Kubernetes:** Consider increasing CPU limits in the relevant `Deployment.yaml`.
- **Docker Compose:** Consider increasing `cpus:` of the embeddings container in `docker-compose.yml`.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#embeddings-provisioning-container-cpu-usage-short-term).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_embeddings_provisioning_container_cpu_usage_short_term"
]
```

*Managed by the [Sourcegraph Cody team](https://handbook.sourcegraph.com/departments/engineering/teams/cody).*
## embeddings: provisioning_container_memory_usage_short_term

container memory usage (5m maximum) by instance

**Descriptions**

- warning embeddings: 90%+ container memory usage (5m maximum) by instance

**Next steps**

- **Kubernetes:** Consider increasing the memory limit in the relevant `Deployment.yaml`.
- **Docker Compose:** Consider increasing `memory:` of the embeddings container in `docker-compose.yml`.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#embeddings-provisioning-container-memory-usage-short-term).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_embeddings_provisioning_container_memory_usage_short_term"
]
```

*Managed by the [Sourcegraph Cody team](https://handbook.sourcegraph.com/departments/engineering/teams/cody).*
## embeddings: container_oomkill_events_total

container OOMKILL events total by instance

**Descriptions**

- warning embeddings: 1+ container OOMKILL events total by instance

**Next steps**

- **Kubernetes:** Consider increasing the memory limit in the relevant `Deployment.yaml`.
- **Docker Compose:** Consider increasing `memory:` of the embeddings container in `docker-compose.yml`.
- More help interpreting this metric is available in the [dashboards reference](dashboards#embeddings-container-oomkill-events-total).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_embeddings_container_oomkill_events_total"
]
```

*Managed by the [Sourcegraph Cody team](https://handbook.sourcegraph.com/departments/engineering/teams/cody).*
## embeddings: go_goroutines

maximum active goroutines

**Descriptions**

- warning embeddings: 10000+ maximum active goroutines for 10m0s

**Next steps**

- More help interpreting this metric is available in the [dashboards reference](dashboards#embeddings-go-goroutines).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_embeddings_go_goroutines"
]
```

*Managed by the [Sourcegraph Cody team](https://handbook.sourcegraph.com/departments/engineering/teams/cody).*
## embeddings: go_gc_duration_seconds

maximum go garbage collection duration

**Descriptions**

- warning embeddings: 2s+ maximum go garbage collection duration

**Next steps**

- Learn more about the related dashboard panel in the [dashboards reference](dashboards#embeddings-go-gc-duration-seconds).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "warning_embeddings_go_gc_duration_seconds"
]
```

*Managed by the [Sourcegraph Cody team](https://handbook.sourcegraph.com/departments/engineering/teams/cody).*
## embeddings: pods_available_percentage

percentage pods available

**Descriptions**

- critical embeddings: less than 90% percentage pods available for 10m0s

**Next steps**

- Determine if the pod was OOM killed using `kubectl describe pod embeddings` (look for `OOMKilled: true`) and, if so, consider increasing the memory limit in the relevant `Deployment.yaml`.
- Check the logs before the container restarted to see if there are `panic:` messages or similar using `kubectl logs -p embeddings`.
- Learn more about the related dashboard panel in the [dashboards reference](dashboards#embeddings-pods-available-percentage).
- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

```json
"observability.silenceAlerts": [
  "critical_embeddings_pods_available_percentage"
]
```

*Managed by the [Sourcegraph Cody team](https://handbook.sourcegraph.com/departments/engineering/teams/cody).*