Enhancing Code Completion for Rust in Cody

Although most LLMs are trained on corpora that include several programming languages, we often observe differential performance across languages, especially in languages like Rust that are underrepresented in popular training datasets. In this post, we share early results from our efforts to improve LLM performance for code completion in such languages.

Developers use coding assistants like Cody to improve their workflows, relying on autocomplete to provide real-time code completions as they type in their IDE. As you start coding, or after you type a comment, Cody performs the following loop dozens of times per minute:

  • Gather context from your open files, file history, and the broader repository

  • Stitch together a prompt and trigger a completion request to a large language model (LLM)

  • Post-process the LLM output, truncating multi-line completions and using tree-sitter based syntactic parsing to filter out bad suggestions (see the sketch below)

  • Display the final completion suggestion to the user

Here's our blog post describing the lifecycle of an autocomplete request in detail.
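To make the syntactic filtering step concrete, here is a minimal sketch of what a tree-sitter based filter can look like. This is our own illustration, assuming the tree_sitter and tree_sitter_rust crates (0.20-era API), not Cody's actual implementation:

use tree_sitter::Parser;

/// Reject a completion if splicing it into the file produces a syntax
/// tree that contains parse errors. (Illustrative sketch only.)
fn completion_parses(prefix: &str, completion: &str, suffix: &str) -> bool {
    let mut parser = Parser::new();
    parser
        .set_language(tree_sitter_rust::language())
        .expect("failed to load the Rust grammar");
    let candidate = format!("{prefix}{completion}{suffix}");
    match parser.parse(&candidate, None) {
        Some(tree) => !tree.root_node().has_error(),
        None => false, // parse was cancelled or timed out
    }
}

A filter in this spirit cheaply discards suggestions that would leave the file syntactically broken, before the user ever sees them.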

A standard benchmark for evaluating LLMs on code completion is HumanEval, which measures the functional correctness of programs generated by LLMs from docstrings. The standard metric on this benchmark is pass@k: the probability that at least one of the top k generated code samples for a problem passes the unit tests. Here's how popular LLMs perform across a variety of languages. Note the difference in performance between Python on one hand and Rust, Ruby, and Matlab on the other:

Pass@1 Performance by Language

The figure on the left shows the pass@1 metric for proprietary models and the figure on the right presents results for OSS models. For both, there is a drastic drop in pass@1 from languages like Python and JavaScript to Ruby, PHP, Rust, and Matlab. This is likely due to the relative representation of each language in the training data and the empirical complexity of the language's syntax.
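As a concrete reference for the metric itself: if n samples are generated per problem and c of them pass the unit tests, the unbiased pass@k estimator from the HumanEval paper is 1 - C(n-c, k)/C(n, k), averaged over problems. Here is a small sketch of that computation (our illustration, not code from the benchmark harness):

/// Unbiased pass@k estimator from the HumanEval paper:
/// pass@k = 1 - C(n - c, k) / C(n, k), where n is the number of samples
/// generated for a problem and c is the number that pass the unit tests.
fn pass_at_k(n: u64, c: u64, k: u64) -> f64 {
    if n - c < k {
        return 1.0; // every size-k subset contains at least one passing sample
    }
    // Numerically stable product form: 1 - prod over i in (n-c, n] of (1 - k/i).
    let prob_none_pass: f64 = ((n - c + 1)..=n)
        .map(|i| 1.0 - k as f64 / i as f64)
        .product();
    1.0 - prob_none_pass
}

For k = 1 this reduces to c/n, the fraction of generated samples that pass, which is what the figures above report.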

An important consideration as we improve code completion quality for Cody is bridging this performance gap across languages. Specifically, we ask: can we fine-tune an LLM to produce better code completions in a language of interest? Let's find out. But first, let's consider which metrics and trade-offs matter for any LLM powering Cody's autocomplete feature.

Striking the Perfect Balance: Latency vs. Pass@1

When it comes to code completion, two metrics dominate our attention: latency and pass@1. Latency measures the response time of the coding assistant, which is crucial for surfacing results before the next keystroke. Pass@1 measures how often the generated code passes predefined unit tests on the first attempt.

Let's look at a scatter plot comparing latency and pass@1 for a few popular LLMs on Rust. The plot below shows pass@1 (x-axis) against mean latency (y-axis, decreasing upward), so the top-right is best; the green region marks latency tolerable for autocomplete.

Model Performance: Speed vs Pass@1 on Rust

We observe that GPT4-turbo and Claude-3-opus offer significantly better pass@1 performance, but their latencies, often multiple seconds, are prohibitively high; for a latency-sensitive feature like autocomplete, that is a deal breaker. Ideally, we'd like to stay under 500ms end-to-end to give developers a seamless autocomplete experience. If we focus on models within a 1000ms latency budget, mixtral-8x7b, codellama-34b-instruct, and starcoder-16b emerge as promising candidates.

Given this, the question becomes: can we fine-tune one of these latency-friendly LLMs to improve code completion quality for a language of our choice?

Fine-tuning a code completion model for Rust

Rust presents unique challenges for code completion tools due to its stringent safety constraints around concurrency and memory management. As shown earlier, general-purpose models underperform on Rust, struggling with the language's advanced syntax and semantics.

We use LoRA (Low-Rank Adaptation) for efficient and effective fine-tuning of the Mixtral 8x7b and Code Llama 34b LLMs. We built a fine-tuning dataset from Rust repositories with permissive licenses and simulated code completion requests on a random selection of Rust files within those repositories, applying a selection strategy that targets functions likely to yield non-trivial completions and masking a meaningful section of the containing file. We fine-tuned with a LoRA rank of 16 (the dimensionality of the trainable low-rank matrices, balancing model adaptability against computational cost), a batch size of 32, and a learning rate of 0.0001.
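For reference, LoRA freezes the pretrained weight matrix and learns only a low-rank additive update (this is the standard formulation; the usual scaling hyperparameter is elided here):

W' = W + B·A,  with W ∈ ℝ^(d×k) frozen, B ∈ ℝ^(d×r) and A ∈ ℝ^(r×k) trained, and rank r = 16 ≪ min(d, k)

Training only B and A reduces the number of trainable parameters per adapted matrix from d·k to r·(d + k), which is what makes fine-tuning models of this size tractable.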

In one of our experiments, we fine-tuned two base models on the same dataset: Mixtral 8x7b and Code Llama 34b. The fine-tuned Mixtral was the clear winner on our benchmarks. The figure below compares our fine-tuned Mixtral with the base Mixtral and the StarCoder-16b model.

Model Performance: Pass@1 for Rust

Before fine-tuning, our base models averaged a pass@1 of around 0.28 on Rust; the fine-tuned Mixtral jumped to over 0.39. This leap is not just numerical but reflects a qualitative improvement in how the model understands and generates Rust code, as illustrated below.

Post Fine-tuning Model Performance: Speed vs Pass@1 on Rust

Showcasing improvements using concrete examples

To illustrate the effectiveness of our fine-tuned Mixtral model, let's walk through five examples where the fine-tuned model performs better.

Example 1: Given a HashMap, return true if all keys are strings in lower case or all keys are strings in upper case, else return false. The function should return false if the given HashMap is empty.

Solution from mixtral-8x7b base model:

use std::collections::HashMap;
 
/// Given a HashMap, return true if all keys are strings in lower 
/// case or all keys are strings in upper case, else return false.
/// The function should return false if the given HashMap is empty.
fn check_dict_case(dict: HashMap<String, String>) -> bool {
 
    let mut all_lower = true;
    let mut all_upper = true;
    for (key, _) in dict {
        if !key.is_ascii_lowercase() {
            all_lower = false;
        }
        if !key.is_ascii_uppercase() {
            all_upper = false;
        }
    }
    all_lower || all_upper
}
 
Error:
error[E0599]: no method named `is_ascii_lowercase` found for struct `String` in the current scope
  --> /tmp/tmpty1spnu1.rs:22:17
   |
22 |         if !key.is_ascii_lowercase() {
   |                 ^^^^^^^^^^^^^^^^^^ help: there is a method with a similar name: `to_ascii_lowercase`

Solution from finetuned mixtral-8x7b model:

use std::collections::HashMap;
 
/// Given a HashMap, return true if all keys are strings in lower 
/// case or all keys are strings in upper case, else return false.
/// The function should return false if the given HashMap is empty.
fn check_dict_case(dict: HashMap<String, String>) -> bool {
 
    if dict.is_empty() {
        return false;
    }
    let mut all_lower = true;
    let mut all_upper = true;
    for (k, _) in dict.iter() {
        if k.chars().all(|c| c.is_ascii_lowercase()) {
            all_upper = false;
        } else if k.chars().all(|c| c.is_ascii_uppercase()) {
            all_lower = false;
        } else {
            return false;
        }
    }
    all_lower || all_upper
}

In the base model's solution, the function attempts to call the method is_ascii_lowercase() directly on the String type. No such method exists on String, which results in a compilation error. In the fine-tuned model's solution, the function iterates over the key-value pairs of the HashMap using dict.iter(), checks each character of the key using k.chars(), and applies the is_ascii_lowercase() and is_ascii_uppercase() methods to each character individually, correctly checking the case of each key. It also checks for an empty HashMap at the beginning of the function, returning false in that case.
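As an aside, the same logic can be expressed even more compactly with iterator adaptors. The following is our own sketch with the same semantics as the fine-tuned model's solution, not output from either model:

use std::collections::HashMap;

/// Our own sketch, not model output: the map is non-empty and either
/// every key is all-lowercase or every key is all-uppercase.
fn check_dict_case(dict: HashMap<String, String>) -> bool {
    !dict.is_empty()
        && (dict.keys().all(|k| k.chars().all(|c| c.is_ascii_lowercase()))
            || dict.keys().all(|k| k.chars().all(|c| c.is_ascii_uppercase())))
}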

Example 2: Return the largest prime factor of n. Assume n > 1 and is not a prime.

Solution from mixtral-8x7b base model:

/// Return the largest prime factor of n. Assume n > 1 and is not a prime.
/// >>> largest_prime_factor(13195)
/// 29
/// >>> largest_prime_factor(2048)
/// 2
fn largest_prime_factor(n: isize) -> isize {
 
    let mut i = 2;
    while i * i <= n {
        if n % i == 0 {
            n /= i;
        } else {
            i += 1;
        }
    }
    n
}
 
fn main() {
    let candidate = largest_prime_factor;
    assert_eq!(candidate(15), 5);
    assert_eq!(candidate(27), 3);
    assert_eq!(candidate(63), 7);
    assert_eq!(candidate(330), 11);
    assert_eq!(candidate(13195), 29);
}
 
 
Error:
error[E0384]: cannot assign to immutable argument `n`
  --> /tmp/tmpl4tt2zr8.rs:11:13
   |
6  | fn largest_prime_factor(n: isize) -> isize {
   |                         - help: consider making this binding mutable: `mut n`
...
11 |             n /= i;
   |             ^^^^^^ cannot assign to immutable argument
 
error: aborting due to previous error
 
For more information about this error, try `rustc --explain E0384`.

Solution from finetuned mixtral-8x7b model:

/// Return the largest prime factor of n. Assume n > 1 and is not a prime.
/// >>> largest_prime_factor(13195)
/// 29
/// >>> largest_prime_factor(2048)
/// 2
fn largest_prime_factor(n: isize) -> isize {
 
    let mut n = n;
    let mut largest_factor = 0;
    while n % 2 == 0 {
        largest_factor = 2;
        n /= 2;
    }
    let mut i = 3;
    while i * i <= n {
        if n % i == 0 {
            largest_factor = i;
            n /= i;
        } else {
            i += 2;
        }
    }
    if n > 1 {
        largest_factor = n;
    }
    largest_factor
}
 
fn main() {
    let candidate = largest_prime_factor;
    assert_eq!(candidate(15), 5);
    assert_eq!(candidate(27), 3);
    assert_eq!(candidate(63), 7);
    assert_eq!(candidate(330), 11);
    assert_eq!(candidate(13195), 29);
}

The base model's solution for finding the largest prime factor of a number fails due to an attempt to modify an immutable function argument, resulting in a compilation error (error[E0384]: cannot assign to immutable argument `n`). The fine-tuned model corrects this by introducing a mutable binding for n within the function (let mut n = n;), allowing modifications. It also improves the algorithm by tracking the largest factor found and efficiently removing factors of 2 before checking only odd candidates, yielding a correct and more efficient computation of the largest prime factor and addressing both the immutability error and the logical gaps in the base model's approach.

Example 3: You're a hungry rabbit, and you have already eaten a certain number of carrots, but now you need to eat more carrots to complete the day's meals. You should return a vector of [total number of eaten carrots after your meals, the number of carrots left after your meals]. If there are not enough remaining carrots, you will eat all remaining carrots but will still be hungry.

Solution from mixtral-8x7b base model:

/// You're a hungry rabbit, and you already have eaten a certain number of carrots,
/// but now you need to eat more carrots to complete the day's meals.
/// you should return a vector of [ total number of eaten carrots after your meals,
/// the number of carrots left after your meals ]
/// if there are not enough remaining carrots, you will eat all remaining carrots, but will still be hungry.
/// Example:
/// >>> eat(5, 6, 10)
/// vec![11, 4]
/// >>> eat(4, 8, 9)
/// vec![12, 1]
/// >>> eat(1, 10, 10)
/// vec![11, 0]
/// >>> eat(2, 11, 5)
/// vec![7, 0]
/// Variables:
/// @number : integer
/// the number of carrots that you have eaten.
/// @need : integer
/// the number of carrots that you need to eat.
/// @remaining : integer
/// the number of remaining carrots thet exist in stock
/// Constrain:
/// * 0 <= number <= 1000
/// * 0 <= need <= 1000
/// * 0 <= remaining <= 1000
/// Have fun :)
fn eat(number: isize, need: isize, remaining: isize) -> Vec<isize> {
 
    let mut result = vec![0, 0];
    let mut total = number + need;
    if total > remaining {
        result[0] = remaining;
        result[1] = total - remaining;
    } else {
        result[0] = total;
        result[1] = remaining - total;
    }
    result
}
 
fn main() {
    let candidate = eat;
    assert_eq!(candidate(5, 6, 10), vec![11, 4]);
    assert_eq!(candidate(4, 8, 9), vec![12, 1]);
    assert_eq!(candidate(1, 10, 10), vec![11, 0]);
    assert_eq!(candidate(2, 11, 5), vec![7, 0]);
    assert_eq!(candidate(4, 5, 7), vec![9, 2]);
    assert_eq!(candidate(4, 5, 1), vec![5, 0]);
}
 
 
Error:
thread 'main' panicked at /tmp/tmp_9oxwc8d.rs:43:5:
assertion `left == right` failed
  left: [10, 1]
 right: [11, 4]
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

Solution from finetuned mixtral-8x7b model:

/// You're a hungry rabbit, and you already have eaten a certain number of carrots,
/// but now you need to eat more carrots to complete the day's meals.
/// you should return a vector of [ total number of eaten carrots after your meals,
/// the number of carrots left after your meals ]
/// if there are not enough remaining carrots, you will eat all remaining carrots, but will still be hungry.
/// Example:
/// >>> eat(5, 6, 10)
/// vec![11, 4]
/// >>> eat(4, 8, 9)
/// vec![12, 1]
/// >>> eat(1, 10, 10)
/// vec![11, 0]
/// >>> eat(2, 11, 5)
/// vec![7, 0]
/// Variables:
/// @number : integer
/// the number of carrots that you have eaten.
/// @need : integer
/// the number of carrots that you need to eat.
/// @remaining : integer
/// the number of remaining carrots thet exist in stock
/// Constrain:
/// * 0 <= number <= 1000
/// * 0 <= need <= 1000
/// * 0 <= remaining <= 1000
/// Have fun :)
fn eat(number: isize, need: isize, remaining: isize) -> Vec<isize> {
 
    let mut result = vec![0, 0];
    result[0] = number + need;
    result[1] = remaining - need;
    if result[1] < 0 {
        result[0] += result[1];
        result[1] = 0;
    }
    result
}
 
fn main() {
    let candidate = eat;
    assert_eq!(candidate(5, 6, 10), vec![11, 4]);
    assert_eq!(candidate(4, 8, 9), vec![12, 1]);
    assert_eq!(candidate(1, 10, 10), vec![11, 0]);
    assert_eq!(candidate(2, 11, 5), vec![7, 0]);
    assert_eq!(candidate(4, 5, 7), vec![9, 2]);
    assert_eq!(candidate(4, 5, 1), vec![5, 0]);
}

The base model's solution for calculating the total number of carrots eaten and the carrots remaining after meals contains a logical error in how it computes and assigns values to the result vector. Specifically, it mishandles the case where the carrots needed exceed the carrots remaining: it incorrectly assigns result[0] to the remaining carrots and result[1] to the difference between total and remaining, producing inaccurate results. This causes the assertion failures in tests where the expected totals do not match the output.

In contrast, the fine-tuned model handles the calculation correctly: it first sets result[0] to the sum of number and need, and result[1] to the difference between remaining and need. It then checks whether result[1] is negative, indicating that the remaining carrots are insufficient to meet the need. If so, it adjusts result[0] by adding the negative result[1] (reducing the total eaten accordingly) and sets result[1] to zero, accurately reflecting that all remaining carrots have been consumed. This ensures the result vector correctly represents the total eaten and remaining carrots, passing all test cases without assertion errors.

In summary, the fine-tuned model provides a more robust solution by correctly adjusting for the case where the carrots needed exceed the remaining stock, ensuring the output accurately reflects the totals eaten and remaining. This correction lets it produce correct results consistently across the test scenarios.
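Incidentally, the whole computation collapses to "eat min(need, remaining) carrots". The following is our own sketch of that simplification, not output from either model:

/// Our own sketch, not model output: you eat min(need, remaining) carrots.
fn eat(number: isize, need: isize, remaining: isize) -> Vec<isize> {
    let eaten_now = need.min(remaining);
    vec![number + eaten_now, remaining - eaten_now]
}

It passes the same assertions as the fine-tuned model's solution, e.g. eat(2, 11, 5) returns vec![7, 0].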

Example 4: This function takes a vector l and returns a vector l' such that l' is identical to l at the odd indices, while its values at the even indices are equal to the values at the even indices of l, but sorted.

Solution from mixtral-8x7b base model:

/// This function takes a vector l and returns a vector l' such that
/// l' is identical to l in the odd indicies, while its values at the even indicies are equal
/// to the values of the even indicies of l, but sorted.
/// >>> sort_even(vec![1, 2, 3])
/// vec![1, 2, 3]
/// >>> sort_even(vec![5, 6, 3, 4])
/// vec![3, 6, 5, 4]
fn sort_even(l: Vec<isize>) -> Vec<isize> {
 
    let mut l = l;
    for i in (0..l.len()).step_by(2) {
        l.splice(i..i + 2, l[i..i + 2].sort());
    }
    l
}
 
fn main() {
    let candidate = sort_even;
    assert_eq!(candidate(vec![1, 2, 3]), vec![1, 2, 3]);
    assert_eq!(candidate(vec![5, 3, -5, 2, -3, 3, 9, 0, 123, 1, -10]), vec![-10, 3, -5, 2, -3, 3, 5, 0, 9, 1, 123]);
    assert_eq!(candidate(vec![5, 8, -12, 4, 23, 2, 3, 11, 12, -10]), vec![-12, 8, 3, 4, 5, 2, 12, 11, 23, -10]);
}
 
 
Error:
error[E0277]: `()` is not an iterator
  --> /tmp/tmpi1jmadus.rs:12:11
   |
12 |         l.splice(i..i + 2, l[i..i + 2].sort());
   |           ^^^^^^ `()` is not an iterator
   |
   = help: the trait `Iterator` is not implemented for `()`
note: required by a bound in `Splice`
  --> /build/rustc-kAv1jW/rustc-1.75.0+dfsg0ubuntu1~bpo0/library/alloc/src/vec/splice.rs:21:1
 
error: aborting due to previous error
 
For more information about this error, try `rustc --explain E0277`.

Solution from finetuned mixtral-8x7b model:

/// This function takes a vector l and returns a vector l' such that
/// l' is identical to l in the odd indicies, while its values at the even indicies are equal
/// to the values of the even indicies of l, but sorted.
/// >>> sort_even(vec![1, 2, 3])
/// vec![1, 2, 3]
/// >>> sort_even(vec![5, 6, 3, 4])
/// vec![3, 6, 5, 4]
fn sort_even(l: Vec<isize>) -> Vec<isize> {
 
    let mut l = l;
    let mut i = 0;
    while i < l.len() {
        if i % 2 == 0 {
            let mut j = i + 1;
            while j < l.len() {
                if j % 2 == 0 {
                    if l[i] > l[j] {
                        let temp = l[i];
                        l[i] = l[j];
                        l[j] = temp;
                    }
                }
                j += 1;
            }
        }
        i += 1;
    }
    l
}
 
fn main() {
    let candidate = sort_even;
    assert_eq!(candidate(vec![1, 2, 3]), vec![1, 2, 3]);
    assert_eq!(candidate(vec![5, 3, -5, 2, -3, 3, 9, 0, 123, 1, -10]), vec![-10, 3, -5, 2, -3, 3, 5, 0, 9, 1, 123]);
    assert_eq!(candidate(vec![5, 8, -12, 4, 23, 2, 3, 11, 12, -10]), vec![-12, 8, 3, 4, 5, 2, 12, 11, 23, -10]);
}

The base model's solution for sorting the values at the even indices of a vector contains a fundamental misunderstanding of how to modify and sort slices in Rust. Specifically, l.splice(i..i + 2, l[i..i + 2].sort()); is incorrect because sort() sorts a slice in place and returns (), which is not an iterator, leading to the error: "() is not an iterator." It is almost as if the model is pattern-matching on idioms from another language and naively interpolating them into Rust-like syntax.

In contrast, the finetuned model provides a correct implementation by explicitly sorting the elements at even indices using nested loops. The outer loop iterates through the indices of the vector, and the inner loop compares and swaps elements at even indices if they are out of order. This ensures that the even-indexed elements are sorted without altering the odd-indexed elements. The finetuned model's method is more verbose but correctly manipulates the vector by directly comparing and swapping elements, thus producing the correct sorted vector for even indices while leaving the odd indices unchanged.

In summary, the base model fails due to an incorrect use of slicing and sorting functions, resulting in a compilation error. The finetuned model correctly sorts the even-indexed elements by implementing a nested loop approach that directly compares and swaps elements, ensuring the even indices are sorted without affecting the odd indices. This solution passes all provided test cases, demonstrating its correctness and robustness.
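For comparison, idiomatic Rust would typically sort the even-indexed values out of place and write them back, avoiding nested loops entirely. The following is our own sketch, not output from either model:

/// Our own sketch, not model output: collect the even-indexed values,
/// sort them, then write them back into the even slots.
fn sort_even(l: Vec<isize>) -> Vec<isize> {
    let mut l = l;
    let mut evens: Vec<isize> = l.iter().copied().step_by(2).collect();
    evens.sort();
    for (slot, value) in l.iter_mut().step_by(2).zip(evens) {
        *slot = value;
    }
    l
}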

Example 5: triples_sum_to_zero takes a vector of integers as input. It returns true if there are three distinct elements in the vector that sum to zero, and false otherwise.

Solution from mixtral-8x7b base model:

/// triples_sum_to_zero takes a vector of integers as an input.
/// it returns true if there are three distinct elements in the vector that
/// sum to zero, and false otherwise.
/// >>> triples_sum_to_zero(vec![1, 3, 5, 0])
/// false
/// >>> triples_sum_to_zero(vec![1, 3, -2, 1])
/// true
/// >>> triples_sum_to_zero(vec![1, 2, 3, 7])
/// false
/// >>> triples_sum_to_zero(vec![2, 4, -5, 3, 9, 7])
/// true
/// >>> triples_sum_to_zero(vec![1])
/// false
fn triples_sum_to_zero(l: Vec<isize>) -> bool {
 
    let mut set = std::collections::HashSet::new();
    for i in l {
        if set.contains(&(-i)) {
            return true;
        }
        set.insert(i);
    }
    false
}
 
fn main() {
    let candidate = triples_sum_to_zero;
    assert_eq!(candidate(vec![1, 3, 5, 0]), false);
    assert_eq!(candidate(vec![1, 3, 5, -1]), false);
    assert_eq!(candidate(vec![1, 3, -2, 1]), true);
    assert_eq!(candidate(vec![1, 2, 3, 7]), false);
    assert_eq!(candidate(vec![1, 2, 5, 7]), false);
    assert_eq!(candidate(vec![2, 4, -5, 3, 9, 7]), true);
    assert_eq!(candidate(vec![1]), false);
    assert_eq!(candidate(vec![1, 3, 5, -100]), false);
    assert_eq!(candidate(vec![100, 3, 5, -100]), false);
}
 
 
Error:
thread 'main' panicked at /tmp/tmpsrj64vea.rs:29:5:
assertion `left == right` failed
  left: true
 right: false
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

Solution from finetuned mixtral-8x7b model:

/// triples_sum_to_zero takes a vector of integers as an input.
/// it returns true if there are three distinct elements in the vector that
/// sum to zero, and false otherwise.
/// >>> triples_sum_to_zero(vec![1, 3, 5, 0])
/// false
/// >>> triples_sum_to_zero(vec![1, 3, -2, 1])
/// true
/// >>> triples_sum_to_zero(vec![1, 2, 3, 7])
/// false
/// >>> triples_sum_to_zero(vec![2, 4, -5, 3, 9, 7])
/// true
/// >>> triples_sum_to_zero(vec![1])
/// false
fn triples_sum_to_zero(l: Vec<isize>) -> bool {
 
    let mut l = l;
    l.sort();
    for i in 0..l.len() {
        let mut j = i + 1;
        let mut k = l.len() - 1;
        while j < k {
            if l[i] + l[j] + l[k] == 0 {
                return true;
            } else if l[i] + l[j] + l[k] < 0 {
                j += 1;
            } else {
                k -= 1;
            }
        }
    }
    false
}
 
fn main() {
    let candidate = triples_sum_to_zero;
    assert_eq!(candidate(vec![1, 3, 5, 0]), false);
    assert_eq!(candidate(vec![1, 3, 5, -1]), false);
    assert_eq!(candidate(vec![1, 3, -2, 1]), true);
    assert_eq!(candidate(vec![1, 2, 3, 7]), false);
    assert_eq!(candidate(vec![1, 2, 5, 7]), false);
    assert_eq!(candidate(vec![2, 4, -5, 3, 9, 7]), true);
    assert_eq!(candidate(vec![1]), false);
    assert_eq!(candidate(vec![1, 3, 5, -100]), false);
    assert_eq!(candidate(vec![100, 3, 5, -100]), false);
}
 

The base model's solution for determining whether a vector contains three distinct elements that sum to zero is incorrect: it uses a hash set to check whether the negation of the current element has already been seen, which detects pairs summing to zero rather than triples. It therefore misidentifies valid and invalid inputs alike, leading to incorrect results and assertion failures.

The fine-tuned model, however, implements a correct approach: it first sorts the vector and then uses a three-pointer technique to find the triplets. It iterates through the vector with an index i and uses two additional pointers, j and k, to check the sum of the elements at those positions. If the sum equals zero, the function returns true; otherwise, it moves the pointers inward depending on whether the sum is less than or greater than zero. This ensures all possible triplets are checked efficiently.

In summary, the base model's error lies in using a hash set to find pairs rather than triplets, a logical flaw. The fine-tuned model correctly uses a sorted array and a three-pointer technique to identify triplets summing to zero, a more robust and efficient solution that produces accurate results across the test cases.

Looking at the examples above, the fine-tuned model solves coding problems more accurately and efficiently than the base model. It uses proper language constructs and correct logic, covers more edge cases, and meets each problem's requirements. It also employs efficient algorithms, as seen in the sorting and three-pointer triplet-sum examples, and its solutions are clear and robust, yielding passing test cases and fewer assertion errors. In short, the fine-tuned model demonstrates an improved grasp of problem-solving in Rust, providing more reliable solutions than the base model.

Looking ahead: enriching evals & improving other languages

The success we've seen with Rust is just the beginning, and we are excited to share our progress on other languages in the coming weeks. An important part of improving code completions is developing evaluation suites that extend well beyond current HumanEval-style benchmarks and are more representative of real-world industrial coding within complex repositories. We have developed context-aware datasets and better metrics in this direction and are excited to share more soon. We have rolled out our fine-tuned completion models to a subset of completion requests targeting the languages and request types that appear to benefit, and we are observing gains in online metrics, which we intend to share in a follow-up post in the coming weeks.

We are thrilled about the improvements made and the potential for even greater enhancements in the future. If you're as excited as we are, we invite you to try Cody in your projects by installing it for VS Code or IntelliJ at cody.dev. Your feedback is invaluable as we continue to refine and expand Cody's capabilities. Together, let's push the boundaries of what coding assistants can do and transform the software development landscape.
