Cycle Time in Software Development: A Complete Guide
Introduction
Cycle time is one of the most critical metrics in software development, yet many teams either don't measure it or confuse it with other delivery metrics. At its core, cycle time measures something straightforward: the elapsed time from when a developer starts coding to when the code reaches production. Software borrowed the concept from lean manufacturing, where it tracks the efficiency with which raw materials are turned into finished products. In software, it reveals how efficiently your team turns requirements into deployed, working code.
The real power of cycle time is diagnostic. While other metrics obscure what's happening in your pipeline, cycle time cuts through the noise and reveals every bottleneck. Is code review the problem? CI/CD? Testing? Deployment gates? Break it down by phase, and you know exactly where work stalls. For engineering leaders, it's less a scorecard than an X-ray of your development process. This guide covers how to measure it accurately, what good looks like in different contexts, and practical strategies to reduce it.
What Is Cycle Time in Software Development?
Cycle time measures the duration from when a developer starts working on a change to when it's live in production. Everything counts: creating a branch, writing code, submitting for review, waiting for approval, running through CI/CD, and deploying. The concept comes from lean manufacturing, where it tracks the time to transform raw materials into a finished product. In software, we're transforming requirements into deployed code.
Here's what matters: cycle time includes everything from first commit to production, but it excludes planning and prioritization. It starts when coding begins, not when someone thought about the problem or created a ticket. Every delay counts: PR approvals, CI/CD runs, and deployment gates. This is what makes it actionable (it captures every friction point between "started working" and "users can see it").
Lean thinking says the goal is to move work through the system with minimal friction, and high cycle time almost always means work is sitting in queues rather than being actively worked on. It's waiting for a reviewer who's busy with other tasks. It's waiting for a build that takes too long. It's waiting for someone to click "deploy." By measuring cycle time and tracking it over time, you can see whether your process changes are actually working, or just shifting the bottleneck to a different queue.

Cycle Time vs. Lead Time: What's the Difference?
Lead time and cycle time are frequently confused, and the distinction matters because it changes what you can optimize and who's responsible for improvement.
Lead time starts when a request or requirement is first created and ends when code is in production. It includes everything before development begins: planning discussions, the time a ticket sits in the backlog, and prioritization delays. Cycle time starts when someone actually begins working on it and ends at deployment. The practical implication is that your engineering team can directly control cycle time through process improvements, while lead time also reflects decisions made by product, project management, and leadership about prioritization and planning.
Here's a concrete example: a customer reports a bug on Monday morning. The bug sits in your backlog until Wednesday, when someone triages it. A developer picks it up Thursday morning, and the fix reaches production Friday afternoon. Lead time is five days (Monday to Friday). Cycle time is roughly 1.5 days (Thursday morning to Friday afternoon). Both numbers are useful, but they tell different stories. Lead time reflects your overall responsiveness to customers and stakeholders. Cycle time reflects the efficiency of your engineering process once work begins.
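The scenario above is just timestamp arithmetic. A minimal Python sketch, with illustrative dates matching the bug-report example (note that elapsed time comes out slightly lower than the inclusive calendar-day counts in the prose):

```python
from datetime import datetime

# Illustrative timestamps for the bug-report scenario
reported = datetime(2024, 1, 1, 9, 0)    # Monday morning: customer reports bug
started = datetime(2024, 1, 4, 9, 0)     # Thursday morning: developer picks it up
deployed = datetime(2024, 1, 5, 15, 0)   # Friday afternoon: fix reaches production

lead_time = deployed - reported    # everything from request to production
cycle_time = deployed - started    # only the time work was in progress

print(f"Lead time:  {lead_time.total_seconds() / 86400:.2f} days")   # 4.25 days
print(f"Cycle time: {cycle_time.total_seconds() / 86400:.2f} days")  # 1.25 days
```

The same two subtractions work regardless of where the timestamps come from; only the choice of start event differs.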

The gap between these two tells you where to look. If cycle time is one day but lead time is three weeks, your problem isn't engineering, it's planning and prioritization. If they're close, work gets picked up immediately, which could mean you're responsive or just too reactive to plan proactively.
How to Calculate Cycle Time
Track timestamps at each phase as work moves through your development workflow. Your issue tracker, version control, CI/CD, and deployment tools all provide this data.
The Cycle Time Formula
Cycle Time = Time Code Reaches Production − Time Developer Started Coding
The formula is simple, but the real value comes from breaking it down by phase and by work type. Most teams don't have a single cycle time (they have different cycle times for bug fixes, features, and hotfixes, and different timings for the coding, review, testing, and deployment phases within each). Tracking these separately is where you discover the actual bottleneck hiding inside your aggregate number.
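The formula and the per-work-type breakdown can be sketched in a few lines of Python. The timestamps and change list here are hypothetical, not pulled from any real tool:

```python
from datetime import datetime
from statistics import median

def cycle_time_hours(first_commit: datetime, deployed: datetime) -> float:
    """Cycle Time = time code reaches production - time coding started."""
    return (deployed - first_commit).total_seconds() / 3600

# Hypothetical change: coding started Monday 10:00, live Wednesday 16:30
print(cycle_time_hours(datetime(2024, 5, 6, 10, 0),
                       datetime(2024, 5, 8, 16, 30)))  # 54.5

# Track work types separately: one aggregate number hides the spread
changes = [("bugfix", 6.0), ("feature", 52.0), ("hotfix", 1.5), ("feature", 70.0)]
by_type: dict[str, list[float]] = {}
for kind, hours in changes:
    by_type.setdefault(kind, []).append(hours)
print({kind: median(hours) for kind, hours in by_type.items()})
# {'bugfix': 6.0, 'feature': 61.0, 'hotfix': 1.5}
```

Medians are used rather than means because a single outlier (one PR that sat for a week) can otherwise dominate the aggregate.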
Breaking Down the Phases
Cycle time breaks down into distinct phases, each with its own optimization opportunities.
Coding time is how long the developer spends writing and locally testing their change, usually a few hours to a few days, depending on complexity. Review time starts when the pull request is created and ends when it's approved; this is often the longest phase and the one where most teams find their biggest improvement opportunities. CI/CD time covers automated tests and builds, and while individual runs may seem fast, a slow pipeline compounds across every change the team makes. Deployment time is the final step, how long it takes for approved, tested code to reach production. Continuous deployment teams see this measured in seconds; teams with manual approval gates often see hours or days.
The breakdown is essential because aggregate cycle time can be misleading. If your total cycle time is three days, you need to know whether that's one day of coding, one day of review, and one day of deployment, or three days stuck waiting for someone to approve a pull request. The former suggests multiple areas to optimize; the latter points to one clear bottleneck. Document timestamps at each transition point: first commit, PR creation, review approval, CI completion, production deployment. Modern development platforms (like GitHub, GitLab, and Jira) capture much of this automatically.
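Given timestamps at each transition point, the phase breakdown is a series of subtractions. A sketch with made-up timestamps for a single change:

```python
from datetime import datetime

# One change's transition timestamps (illustrative values)
ts = {
    "first_commit":    datetime(2024, 5, 6, 10, 0),
    "pr_created":      datetime(2024, 5, 6, 17, 0),
    "review_approved": datetime(2024, 5, 8, 11, 0),
    "ci_completed":    datetime(2024, 5, 8, 12, 0),
    "deployed":        datetime(2024, 5, 8, 16, 30),
}

PHASES = [  # (phase name, start event, end event)
    ("coding",     "first_commit",    "pr_created"),
    ("review",     "pr_created",      "review_approved"),
    ("ci",         "review_approved", "ci_completed"),
    ("deployment", "ci_completed",    "deployed"),
]

for name, start, end in PHASES:
    hours = (ts[end] - ts[start]).total_seconds() / 3600
    print(f"{name:>10}: {hours:5.1f} h")

total = (ts["deployed"] - ts["first_commit"]).total_seconds() / 3600
print(f"{'total':>10}: {total:5.1f} h")  # 54.5 h, of which 42 h is review
```

In this fabricated example the aggregate looks unremarkable until the breakdown shows that review wait time dwarfs every other phase, which is exactly the kind of finding the aggregate number hides.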
What Is a Good Cycle Time?
Context matters. Your domain, team size, code quality, and technical debt all affect benchmarks. But reference points show you whether you're in a reasonable range and where improvements will help most.
Industry Benchmarks
The State of DevOps Report from Google Cloud provides the most widely cited benchmarks. Elite-performing teams deploy multiple times per day with average cycle times measured in hours. High-performing teams typically achieve cycle times under one day. Mid-tier teams range from one week to one month, and low-performing teams often exceed one month.
A fintech company dealing with regulatory requirements will have longer cycle times than a startup shipping a consumer app. Infrastructure work takes longer than features. Hotfixes should move faster than planned features. The right question isn't "is our cycle time good?" but "is it improving, and do we know where work stalls?"
DORA Performance Levels
The DORA metrics framework categorizes teams into four performance tiers based on deployment frequency, lead time, and related signals. These tiers correlate closely with cycle time.
Elite performers didn't stumble into short cycle times. They systematically eliminated handoffs: reviewers are responsive and have the context they need, CI/CD pipelines are fast because the team invested in infrastructure, and deployment is automated and trusted. Each is a deliberate investment, not culture alone.
Why Cycle Time Matters for Engineering Teams
Cycle time matters because it's one of the few metrics your engineering team can directly influence through process changes and tooling decisions. Some delivery metrics depend on factors outside engineering's control, such as stakeholder decisions, market timing, and budget constraints. Cycle time reflects your team's engineering capabilities and processes.

Shorter cycle time means faster feedback and data-driven decisions. Ship a feature in days instead of months, and you're responding to real user behavior, not assumptions. You iterate faster, compete better, and validate hypotheses quicker. As codebases grow and AI tools accelerate code volume, maintaining short cycles takes intentional effort, but the payoff is real.

For teams, shorter cycle time boosts morale. Developers hate watching code sit in review or deployment queues. When work moves fast, they see their changes reach users sooner, stay focused on one task at a time, and avoid the context-switching pain of juggling multiple waiting branches.

Cycle time is also an early warning. When it creeps up, something's wrong: code quality declining, tests slowing, deployment getting complex, or the team spread thin. Catch it early, and address it before the slowdown becomes structural.
How to Reduce Cycle Time
This isn't about working faster or cutting corners. It's about eliminating the friction at each phase that makes work slow.
Identify and Remove Bottlenecks
Start by seeing where time actually goes. Take ten to twenty recent deployments and calculate cycle time by phase. You'll almost certainly find one or two phases dominate: code review averaging four days, CI pipelines taking two hours, or approval gates adding a day of latency. That's your bottleneck. Fix it first.
The temptation is to optimize everything equally, but this is almost always a mistake. If your CI pipeline takes one hour and code review takes four days, cutting CI time by 50% saves 30 minutes per change. Cutting review time by 50% saves two days per change. Focus on the biggest bottleneck first, fix it, and then move to the next one.
Identifying bottlenecks requires visibility. Use your existing tools: Jira, GitHub, GitLab, or specialized metrics platforms. Pull a month of data to separate real trends from noise.
Optimize Code Review Processes
Code review is where cycle time dies in most organizations. Reviews matter for quality and knowledge sharing, but the friction surrounding them kills speed. The problem isn't the review itself; it's everything else.
Start with expectations. Establish a team norm that code reviews get turned around within a few hours, not days. For distributed teams across time zones, this requires intentional structure: review rotations where one person is the designated reviewer each day, or explicit SLAs on review turnaround that the team tracks and discusses in retrospectives.
Make reviews smaller. Large PRs take exponentially longer and invite rounds of back-and-forth. Target PRs reviewable in under 15 minutes. Anything longer should be split into independently reviewable pieces.
Third, and most impactful: give reviewers the context they need. When reviewers don't understand the surrounding codebase, reviews stall. They ask more questions, request more changes, and hesitate to approve because they can't see downstream effects. In large, multi-repository codebases, a single change can ripple across services. Sourcegraph's code intelligence lets reviewers navigate across repos, trace dependencies, and see the full impact without leaving their review. Instant answers to "what else does this affect?" and "how is this pattern used elsewhere?" speed reviews and catch real issues. Booking.com's engineering team reported measurable improvements in productivity and time savings after giving their teams this visibility.
Improve CI/CD Pipelines
Pipelines should complete in minutes, not hours. If builds regularly exceed 30 minutes, that's worth fixing, especially because pipeline time compounds across every change, every day.
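The compounding is easy to quantify. A back-of-the-envelope sketch; the team size and run counts here are assumptions, not benchmarks:

```python
pipeline_minutes = 30      # per CI run
runs_per_dev_per_day = 6   # pushes, PR updates, retries (assumed)
developers = 10

# Accumulated pipeline time the team sits behind every day
daily_pipeline_hours = pipeline_minutes * runs_per_dev_per_day * developers / 60
print(daily_pipeline_hours)  # 30.0 hours of pipeline latency per day
```

Not all of that is idle waiting, but every one of those runs is a delay between "pushed" and "knew the result", which is exactly the feedback loop cycle time measures.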
Start by profiling what takes the most time. Is it unit tests? Integration tests? Container builds? Staging deployments? Once you know, optimize accordingly. Common improvements include parallelizing test execution, caching dependencies between runs, using faster compute infrastructure, and removing redundant or low-value checks.
Restructure into stages: a fast-feedback stage that catches obvious issues within minutes (linting, unit tests, compilation) and a thorough stage with slower but comprehensive checks (integration tests, security scans, benchmarks). Developers get a quick signal while deeper validation runs in parallel. Don't gate everything on the slowest check.
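The two-stage idea can be simulated in a few lines. This is a toy model, not a real CI configuration: `run_check` just sleeps to stand in for a job's runtime, and the check names and durations are invented:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def run_check(name: str, seconds: float) -> bool:
    """Stand-in for a real CI job; sleep simulates its runtime."""
    time.sleep(seconds)
    return True  # a real check would return pass/fail

FAST = [("lint", 0.1), ("unit tests", 0.2)]                  # minutes in real life
THOROUGH = [("integration tests", 0.8), ("security scan", 0.6)]

start = time.time()
with ThreadPoolExecutor() as pool:
    # Kick off the slow checks immediately so they run in parallel...
    slow = [pool.submit(run_check, name, secs) for name, secs in THOROUGH]
    # ...while the fast stage gives developers a quick signal.
    fast_ok = all(run_check(name, secs) for name, secs in FAST)
    print(f"fast signal after {time.time() - start:.1f}s: "
          f"{'pass' if fast_ok else 'fail'}")
    thorough_ok = all(job.result() for job in slow)
print(f"full result after {time.time() - start:.1f}s: "
      f"{'pass' if fast_ok and thorough_ok else 'fail'}")
```

The fast signal arrives after the fast stage alone, while the full result arrives after the longest thorough check rather than the sum of all checks, which is the payoff of not gating everything on the slowest one.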
Reliability matters as much as speed. Flaky tests that randomly fail kill trust. Developers start ignoring failures or verifying manually, defeating automation. Track flaky tests and fix or quarantine them aggressively.
Reduce Work-in-Progress (WIP)
High WIP creates invisible delays. A developer with four branches in flight moves none of them quickly. Context switching between tasks kills focus. Research shows that each additional concurrent task reduces throughput on all of them.
Set WIP limits explicitly: per developer ("one task at a time") or per team ("max N active tasks"). It sounds risky, but it improves cycle time. Fewer competing tasks mean each piece moves faster. WIP limits also force teams to address blockers instead of starting new work, driving systemic improvements.
Tools for Measuring Cycle Time
You don't need specialized tools to start. Your existing infrastructure captures most of the data. But how you aggregate and analyze matters.
Version control (GitHub, GitLab) records commits, PR opens, and merges. Issue tracking (Jira) records when work starts and links code changes. CI/CD platforms (Jenkins, GitHub Actions, GitLab CI) log build times and test results. Deployment tools record production timing. Combine these timestamps for the end-to-end cycle time by phase.
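Stitching these timestamps together is mostly date parsing. A sketch where the records are hardcoded illustrations shaped like the GitHub pull-request API payload (`created_at`, `merged_at` as ISO 8601 strings) rather than a live API call:

```python
from datetime import datetime

def parse(ts: str) -> datetime:
    # GitHub-style ISO 8601 timestamps, e.g. "2024-05-06T17:00:00Z"
    return datetime.fromisoformat(ts.replace("Z", "+00:00"))

# Illustrative records shaped like GitHub's pull-request API response
pulls = [
    {"number": 101, "created_at": "2024-05-06T17:00:00Z",
     "merged_at": "2024-05-08T11:00:00Z"},
    {"number": 102, "created_at": "2024-05-07T09:00:00Z",
     "merged_at": "2024-05-07T15:30:00Z"},
]

for pr in pulls:
    open_to_merge = parse(pr["merged_at"]) - parse(pr["created_at"])
    print(f"PR #{pr['number']}: "
          f"{open_to_merge.total_seconds() / 3600:.1f} h open-to-merge")
```

Join the same kind of records from your issue tracker (work started) and deployment tool (live in production) on the change identifier, and you have end-to-end cycle time by phase.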
For automated measurement, platforms like Velocity, LinearB, Haystack, and DORA dashboards pull data and calculate cycle time automatically with trend analysis and team breakdowns.
Where these platforms show you how long things take, Sourcegraph helps you understand why. When cycle time reveals a review bottleneck, Sourcegraph's code intelligence helps reviewers dig into causes. Trace how a change ripples across repositories. Navigate unfamiliar code to understand context. See the downstream dependencies the diff hides. Instead of asking colleagues or cloning repos, reviewers get visibility to approve confidently. Deep Search extends this further with AI-powered natural-language questions about the codebase. Understanding the "why" behind your metrics turns measurement into actual improvement.
Getting started? Calculate cycle time manually from Jira and GitHub for a couple of weeks. Identify your worst bottleneck, improve it, then invest in automated tools once you've seen the value.
Conclusion
Cycle time is the metric that actually tells you something about your engineering process. It's not vanity or surveillance, it's a diagnostic. Long cycle time? Something specific is creating friction, and you can fix it. Short cycle time? You've optimized the workflow that matters most.
Not measuring? Start this week. Take your last ten deployments, document timestamps at each phase, calculate totals, and per-phase breakdowns. You'll see where work stalls. Pick the biggest bottleneck, form a hypothesis, implement a change, and measure again.
Elite teams don't restructure or overhaul. They improve continuously, driven by data. Teams that sustain short cycles at scale invest in codebase understanding alongside pipeline measurement. Knowing that code review is slow is useful. Knowing why it's slow and having tools to fix it moves the needle.
Every team can improve. It's not harder work or headcount; it's removing friction from your process. Whether you're optimizing from a high baseline or just starting to measure, improvements are usually closer than you think. In markets where speed matters, a 20% reduction in cycle time is a real competitive advantage.
Want to understand the code behind your bottlenecks? Try Sourcegraph Code Search to see how cross-repository intelligence helps teams move faster, or schedule a demo to see how enterprise teams cut cycle time at scale.