Engineering Benchmarks to Identify Process Improvements

Benchmarking Engineering

Why Benchmarks?

Without data, you're just another engineering leader with an opinion about how your team is performing. Benchmarking engineering processes and comparing your processes with those of elite teams helps you form data-backed insights.

Looking at data is great, and metrics help you visualize your team's performance. But the first questions that come up after looking at the metrics are:

1. Which metrics are good and which are bad?
2. What are other engineering teams doing?
3. What should I change to improve?

Benchmarking your engineering processes against those of high-performing teams helps you uncover insights and drive a transformation within your teams toward excellence.
Metrics + Benchmarks = Actionable Insights

“Benchmarking engineering processes helps you bring objectivity to productivity and quickly identify your team's strengths and what needs improvement.”

Engineering Benchmarks

Pull Request Merge Time

The time it takes for a pull request to be merged, that is, from the time it is created until it is merged into the target branch. Faster merging leads to delivering customer value faster.

Benchmark: < 2 days

Pull Request Size

The total number of code changes reviewed and merged in a pull request. Smaller pull requests lead to faster reviews and higher bug discovery efficiency.

Benchmark: < 400 lines of code

Pull Request Reviews

A pull request is considered reviewed when the changes have been verified by a peer. Code reviews are the first and most effective line of defense against production defects.

Benchmark: > 70% of PRs should have a review or approval

Making Progress vs Keeping the Lights On

Making Progress: bigger initiative items that deliver customer value or enhance your team's capabilities. Keeping the Lights On: operational tasks, production support, bug fixes, and everything else not defined as making progress.

Benchmark: 70 / 30, i.e. 70% of effort on making progress

Review Workload Distribution

Measures how well the code review workload is distributed amongst a development team. Identified by the percentage of a development team that has meaningful involvement in the code review process.

Benchmark: > 60% of the team reviewing code

Sprint Scope Completion

The percentage of tasks completed out of those taken up at the start of the sprint.

Benchmark: > 75% scope completion

Sprint Duration

The time from the start to the end of a sprint: the period in which a scrum team plans, develops, delivers, and iterates. Too short increases overhead; too long means you are iterating more slowly.

Benchmark: ≤ 3 weeks per sprint

Parallel Sprints

Parallel sprints occur when one scrum team has multiple sprints running at the same time in Jira.

Benchmark: shouldn't exist

Developer Meeting Time

The percentage of working hours developers spend in meetings. Less meeting time means more uninterrupted time for development, and therefore higher developer productivity.

Benchmark: < 30% of time spent in meetings

Linking of Commits

Commit linking is the practice of linking commits to their related Jira tickets by mentioning a ticket number/task ID in every commit or pull request. It improves visibility and drives accountability.

Benchmark: > 70% of commits should be linked

Work In Progress (WIP) tickets

The number of in-progress tasks a developer is working on in parallel. High work in progress (WIP) increases context switching, which in turn reduces productivity.

Benchmark: ≤ 3 WIP tickets

Bug Cycle Time

The time it takes for a bug to be resolved from the moment it is prioritized and picked up. A smaller bug cycle time indicates an agile team and a quick turnaround when issues occur.

Benchmark: < 2 days

Knowledge Dependency

When a part of the codebase is known only to a single developer, who may still be on the team or may have left. Low knowledge distribution leads to knowledge silos and creates hard dependencies.

Benchmark: ≤ 15% of the codebase uniquely known

Onboarding Time

The time it takes for new engineers to onboard onto a team and start making meaningful contributions.

Benchmark: < 30 days

Tenure

Tenure is the amount of time a person works for an organization. The average tenure is ~16 months, which puts you in the bottom quartile if tenure is less than a year.

Benchmark: ≥ 1 year

CI Build Time

The time it takes to run a CI build step, which gives feedback on whether a committed change can be successfully integrated. Longer CI build times mean longer developer wait times, which lead to frequent context switching and hurt efficiency and flow.

Benchmark: < 20 minutes

Deployment Frequency (DORA)

The frequency with which your team is deploying to production or delivering customer value.

Benchmark: ≥ 1 per day

Change Lead Time (DORA)

The time it takes for a change to reach your end users from the time it was made.

Benchmark: 1 day – 1 week

Importance And Actions to Improve


Pull Request Merge Time
Why this Benchmark matters?
1. Leads to faster delivery.
2. A positive sign of communication and collaboration within the team.
3. Increases stability and agility.
Actions you can Take to Improve
1. Move away from offline reviews and track review tasks centrally in a tool.
2. Get notified about long-running PRs (see the sketch below).
3. Automate style checks and standards.
4. Create checklists and templates for PR descriptions.
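
A minimal sketch of the notification idea in item 2, assuming a GitHub-hosted repository and a GITHUB_TOKEN environment variable; the repository name and the 2-day threshold are placeholders you would adapt to your own setup:

```python
# Flag open pull requests that exceed the 2-day merge-time benchmark.
import os
from datetime import datetime, timezone

import requests

REPO = "your-org/your-repo"   # placeholder repository
THRESHOLD_HOURS = 48          # the "< 2 days" benchmark

resp = requests.get(
    f"https://api.github.com/repos/{REPO}/pulls",
    params={"state": "open", "per_page": 100},
    headers={"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"},
)
resp.raise_for_status()

now = datetime.now(timezone.utc)
for pr in resp.json():
    created = datetime.fromisoformat(pr["created_at"].replace("Z", "+00:00"))
    open_hours = (now - created).total_seconds() / 3600
    if open_hours > THRESHOLD_HOURS:
        print(f"PR #{pr['number']} open for {open_hours:.0f}h: {pr['title']}")
```

You could run this on a schedule and pipe the output to chat or email; the same idea applies to GitLab or Bitbucket with their respective APIs.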
Pull Request Size
Why this Benchmark matters?
1. The efficiency of bug discovery decreases as PR size increases.
2. Smaller PRs = effective and faster reviews = faster delivery and fewer bugs.
3. The risk of something going wrong grows with larger changes.
Actions you can Take to Improve
1. Educate your team about the importance of small PRs.
2. One PR = one responsibility.
3. Add CI checks that fail oversized pull requests (see the sketch below).
4. Use feature flags.
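
A sketch of the CI gate in item 3, assuming the pipeline has the target branch available as origin/main and runs this script as a step; the 400-line limit mirrors the benchmark above:

```python
# CI step: fail the build when the pull request diff exceeds ~400 changed lines.
import subprocess
import sys

MAX_LINES = 400
BASE = "origin/main"  # assumed target branch; adjust per repository

numstat = subprocess.run(
    ["git", "diff", "--numstat", f"{BASE}...HEAD"],
    capture_output=True, text=True, check=True,
).stdout

total = 0
for line in numstat.splitlines():
    added, deleted, _path = line.split("\t", 2)
    if added.isdigit() and deleted.isdigit():  # binary files show "-"
        total += int(added) + int(deleted)

if total > MAX_LINES:
    print(f"This PR touches {total} lines (limit {MAX_LINES}); consider splitting it.")
    sys.exit(1)
print(f"PR size OK: {total} changed lines.")
```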
Pull Request Reviews
Why this Benchmark matters?
1. The most effective way to reduce production bugs is code review.
2. Self-merging code is a common anti-pattern that leads to more bugs.
Actions you can Take to Improve
1. Set up protected branches and merge rules.
2. Get notified when un-reviewed PRs get merged (see the sketch below).
3. Add checks in pipeline steps.
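
A sketch of the notification idea in item 2, again assuming GitHub and a GITHUB_TOKEN environment variable; the repository name is a placeholder. It reports what share of recently merged PRs had at least one review:

```python
# Report the share of recently merged PRs that had at least one review.
import os

import requests

REPO = "your-org/your-repo"   # placeholder repository
HEADERS = {"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"}

prs = requests.get(
    f"https://api.github.com/repos/{REPO}/pulls",
    params={"state": "closed", "per_page": 50},
    headers=HEADERS,
).json()

merged = [pr for pr in prs if pr.get("merged_at")]
reviewed = 0
for pr in merged:
    reviews = requests.get(
        f"https://api.github.com/repos/{REPO}/pulls/{pr['number']}/reviews",
        headers=HEADERS,
    ).json()
    if reviews:
        reviewed += 1

if merged:
    share = 100 * reviewed / len(merged)
    print(f"{reviewed}/{len(merged)} merged PRs reviewed ({share:.0f}%, benchmark > 70%)")
```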
Making Progress vs Keeping the Lights On
Why this Benchmark matters?
1. Viewing effort in these two categories is a crucial mindset.
2. Enables you to take action when your support overhead increases.
3. Connects engineering to customers.
4. Impacts team satisfaction.
Actions you can Take to Improve
1. Build awareness about viewing effort in these two major categories.
2. Initiate conversations, ideate, and solve problems when support effort goes beyond acceptable limits.
3. Visualize effort per week/sprint/quarter across these two categories.
Review Workload Distribution
Why this Benchmark matters?
1. Peer code review distributes knowledge and is faster.
2. Shared responsibility across the team.
3. Increases review effectiveness
Actions you can Take to Improve
1. Educate the team and enable peer code review.
2. Measure distribution and automate notifications to the team (see the sketch below).
3. Introduce new developers to code review early on.
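
A minimal sketch of the measurement in item 2. The team list, review counts, and the threshold for "meaningful involvement" (here, at least 3 reviews in the period) are illustrative assumptions; in practice the counts would come from your Git provider or a tool like AnalyticsVerse:

```python
# Share of the team with "meaningful" review involvement over a period.
team = ["asha", "ben", "chen", "dmitri", "eva"]
reviews_by_dev = {"asha": 12, "ben": 4, "chen": 1, "dmitri": 0, "eva": 7}  # example data

MEANINGFUL_REVIEWS = 3  # team-chosen threshold for "meaningful involvement"
involved = [dev for dev in team if reviews_by_dev.get(dev, 0) >= MEANINGFUL_REVIEWS]

share = 100 * len(involved) / len(team)
print(f"{share:.0f}% of the team reviewing code (benchmark > 60%)")
```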
Sprint Scope Completion
Why this Benchmark matters?
1. Better predictability over delivery timelines.
2. Indicates an efficient planning process and fewer scope changes.
3. Higher completion = Teams with higher satisfaction & motivation.
Actions you can Take to Improve
1. Start with better planning.
2. Account for your capacity.
3. Reduce scope changes, and get notified when they exceed a limit.
4. Avoid last-minute surprises by measuring progress in standups (a quick calculation sketch follows this list).
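
The calculation itself is simple arithmetic; the numbers below are examples you would replace with your board's figures at sprint close:

```python
# Sprint scope completion: completed vs. committed issues (example numbers).
committed_at_start = 20   # issues in the sprint at kickoff
completed = 16            # of those, done by sprint close
added_mid_sprint = 5      # scope changes, worth tracking separately

completion = 100 * completed / committed_at_start
scope_change = 100 * added_mid_sprint / committed_at_start
print(f"Scope completion: {completion:.0f}% (benchmark > 75%)")
print(f"Scope added mid-sprint: {scope_change:.0f}%")
```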
Sprint Duration
Why this Benchmark matters?
1. The core sentiment of agile is enabling faster feedback cycles.
2. Longer sprints delay the discovery of gaps between engineering and product.
3. Planning for longer durations is generally abstract and leads to blanket timelines.
Actions you can Take to Improve
1. Change your team’s sprint duration to be less than 3 weeks.
Parallel Sprints
Why this Benchmark matters?
1. Parallel sprints = complete confusion.
2. A single team cannot meaningfully deliver value in two cycles at once.
3. Added overhead of scrum activities.
4. Developer productivity gets affected due to context switching.
Actions you can Take to Improve
1. Reorganize your team and Jira boards to have 1 active sprint for a scrum team.
Developer Meeting Time
Why this Benchmark matters?
1. Meeting and development work compete for the finite resource “time”.
2. Switching back to a task after a meeting takes more than 20 minutes.
3. Decreases the efficiency and flow with which a developer can work.
4. The larger the organization, the more time a developer spends in meetings.
Actions you can Take to Improve
1. Mandate meeting agendas. Reject invites without one. Figure out which ones can be an email / chat thread.
2. Reduce fragmented meetings. Group them if possible.
3. Experiment with no meeting days/time slots.
Linking of Commits
Why this Benchmark matters?
1. Improves visibility and accountability.
2. Makes commit history more readable.
3. Helps you understand where effort is invested.
Actions you can Take to Improve
1. Use PR automation to verify ticket references.
2. Add pre-commit or commit-msg hooks (see the sketch below).
3. Gamify it and create leaderboards.
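
A sketch of the hook in item 2: a commit-msg hook (saved as .git/hooks/commit-msg and made executable) that rejects commits without a Jira-style ticket key. The key pattern is an assumption; adjust it to your project prefixes:

```python
#!/usr/bin/env python3
# commit-msg hook: reject commits without a Jira-style ticket key (e.g. PROJ-123).
import re
import sys

TICKET_PATTERN = re.compile(r"\b[A-Z][A-Z0-9]+-\d+\b")

# git passes the path of the commit message file as the first argument
with open(sys.argv[1], encoding="utf-8") as f:
    message = f.read()

if not TICKET_PATTERN.search(message):
    print("Commit message has no ticket reference (e.g. PROJ-123). Please add one.")
    sys.exit(1)
```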
Work In Progress (WIP) tickets
Why this Benchmark matters?
1. More parallel tasks mean more context switching: roughly a 40% productivity drop with 3 parallel tasks.
2. Driving tasks to completion is what delivers value to the end user.
3. High WIP leads to longer lead times and developer burnout.
Actions you can Take to Improve
1. Educate your team on the importance of limiting WIP.
2. Establish an acceptable WIP limit.
3. Set up automation to notify when these limits are breached (see the sketch below).
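
A sketch of the automation in item 3, assuming Jira Cloud with basic auth (email + API token); the site URL and project key PROJ are placeholders:

```python
# Flag developers holding more WIP tickets than the limit.
import os
from collections import Counter

import requests

JIRA_URL = "https://your-domain.atlassian.net"   # placeholder Jira site
AUTH = (os.environ["JIRA_EMAIL"], os.environ["JIRA_API_TOKEN"])
WIP_LIMIT = 3

resp = requests.get(
    f"{JIRA_URL}/rest/api/2/search",
    params={"jql": 'project = PROJ AND status = "In Progress"', "maxResults": 200},
    auth=AUTH,
)
resp.raise_for_status()

wip = Counter()
for issue in resp.json()["issues"]:
    assignee = issue["fields"].get("assignee")
    if assignee:
        wip[assignee["displayName"]] += 1

for dev, count in sorted(wip.items(), key=lambda kv: -kv[1]):
    if count > WIP_LIMIT:
        print(f"{dev} has {count} WIP tickets (limit {WIP_LIMIT})")
```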
Bug Cycle Time
Why this Benchmark matters?
1. Higher agility and quick turnaround time for bugs, reducing customer impact.
2. Allows you to invest more effort in new features.
3. Longer cycle times lead to an ever-increasing backlog of bugs.
4. Short cycle times are indicative of systems with lower complexity.
Actions you can Take to Improve
1. Improve tooling for better logging, observability, and error reporting.
2. Simpler architectures with better code documentation.
3. Initiate conversations to discover inherent architectural flaws.
Knowledge Dependency
Why this Benchmark matters?
1. Knowledge silos lead to higher dependency on individuals.
2. Creates bottlenecks in your delivery lifecycle.
3. Makes it difficult to distribute the support workload.
4. Hesitation to change certain parts of the codebase.
Actions you can Take to Improve
1. Distribute future changes across the team deliberately.
2. Hold knowledge-sharing sessions: 15 minutes a week with code walkthroughs of new features.
3. Establish a peer code review process (a sketch for estimating single-author code follows).
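
A rough way to estimate knowledge dependency from commit history, treating single-author files as a proxy for uniquely known code. This is an approximation and can be slow on large repositories; run it from the repository root:

```python
# Estimate the share of files with a single author in the commit history.
import subprocess

files = subprocess.run(
    ["git", "ls-files"], capture_output=True, text=True, check=True
).stdout.splitlines()

single_author = 0
for path in files:
    # Collect author emails for every commit that touched this file.
    authors = subprocess.run(
        ["git", "log", "--follow", "--format=%ae", "--", path],
        capture_output=True, text=True, check=True,
    ).stdout.split()
    if len(set(authors)) <= 1:
        single_author += 1

if files:
    share = 100 * single_author / len(files)
    print(f"{share:.0f}% of files have a single author (benchmark: keep this ≤ 15%)")
```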
Onboarding Time
Why this Benchmark matters?
1. Longer onboarding increases the time and cost of hiring a new developer.
2. The first few weeks help set expectations both ways between a developer and an organization.
3. A standard onboarding process leads to 54% more new-hire productivity; longer onboarding times indicate the lack of a standard process.
Actions you can Take to Improve
1. Standardize the onboarding process.
2. Create internal documentation and wikis around tools and internal processes.
3. Assign buddies
4. Gather feedback on the onboarding process to improve.
Tenure
Why this Benchmark matters?
1. Tenure of less than 12 months falls in the bottom quartile.
2. Low tenure can be indicative of leadership or culture problems.
3. Tenure reflects satisfaction with the type of work the team is involved in.
Actions you can Take to Improve
1. Identify the real reasons behind low tenure.
2. Design motivating and meaningful jobs; the best leaders create jobs around people, not the other way around.
3. Build a culture of trust that encourages an environment of real feedback.
CI Build Time
Why this Benchmark matters?
1. Longer developer wait times = more interruptions and context switching = lower productivity and higher burnout.
2. Teams will seek feedback less frequently, making integration even more difficult.
Actions you can Take to Improve
1. Optimize build steps with caching and reuse of previously built artifacts.
2. Use modular builds to build only what has changed.
3. Scale up your build infrastructure.
4. Invest in better build and orchestration tools.
Deployment Frequency & CLT (DORA)
Why this Benchmark matters?
1. Backed by research of 33k+ professionals spanning 8 years.
2. DORA metrics correlate engineering excellence with organizational performance.
3. They strike the right balance between speed and stability.
4. A great place to start.
Actions you can Take to Improve
Improving DORA metrics comes from better engineering practices such as:
1. Smaller pull requests.
2. Loosely coupled architecture.
3. CI/CD best practices.
4. A “generative” engineering culture.
5. Feature flags.
A sketch for computing these two metrics from deployment records follows.
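
A minimal sketch of how deployment frequency and change lead time can be computed from deployment records. The record shape and the example data are assumptions standing in for whatever your CD tooling or analytics platform exports:

```python
# Compute deployment frequency and change lead time from deployment records.
from datetime import datetime
from statistics import median

deployments = [  # example data: each deployment and the commits it shipped
    {"deployed_at": datetime(2024, 3, 4, 15, 0),
     "commit_times": [datetime(2024, 3, 3, 10, 0), datetime(2024, 3, 4, 9, 30)]},
    {"deployed_at": datetime(2024, 3, 5, 16, 0),
     "commit_times": [datetime(2024, 3, 5, 11, 0)]},
]
window_days = 2  # observation window covered by the records above

frequency = len(deployments) / window_days
lead_times_hours = [
    (d["deployed_at"] - c).total_seconds() / 3600
    for d in deployments for c in d["commit_times"]
]

print(f"Deployment frequency: {frequency:.1f}/day (benchmark ≥ 1 per day)")
print(f"Median change lead time: {median(lead_times_hours):.1f}h (benchmark 1 day – 1 week)")
```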

Why Trust These Benchmarks?

These benchmarks are based on well-accepted frameworks like DORA and the SPACE metrics, and are verified and enhanced by analyzing development-process data across hundreds of engineering teams.
Commits analyzed: 9.8M
Pull requests analyzed: 2.5M
Sprints analyzed: 123k
Total deployments: 1.4M
Developers' data analyzed: 25k

How to Drive Value Using Benchmarks?

Benchmarking Engineering processes is often the first step in identifying areas that need improvement. Elite teams share a common trait: a commitment to continuous improvement in pursuit of excellence. Optimizing processes by analyzing metrics, setting benchmarks, and taking targeted actions is the most effective path to achieving this goal.

Understand the Why

Understand each metric and benchmark. Most importantly, align on the why behind benchmarking your engineering processes so you understand its significance and its impact on your team's productivity.

Take Action

“Knowledge is only potential power; action is power.” Understand the intent behind each benchmark and its improvement actions, and experiment to figure out what works best for you.

Get your Team Onboard

Driving value is only possible when your entire engineering team understands the benefits and is on board supporting the change.

Identify and Drive Change

Your biggest win is not reading this to the last word; it is driving even one change for your engineering team.
AnalyticsVerse is a platform that helps you benchmark your engineering processes.
If you want to visualize this for your team, give us a try here.

How High-Performing Teams Do Engineering

“Benchmarks helped us quickly identify what we as a team are doing well and where we should improve.

The best part about this resource is that it doesn't just state the obvious but goes deeper and gives you practical tips to improve.”
Vivek Chhikara | Partner Engineering
