Imagine it’s Monday morning, and as a CTO, you’re starting your week reviewing the progress of your development teams. You notice that despite the overtime and extra effort, your team’s delivery pace is not meeting market demands.
Features that should take weeks to deploy are dragging into months. Moreover, each release seems to introduce as many bugs as improvements, leading to frustrated clients and a demoralized team.
You’re not alone in this struggle. Many tech leaders face these challenges, watching as their teams grapple with inefficiencies that seem to stem from invisible roots.
The question then becomes: how can you turn the tide? How do you pinpoint the bottlenecks in your development process and empower your team to deliver faster, higher-quality results without burning out?
In this article, I will use three specialized software packages, Jellyfish, LinearB and Pluralsight Flow, as examples. They have a lot in common, and each is valuable to any team that adopts it.
But they also differ in meaningful ways, so each company needs to find its own best fit. Performance management describes team dynamics as a whole, but let’s take a closer look at what it really means, what you’re really measuring, and how it can help you as a business.
Some of the software on the market also surfaces statistics for the company as a whole and tries to shed light on that wider picture as well.
DORA (DevOps Research and Assessment) metrics provide a data-driven framework for measuring software delivery performance and operational efficiency. These four key metrics, deployment frequency, lead time for changes, change failure rate, and mean time to restore service, have become the industry standard for evaluating DevOps effectiveness and identifying areas for improvement in your development lifecycle.
For example, using these metrics, you may discover that while you have a high deployment frequency, you also have a high change failure rate. This indicates a need to focus on improving testing and quality assurance processes.
DORA metrics are an integral part of all the solutions on the market, so as a user you can see cycle time broken down into pickup time, review time, and deployment time. This makes it easy to identify weaknesses in your CI/CD flow or slow response times from team leads in certain situations.
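To make the four metrics concrete, here is a minimal sketch of how they could be computed from a list of deployment records. The `Deployment` shape and its field names are assumptions for illustration only, not any vendor’s actual data model; the real tools derive these figures from your CI/CD and incident history automatically.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical record shape for illustration.
@dataclass
class Deployment:
    committed_at: datetime               # first commit of the change set
    deployed_at: datetime                # when the change reached production
    failed: bool                         # did this deployment cause a production failure?
    restored_at: datetime | None = None  # when service was restored, if it failed

def dora_metrics(deployments: list[Deployment], period_days: int) -> dict:
    """Compute the four DORA metrics over a reporting period."""
    if not deployments:
        return {}
    failures = [d for d in deployments if d.failed]
    lead_times = [d.deployed_at - d.committed_at for d in deployments]
    restore_times = [d.restored_at - d.deployed_at for d in failures if d.restored_at]
    return {
        "deployment_frequency_per_day": len(deployments) / period_days,
        "lead_time_for_changes": sum(lead_times, timedelta()) / len(lead_times),
        "change_failure_rate": len(failures) / len(deployments),
        "mean_time_to_restore": (
            sum(restore_times, timedelta()) / len(restore_times) if restore_times else None
        ),
    }
```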
Adding an automatic code acceptance rule that triggers once a certain percentage of code coverage has been achieved is a necessary step in getting the best out of your team. Wiring notifications into your Slack workspace is also a good way to speed up response times in certain situations.
Every team in a company can agree on two or three events where notifications are welcome. Adding some of these metrics to your company’s OKRs is also good practice.
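As a sketch of what such a notification could look like, assuming you have created a Slack incoming webhook for the channel (the URL and the example message below are placeholders):

```python
import json
from urllib.request import Request, urlopen

# Placeholder URL: create a real one via Slack's "Incoming Webhooks" app.
SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"

def notify_slack(message: str) -> None:
    """Post a metric alert to a Slack channel via an incoming webhook."""
    payload = json.dumps({"text": message}).encode("utf-8")
    req = Request(SLACK_WEBHOOK_URL, data=payload,
                  headers={"Content-Type": "application/json"})
    urlopen(req)  # Slack answers with a plain "ok" body on success

# Illustrative event: a review that has been waiting longer than the team agreed.
notify_slack(":hourglass: A merge request has been waiting for review for over 4 hours.")
```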
Code coverage is the percentage of code exercised by tests. So if the code sent to a team lead for acceptance has, say, 80% of its lines covered by tests, it can be accepted automatically.
Some team leads may also want to be notified when this happens, via the Slack workspace notifications mentioned above. Writing all kinds of tests is an essential part of any skill matrix, and all your code should be covered by appropriate tests.
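Here is a minimal sketch of such a coverage gate, assuming a Cobertura-style coverage report and GitLab’s merge request approval endpoint; the project ID, MR number, and token below are illustrative placeholders, not real values:

```python
import xml.etree.ElementTree as ET
from urllib.request import Request, urlopen

COVERAGE_THRESHOLD = 0.80  # the 80% gate from the example above

def line_coverage(report_path: str) -> float:
    """Read the overall line-rate from a Cobertura-style coverage report."""
    root = ET.parse(report_path).getroot()
    return float(root.attrib["line-rate"])

def approve_merge_request(gitlab_url: str, project_id: int, mr_iid: int, token: str) -> None:
    """Approve a merge request through GitLab's REST API."""
    req = Request(
        f"{gitlab_url}/api/v4/projects/{project_id}/merge_requests/{mr_iid}/approve",
        method="POST",
        headers={"PRIVATE-TOKEN": token},
    )
    urlopen(req)

# Placeholder project ID, MR number, and token for illustration only.
if line_coverage("coverage.xml") >= COVERAGE_THRESHOLD:
    approve_merge_request("https://gitlab.example.com", 42, 7, "glpat-placeholder")
```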
Code churn, or rework, metrics are more indicative of developer seniority and the overall quality of the code previously written. Churn counts recently written code that gets rewritten or deleted, where “recent” means younger than some threshold X, typically a few weeks.
Lower numbers indicate less rework, which tends to come with seniority, while frequent spikes of high churn point to more junior skills. Is code that is only a few days old already being rewritten? That is a signal worth investigating.
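For illustration, here is a rough file-level proxy for churn computed straight from git history. The 21-day window is an assumption, and the commercial tools use far more sophisticated line-level analysis; this sketch only shows the idea:

```python
import subprocess
from datetime import timedelta

CHURN_WINDOW = timedelta(days=21)  # assumed threshold: edits within ~3 weeks count as churn

def churn_ratio(repo_path: str) -> float:
    """Fraction of file edits that rework a file touched within the churn window."""
    # One NUL-prefixed timestamp line per commit, followed by the files it touched.
    log = subprocess.run(
        ["git", "-C", repo_path, "log", "--format=%x00%ct", "--name-only"],
        capture_output=True, text=True, check=True,
    ).stdout
    last_touched: dict[str, int] = {}
    edits = churned = 0
    ts = 0
    for line in log.splitlines():
        if line.startswith("\x00"):
            ts = int(line[1:])          # commit timestamp (log runs newest first)
        elif line:
            previous = last_touched.get(line)
            if previous is not None and abs(previous - ts) <= CHURN_WINDOW.total_seconds():
                churned += 1            # same file edited twice within the window
            edits += 1
            last_touched[line] = ts
    return churned / edits if edits else 0.0

print(f"Churn ratio: {churn_ratio('.'):.0%}")
```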
Don’t forget to use retrospectives to reflect with your team on your goals for code coverage and code churn.
The first good practice is to import historical event data from Jira and GitLab, so make sure to ask your software vendor to connect that history. Why do this?
First of all, you need baseline data to compare against: this period versus that period, these two sprints against those two, this team against that team, one developer against another. A minimal sketch of pulling such history appears a little further below.
Past data is important.
Your own team will also remember notable sprints or refactorings and know the real context of those situations. Seeing them reflected in the team metrics explains what really happened.
Second, parts of your agile transformation may have been missed, so this past data will point you to lingering problems in your tasks, sprints, and Jira hygiene, or in your dailies, retrospectives, and planning. Importing your history from Jira and GitLab is a must.
Third, invite your whole team in to see their performance, metrics, and comparisons. Agile and DevOps transformations are often thoroughly misunderstood, and these kinds of solutions exist to help you complete them.
More importantly, if you use them over time, you will be able to avoid making the same mistakes again. People come and go within an IT company, but a culture of high-quality coding remains intact. In the long run, you can see why it is so valuable.
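As promised above, here is a minimal sketch of pulling historical issues from Jira’s REST search API to build such a period-over-period baseline. The instance URL, credentials, project key, and dates are all placeholders to adapt to your own setup:

```python
import json
from urllib.parse import urlencode
from urllib.request import Request, urlopen

# Placeholder instance and credentials; adjust to your own Jira setup.
JIRA_URL = "https://yourcompany.atlassian.net"
AUTH_HEADER = {"Authorization": "Basic <base64-of-email:api-token>"}

def issues_resolved_between(project: str, start: str, end: str) -> list[dict]:
    """Pull historical issues from Jira's REST search API for a baseline period."""
    jql = f'project = {project} AND resolved >= "{start}" AND resolved <= "{end}"'
    query = urlencode({"jql": jql, "fields": "created,resolutiondate", "maxResults": 100})
    req = Request(f"{JIRA_URL}/rest/api/2/search?{query}", headers=AUTH_HEADER)
    return json.loads(urlopen(req).read())["issues"]

# Compare two illustrative periods, e.g. before and after a process change.
before = issues_resolved_between("TEAM", "2024-01-01", "2024-03-31")
after = issues_resolved_between("TEAM", "2024-04-01", "2024-06-30")
print(f"Issues resolved: {len(before)} -> {len(after)}")
```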
Each solution offers a slightly different approach depending on your needs. In my opinion, Pluralsight Flow is the most agile in how it structures its links and statistics, which makes it the best fit for younger teams and companies.
Jellyfish is geared towards more mature teams, with many more statistics covered, while LinearB sits somewhere between the two. The integration process takes a couple of weeks.
It takes time to connect Jira and GitLab to the software of your choice, and importing your histories from both is a separate process that adds to the timeline.
A pro tip: verify data integrity during the import process. The onboarding support teams for all three packages are highly trained and know how to read your team’s statistics.
Performance tracking gains significant value when it provides teams with meaningful statistics to explain specific situations and challenges. By implementing these tools, you can build a more transparent and inclusive culture where data drives improvement rather than criticism.
Importantly, this approach differs from purely metrics-driven management. The ultimate goal remains improving team collaboration, productivity, and product quality through better visibility and understanding.
Additionally, sharing your team’s performance data with AI assistants for feedback represents an emerging option, making the export capabilities of your chosen solution increasingly important for future applications.
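As a sketch, such an export might be as simple as a JSON snapshot. The field names and values below are purely illustrative; use whatever export format your chosen tool offers (CSV, JSON, or its reporting API) and whatever fields matter to your team:

```python
import json

# Purely illustrative snapshot; none of these numbers are real.
team_snapshot = {
    "team": "payments",
    "period": {"from": "2024-04-01", "to": "2024-06-30"},
    "dora": {"deployment_frequency_per_day": 1.4, "change_failure_rate": 0.08},
    "coverage": 0.82,
    "churn_ratio": 0.11,
}

with open("team_snapshot.json", "w") as f:
    json.dump(team_snapshot, f, indent=2)
# The resulting file can be uploaded to an AI assistant for a second opinion.
```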
Implementing the right metrics and software tools can transform your IT team’s performance by providing visibility into processes, identifying bottlenecks, and fostering a culture of continuous improvement.
Remember that the goal isn’t to simply measure performance but to use these insights to create a more collaborative, transparent, and efficient development environment that benefits both your team and your business outcomes.