Technical Project Management Lead at Virtual Health
With 12 years in IT, he has delivered solutions for global brands like Skype, Microsoft, and Swedbank. Now at Virtual Health, Igor is revolutionizing US healthcare. He has launched startups, streamlined project management, and cut costs through data analysis.
Problem Definition
Our development department comprised approximately 70 individuals spread across two continents. Each team consisted of at most 10 people, one team lead, and a dedicated tech project manager, which is a fairly standard setting.
Project managers reported progress by gathering data from team leads. Senior management had several concerns about the technical department, the most costly item in the company’s budget, which raised the following questions:
– How effectively are the teams working?
– Do we have sufficient personnel?
– Why do certain tasks take so much time?
– How accurate is the planning?
We used to employ standard monitoring tools for high-level work status tracking — dashboards that displayed a histogram of status distribution for all tasks. We could see how many tasks were in the backlog, in progress, under review, or in testing. However, this data wasn’t sufficient to understand how much time each stage of the process took. Though we knew the number of tasks, we still lacked information on time or greater granularity. The statuses were high-level, and at that point, it was difficult to observe trends in how the queue changed over time — whether it was consistently long or only temporarily so.
Research
Products like BoostDev, Waydev, and LinearB might have been worth considering, since they offer a good set of metrics out of the box. However, armed with Excel, the YouTrack API, and Google’s web interface, we automated a daily export of all tasks from YouTrack.
We added information about teams and individuals and began analysing how much time tasks spent in various statuses – Todo, In Progress, Ready For Review, In Review, In QA. We also analysed how long transitions between statuses took, which allowed us to identify bottlenecks in the process.
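The core of this analysis is turning a task’s status-change history into time spent in each status. A minimal sketch of that computation — the task history, timestamps, and status names below are illustrative sample data, not our real export:

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical status-change history for one task, as it might be
# reconstructed from a daily issue-tracker export: (timestamp, new status).
history = [
    ("2023-03-01 09:00", "Todo"),
    ("2023-03-02 10:00", "In Progress"),
    ("2023-03-06 10:00", "Ready For Review"),
    ("2023-03-09 10:00", "In Review"),
    ("2023-03-10 10:00", "In QA"),
    ("2023-03-13 10:00", "Done"),
]

def time_in_status(history):
    """Return the hours a task spent in each status.

    Each status interval runs from the moment the task entered the
    status to the moment it entered the next one; the final status
    ("Done") has no end point, so it is not counted.
    """
    fmt = "%Y-%m-%d %H:%M"
    hours = defaultdict(float)
    for (start, status), (end, _) in zip(history, history[1:]):
        delta = datetime.strptime(end, fmt) - datetime.strptime(start, fmt)
        hours[status] += delta.total_seconds() / 3600
    return dict(hours)

print(time_in_status(history))
# e.g. {'Todo': 25.0, 'In Progress': 96.0, 'Ready For Review': 72.0, ...}
```

Summing these per-status hours across all exported tasks, grouped by team, is what produced the dashboards discussed below.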
A couple of insights we gained from analysing dashboards based on time and teams:
– Review took 80 per cent of development time.
– In half of the teams, significant bottlenecks occurred during the review. Specifically, the time it took for a task to be picked up for review by a reviewer almost equalled the development time. There was a clear lack of people who could review code within the team.
– The third bottleneck in the process was the testing stage. Depending on priorities, tasks waited for testing for 3 to 27 days (3 for critical priority).
– A large number of task transitions from the In Review stage back to In Progress raised questions about the review stage – why did code need so much rework so often? Especially since this metric directly correlated with review time.
– Reviewers often conducted testing instead of the QA team.
– Some programming tasks took around a month to complete – these were very large tasks that required decomposition into less risky product changes.
– It was evident that some teams were truly overloaded with work, while others were not.
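The review-pickup insight above – review wait almost equalling development time in half the teams – falls out of a simple per-team aggregation. A sketch of how such a check might look; the team names, hours, and the 0.8 threshold are assumptions for illustration:

```python
from collections import defaultdict

# Hypothetical per-task measurements:
# (team, development hours, hours the task waited to be picked up for review).
tasks = [
    ("alpha", 40, 38),
    ("alpha", 16, 15),
    ("beta", 24, 4),
    ("beta", 32, 6),
]

def review_bottlenecks(tasks, threshold=0.8):
    """Flag teams whose average review wait approaches average dev time.

    Returns {team: wait/dev ratio} for every team whose average review
    wait is at least `threshold` of its average development time.
    """
    dev, wait = defaultdict(list), defaultdict(list)
    for team, dev_hours, wait_hours in tasks:
        dev[team].append(dev_hours)
        wait[team].append(wait_hours)
    flagged = {}
    for team in dev:
        avg_dev = sum(dev[team]) / len(dev[team])
        avg_wait = sum(wait[team]) / len(wait[team])
        if avg_wait >= threshold * avg_dev:
            flagged[team] = round(avg_wait / avg_dev, 2)
    return flagged

print(review_bottlenecks(tasks))
# here team "alpha" is flagged: tasks wait almost as long as they take to build
```

A ratio close to 1.0 is exactly the "lack of reviewers" signal described above: a flagged team needs more people who can review code.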
By analysing this data, we realised that some processes, as well as resource strategies, appeared to be inefficient.
Implementing New Routines
Having identified the bottlenecks, it was time to implement changes based on our research results. Here is a list of routines and practices we altered.
– Replaced the code review guidelines with more effective checklists to ensure uniform requirements for programmers and reviewers, leading to a reduction in code review time.
– Introduced SLAs for the testing team and hired additional testers to address the bottleneck, using the data to justify the extra headcount.
– Expanded the team with more code reviewers, eliminating this bottleneck, and redistributed team members to the teams that historically tended to have higher workloads.
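Once SLAs exist, the same daily export can verify them automatically. A minimal sketch of such a check; the SLA targets mirror the 3-day critical figure mentioned earlier, while the other priorities, task IDs, and wait times are hypothetical:

```python
# SLA targets: the maximum days a task may wait for testing, per priority.
# The 3-day critical target comes from the article; the rest are assumed.
SLA_DAYS = {"critical": 3, "major": 7, "normal": 14}

# Hypothetical waiting times observed in the exported data:
# (task id, priority, days the task waited for testing).
waits = [
    ("VH-101", "critical", 2),
    ("VH-102", "major", 9),
    ("VH-103", "normal", 27),
]

def sla_breaches(waits, sla=SLA_DAYS):
    """Return the tasks that waited for testing longer than their SLA allows."""
    return [(task, prio, days) for task, prio, days in waits if days > sla[prio]]

print(sla_breaches(waits))
# the 27-day wait from the old process would now show up as a breach
```

Running a check like this on every export turns the SLA from a promise into a monitored metric.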
Results
As you can see, we collected relatively simple metrics, and this alone enabled us to observe the process in numbers and accelerate it by an average of 30 per cent. We also gained a tool for understanding the need for additional resources.
Most importantly, after we implemented more effective checklists to ensure uniform requirements for programmers and reviewers, code review time dropped from 80 to 30 per cent of development time.
No doubt, you can find more precise systems on the market, which gather data not from a single source but from multiple sources, and automatically measure other crucial metrics such as refactoring and code rewriting, providing even greater transparency in processes.
Examples of these systems include GitPrime (now Pluralsight Flow), Code Climate, and CodeScene. Give it a try and see how it can accelerate your team.
Key Takeaways
1. As businesses expand, it becomes increasingly challenging to understand the issues in processes that hinder effective growth. Managing small teams differs significantly from managing large engineering departments of 50 or more individuals, especially distributed ones.
2. Relying on instincts is not an option when you have measurable data at your disposal.
3. When explaining the need for hiring or for process changes to the business, support each point with numbers and with expectations of how the process will improve if the request for additional funding is granted.
4. Modern leaders must increasingly grasp data analysis methods and manage based on numbers; such is the industry’s demand.