The DevOps Dilemma

Are we focusing on resource efficiency to the detriment of flow?

As many DevOps and Agile teams know all too well, teamwork makes the dream work when it comes to flow efficiency. There are few things more satisfying than an efficient DevOps operation and ticking items off the to-do list. But with today’s DevOps teams often stretched thin, it’s easy to focus on the wrong things and lose sight of the bigger picture. We’re talking about resource efficiency versus flow efficiency.

Prioritizing resource efficiency above flow efficiency could be holding teams back and slowing big-picture progress. In this article, we’ll discuss why examining and measuring how items flow through the system is just as important as assessing individual efficiency.

A better way to track teamwork

Focusing on the output of an individual contributor in a value stream could actually be harming the overall performance of the system. It might seem counterintuitive, but DevOps teams must begin by looking at the bigger picture – in other words, overall organizational efficiency – before highlighting and breaking down resource inefficiencies.

To achieve this, DevOps teams need to start monitoring the right things. Tracking time spent on coding projects, for example, measures only individual output; it says little about the collective impact of the group or how work moves through the system as a whole.

Instead of measuring individual resource input, managers could consider monitoring cycle time. Examining the duration from the start to the finish of each work item gives a better picture of flow efficiency before homing in on individual output. Aging is another metric worth exploring: seeing how long an item has been held at a particular stage could allow managers to make better decisions and shift allocations accordingly.
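As a minimal sketch of what these two metrics could look like in practice, the snippet below computes cycle time for finished items and current age for in-flight ones. The work items and their field names (`stage`, `started`, `finished`) are illustrative assumptions, not any particular tracker’s schema.

```python
from datetime import datetime

# Hypothetical work items exported from a tracker; field names are assumptions.
items = [
    {"id": "T-1", "stage": "done",   "started": datetime(2022, 9, 1),  "finished": datetime(2022, 9, 9)},
    {"id": "T-2", "stage": "test",   "started": datetime(2022, 9, 5),  "finished": None},
    {"id": "T-3", "stage": "review", "started": datetime(2022, 8, 20), "finished": None},
]

now = datetime(2022, 9, 12)

# Cycle time: start-to-finish duration of completed items.
cycle_times = [(i["finished"] - i["started"]).days for i in items if i["finished"]]
print("avg cycle time (days):", sum(cycle_times) / len(cycle_times))

# Aging: how long each unfinished item has been in flight; large values flag stuck work.
for i in items:
    if not i["finished"]:
        print(i["id"], i["stage"], "age (days):", (now - i["started"]).days)
```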

Likewise, it could also be worth monitoring WIP (work-in-progress) levels at each stage of the value stream and aiming to keep them low. Reducing batch sizes, both in story size and in the movement of items between stages, could mean a steadier rate of progression. It’s also good practice to ensure that items progress all the way to completion through the value stream before new tasks are allocated to the same team member.
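One way to make the WIP connection concrete is Little’s Law, which ties these metrics together: average cycle time ≈ average WIP ÷ average throughput, so holding WIP down directly shortens cycle time. The sketch below counts WIP per stage against per-stage limits; the stage names, limits, and throughput figure are assumptions for the example, not recommended values.

```python
from collections import Counter

# Current stage of each in-flight item (hypothetical data).
stages = ["dev", "dev", "dev", "review", "test", "test", "test", "test"]
wip_limits = {"dev": 3, "review": 2, "test": 3}  # illustrative limits only

wip = Counter(stages)
for stage, limit in wip_limits.items():
    flag = "OVER LIMIT" if wip[stage] > limit else "ok"
    print(f"{stage}: WIP={wip[stage]} limit={limit} -> {flag}")

# Little's Law: cycle time ~= WIP / throughput.
throughput_per_day = 1.5  # assumed average items finished per day
print("expected cycle time (days):", sum(wip.values()) / throughput_per_day)
```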

Mitigating work starvation

One key challenge for DevOps teams arises when developers focus on completing their work in its entirety, holding off on releasing tasks to the next phase until everything on their list is done. This creates bottlenecks, resulting in wasted resources, time, and money.

Switching to smaller batch sizes could help mitigate this issue. Large batches often ‘starve’ the testing or implementation stages of the value stream and tend to increase cycle time, because the amount of work on someone’s plate at any given time can warp their sense of efficiency. Smaller batches enable speedier feedback on smaller iterations of new features and updates, allowing the project to progress more quickly overall.
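To see why batch size matters, consider a toy model: if feedback on an item only arrives once its whole batch is handed to the next stage, the first item in a batch of ten waits for the other nine before anyone learns anything. A rough sketch, assuming each item takes one day of work:

```python
# Toy model: feedback on an item arrives only when its whole batch is handed off.
# Assumes each item takes 1 day; purely illustrative numbers.
def avg_feedback_delay(total_items: int, batch_size: int) -> float:
    delays = []
    for n in range(total_items):
        batch_index = n // batch_size
        handoff_day = (batch_index + 1) * batch_size  # batch finishes, then moves on
        delays.append(handoff_day)
    return sum(delays) / total_items

for batch in (10, 5, 1):
    print(f"batch size {batch}: avg days until feedback = {avg_feedback_delay(10, batch):.1f}")
```

With these toy numbers, single-piece flow roughly halves the average wait for feedback compared with a batch of ten (5.5 days versus 10), which is the intuition behind shrinking batches.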

Making visibility a priority

To truly eliminate (or at least reduce) work starvation and achieve a smoother collective effort, the whole team must gain better visibility of the process. A big-picture view of the progress and status of the full value stream is essential to streamlining the flow of tasks through the product team.

Implementing a value stream management platform can lead to much greater clarity, enabling better visibility and control over every team, tool, and pipeline throughout the organization.

With the right software delivery dashboards, managers can examine the rate of value delivery against desired business outcomes. More specifically, analyzing value stream flow metrics lets businesses view overall delivery through a wider lens, supporting better knowledge and stronger decision-making.

These flow metrics can also provide better insight into the organization’s workflows in general, with consistency as the ultimate goal. With tools such as Cumulative Flow Diagrams (CFDs), managers can see how efficiently work is progressing through each stage of the workflow.

Because a CFD presents a project’s data so clearly, every team and individual member can see whether everything is flowing well, with no glitches, bottlenecks, or work-starvation periods. Likewise, bulges, inconsistencies, and discrepancies in the graph signal to managers that tasks are being held up, left incomplete, or not passed on to the next phase.

Occasionally, managers may notice that a band in a CFD flattens or disappears altogether. That means someone is not receiving work passed on from others, or a team member is holding on to their batch of work. Although overall progress continues (cumulative counts never decline, so the graph always trends upward), managers can clearly see the areas where flow efficiency needs attention. By looking at the whole value stream this way, project managers can synchronize their team’s tasks effectively and allocate duties so that everyone works in tandem.
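As a rough illustration of how that flat-band signal could be computed rather than eyeballed, the sketch below takes daily cumulative arrival counts per stage (the bands of a CFD) and flags any stage whose band has stopped widening, i.e. a stage starved of incoming work. The stage names, counts, and threshold are invented for the example.

```python
# Cumulative count of items that have *entered* each stage, per day.
# These are the bands of a CFD; numbers are invented for illustration.
cfd = {
    "dev":  [2, 4, 6, 8, 10, 12],
    "test": [0, 2, 4, 4,  4,  4],  # stops growing: testing is starved
    "done": [0, 1, 3, 4,  4,  4],  # stops growing: nothing is finishing
}

STALL_DAYS = 3  # flag a stage whose arrivals have been flat this long (assumed threshold)

for stage, counts in cfd.items():
    recent = counts[-STALL_DAYS:]
    if len(set(recent)) == 1:
        print(f"{stage}: no new work arriving for {STALL_DAYS} days -> possible starvation")
    else:
        print(f"{stage}: flowing")
```

Real CFD data would come from a tracker or value stream management platform rather than hard-coded lists, but the flat-band check is the same signal a manager reads off the chart.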

Essentially, many development teams are unwittingly damaging the business’s overall efficiency simply by not seeing the bigger picture and focusing on resource efficiency, which often leads to flow inefficiency.

At a time when software development is becoming increasingly competitive, Agile and DevOps professionals must move away from an individual approach to value delivery and toward a more system-centric way of managing, to better optimize long-term flow efficiency.

Bob Davis

Bob Davis, CMO at Plutora, has more than 30 years of engineering, marketing, and sales management experience with high technology organizations, from emerging start-ups to Global 500 corporations. Before joining Plutora, Bob was the Chief Marketing Officer at Atlantis Computing, a provider of Software Defined and Hyper Converged solutions for enterprise customers. He has propelled company growth at data storage and IT management companies including Kaseya (co-founder, acquired by Insight Venture Partners), Sentilla, CA, Netreon (acquired by CA), Novell and Intel.
