Agile Accelerate

Leave Nothing on the Table



Deliver Features to Customers 5 Times Faster

Does it seem to take a long time for your company to get a feature out the door? Have you done a “Five Whys” on the problem? Does one of the root causes seem to be the number of dependencies between teams, or the number of “steps” it takes to get something released? Chances are that it has something to do with the way your teams are structured.

The debate between feature teams and component teams is as old as Scrum itself (which is getting to be Keith Richards territory). It usually boils down to this argument:

  • Component teams are more efficient because a small number of SMEs who know the code inside out are the most effective people to work on that part of the code. Anyone else poking around in the code will make a mess and create technical debt.

And the discussion about possible restructuring ends there.

It ends there because there is often a hidden reason why leaders don’t want to move to feature teams. If pressed, concerns like the following may emerge (if not pressed, they will stay under the radar):

  • “As a leader, I am defined by the component I own and the team(s) that work on it. I am also an SME in that component, which gives me job security. Moving to a feature team model would be threatening – would I have a team? Would my people be scattered across multiple teams? How would I have time to attend all of those teams’ meetings? Would I still own a component?” And so on.

We will address these concerns later. But first, let’s look at a real case study…

I was a transformation coach at a large software company that was struggling with exactly this issue. Minor customer requests or enhancements seemed to take forever to deliver, so I worked with a colleague to do a deep dive into the problem. We selected a simple feature enhancement that had taken twelve weeks to deliver once work began, and we inspected every Jira ticket related to it. The following graphic shows the dependencies among all of the teams involved in the solution. It is blurred intentionally, of course.

Each swim lane represented a different team – front end, back end, UX, globalization, various components, etc. What was fascinating about this investigation was that the aggregate time expended by all of the teams was six times what a single team would have needed. Not because any of the teams were inefficient – on the contrary, they worked their tickets efficiently. But even when a team’s part took only a day or two, the work then sat in a queue waiting for the next team to pick it up. That was where the end-to-end inefficiency lay.

I used the Flow Efficiency metric to get a sense of the scale of the problem. Flow Efficiency is defined as Value-added Time / Total Elapsed Time, expressed as a percentage. In this case, elapsed time was 6 sprints, or 12 weeks (60 working days), and the aggregate value-added effort was about 60 staff-days. The metric is usually calculated on a per-team basis, which is a straightforward calculation for a single team. But when multiple teams contribute to the Value-added Time, the naive calculation overstates Flow Efficiency (here it would have come out near 100%). One can divide by the number of contributing teams, which in this case would have given us a Flow Efficiency of just 8% or so, but I think that is misleading. So we used a modified metric, which was, effectively, “How long COULD it have taken with a single cross-functional team?” versus “How long did it actually take?” With that metric, the Flow Efficiency was 16%.
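
For the scientifically minded, here is the arithmetic as a minimal sketch. The team count (about twelve) and the size of the hypothetical single cross-functional team (six people) are illustrative assumptions consistent with the percentages above; only the elapsed time and aggregate effort come from the actual data.

```python
# Flow Efficiency sketch for the case study above.
# Only elapsed_days and value_added_staff_days come from the real data;
# num_teams and single_team_size are illustrative assumptions.

elapsed_days = 60            # 6 sprints = 12 weeks of working days
value_added_staff_days = 60  # aggregate effort across all teams
num_teams = 12               # assumed; consistent with the ~8% figure
single_team_size = 6         # assumed hypothetical cross-functional team

# Naive single-team formula applied to multi-team data overstates it:
naive = value_added_staff_days / elapsed_days       # 1.00 -> "100%"

# Dividing by the number of contributing teams:
per_team = naive / num_teams                        # ~0.08 -> ~8%

# Modified metric: how long COULD one cross-functional team have taken?
single_team_days = value_added_staff_days / single_team_size  # ~10 days
modified = single_team_days / elapsed_days          # ~0.17 -> ~16%

print(f"naive: {naive:.0%}, per-team: {per_team:.0%}, modified: {modified:.0%}")
```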

This data was presented to senior leadership and was considered very eye-opening. The obvious solution was to create cross-functional feature teams. Of course, there were concerns:

  • New teams accessing code bases they would be unfamiliar with (of course, this points to deeper issues, like lack of automation and existing technical debt)
  • Loss of ownership of components and people (as mentioned above)
  • Some features might not lend themselves to the approach
  • A general concern about disrupting the organization

To make a long story short, we took an approach that addressed all of the concerns, which consisted of these elements:

  • Start small with a pilot program – we created 6 new teams, which represented only 10% of the org
  • Have a change management process to onboard middle managers and team managers
  • Designate SMEs on the component teams to reserve some bandwidth to help people who are new to the code base with software design decisions
  • Implement a fast-feedback continuous improvement program to quickly address concerns and resolve issues
  • Establish measurable metrics to represent the goals we wanted to achieve, along with checks and balances

The metrics chosen were:

  • Flow Efficiency, defined as described above (this is like a lagging KR in OKR parlance to measure the key objective). Because we expected some initial challenges, we also measured how fast the teams ramped up to a steady state level of flow efficiency.
  • Subject Matter Expertise, to measure how quickly new developers got up to speed on unfamiliar components (leading KR)
  • Team satisfaction (balancing KR)

Of course, there were bumps along the way, but by all indicators the program was very successful: Flow Efficiency improved by a factor of 5. Across six different initiatives, average flow efficiency for the duration of the pilot was 80%. Even better, the teams ramped up to an average steady-state flow efficiency of 89%, and they did so fairly quickly – in an average of 1.3 sprints, or 2-3 weeks.

Average team satisfaction increased by 30% over a period of six months, mostly because developers got out of their rut and learned new things. Subject matter expertise improved by 38% on an annualized basis.

Details of the methodology, practices, measurements, and learnings were presented in a white paper called “Winning the Concept to Cash Game with Feature Teams” at the 2021 XP Conference by Martina Ziegenfuss and myself.

Not unlike a stock investment disclaimer, actual results may vary and the variations may be substantial. But if you would like to deliver features to your customers five times faster, there is definitely value in considering an approach such as this.



Extending Cross-Functionality to Programs

There is an excellent rationale for cross-functional teams.  For large programs, that rationale scales naturally to the program level.  But, for some reason, this isn’t always recognized.

TEAM CROSS-FUNCTIONALITY

Let’s say you have a team with the following profile of highly siloed individuals:

[Figure: a highly siloed team profile – one specialty per person]

This is great if you have a profile of stories that fits it perfectly, as follows:

[Figure: a sprint story profile that exactly matches the team profile]

But what if your set of sprint stories looks more like this?:

[Figure: a sprint story profile that does not match the team profile]

In this case, you have a deficiency of analysts, back-end developers, and QA people to implement the stories that your aggregate team capacity might otherwise support.  And, your UX folks and front-end developers will be twiddling their thumbs for part of the sprint.

So, what to do?

Since you are only as good as your lowest-capacity function (which appears to be QA in this particular example), you will have to scale back the number of stories to fit the profile, as shown:

[Figure: stories scaled back to fit the QA constraint]

Now, everyone is underutilized, except for QA.  Good luck finding something useful for everyone else to do in the next two weeks.

The net result is that your team is perhaps 30% less productive than it could be (eyeballing the graphic).
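
The bottleneck arithmetic behind that estimate is easy to sketch. Here is a minimal example with invented capacity and demand numbers (the graphics above reflect the real profiles; these figures are only illustrative):

```python
# Bottleneck sketch: with rigid silos, sprint throughput is capped by the
# most constrained role. All numbers are invented for illustration.

capacity = {"analyst": 10, "ux": 10, "front_end": 10, "back_end": 10, "qa": 10}
demand   = {"analyst": 12, "ux": 6, "front_end": 7, "back_end": 12, "qa": 15}

# Fraction of the desired sprint each role could support on its own:
coverage = {role: capacity[role] / demand[role] for role in demand}

# The sprint is capped by the worst-covered role:
bottleneck = min(coverage, key=coverage.get)
cap = coverage[bottleneck]
print(f"bottleneck: {bottleneck}, sprint capped at {cap:.0%} of desired scope")

# Everyone else is underutilized in proportion:
for role in coverage:
    print(f"{role}: utilized {min(cap / coverage[role], 1.0):.0%}")
```

With these made-up numbers, QA caps the sprint at about two-thirds of the desired scope while UX sits at 40% utilization – the same shape as the graphic above.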

However, if you take advantage of standard cross-functional teamwork, your team’s profile may look something like this:

[Figure: a cross-functional team profile with overlapping skills]

Note that by “cross-functional” we do not mean that everyone should be able to do anything.  There are very valid reasons (education, experience, proclivity, enthusiasm) why certain people are ideally suited for certain kinds of work.  Think of the cross-functional nature of someone’s role on a team as a bell curve (alternatively, some talk about T-shaped employees – the T is just the bell curve upside down, as the Y-axis orientation is arbitrary).  The more the curve is spread out, the more they are able to take on other people’s roles.  On a good cross-functional team, the bell curves overlap “somewhat,” meaning that everyone can take on a little bit of someone else’s role, although perhaps not as efficiently.  Still, this allows a team to take on a wide variety of “profiles” of sprint work, as will always be necessary.

So, for example, in the case above,

[Figure: the mismatched sprint story profile from above]

people will adjust to the desired “sprint needs” profile as follows:

[Figure: team members flexing to match the sprint-needs profile]
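
In capacity terms, that flexing amounts to idle specialists lending time to the constrained role at reduced efficiency. A minimal sketch, again with invented numbers, assuming out-of-specialty work runs at 70% efficiency:

```python
# Cross-functional flexing sketch: surplus capacity in overstaffed roles is
# applied to the bottleneck role at a discount. All numbers, including the
# 70% out-of-specialty efficiency, are invented for illustration.

OUT_OF_ROLE_EFFICIENCY = 0.7

capacity = {"ux": 10, "front_end": 10, "qa": 10}
demand   = {"ux": 6,  "front_end": 7,  "qa": 15}

# Surplus effort in the overstaffed roles...
surplus = sum(max(capacity[r] - demand[r], 0) for r in capacity)  # 4 + 3 = 7
# ...covers most of the QA shortfall, even after the discount:
qa_shortfall = demand["qa"] - capacity["qa"]                      # 5
qa_gained = surplus * OUT_OF_ROLE_EFFICIENCY                      # 4.9

print(f"QA shortfall: {qa_shortfall} units, recovered by flexing: {qa_gained:.1f}")
```

With these numbers, nearly the whole QA gap closes, instead of the sprint being scaled back by a third.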

PROGRAM LEVEL CROSS-FUNCTIONALITY

Don’t forget that this model can be applied to more than just teams.

For example, there can be a tendency for teams to develop “specific expertise,” due perhaps to knowledge held by certain BSAs or specific architectural or design skills on the development team.  The program may then tend to assign stories based on this expertise, under the theory that this is the most efficient way to get work done.  Unfortunately, this only drives each team further into a functional silo.  It can become a vicious spiral, and soon you may hear things like “well, we don’t have generic teams and, at this point, the schedule is paramount, so we need to keep assigning program work according to the team best suited to do it.”  As a result, program backlogs will consist of stories pre-targeted to specific teams, even arbitrarily far out in time.  Imagine what happens when the stakeholders decide to re-prioritize epics or add new features, or a new dependency arises that doesn’t line up with the ideal team at the right time.  The result will be a work profile that doesn’t match the “team profile,” as follows:

[Figure: program work profile mismatched to specialized team profiles]

Enter a cadre of fix-it people – project managers, oversight groups, resource managers, program managers – all trying to re-balance the backlog: shuffling stories around, adding people to teams, squeezing some teams to do more work while other teams sit idle and get handed low-value filler work.  It is the same wasteful resource management nightmare that is so easily solved by cross-functional teams, except this time at the program level.

So, eliminate the waste and follow these simple program-level guidelines:

  1. Create a fully prioritized program backlog without consideration for the teams that will be executing the stories.
  2. Once per sprint, have a program planning session or meta-scrum (Uber-PO, Uber-SM, team representatives) where the candidate stories for the upcoming sprint are identified for each team.  Include a little more than each team’s velocity would otherwise indicate, in case they are able to take on more than their average.  (A sketch of this allocation appears below.)
  3. Make it a goal to avoid specializing teams.

All team “profiles” will be identical and program needs can easily be accommodated.

[Figure: identical team profiles easily accommodating program needs]
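
To make the allocation step (guideline 2) concrete, here is a minimal sketch that deals a single prioritized backlog out to interchangeable teams.  The team names, velocities, and story sizes are all invented for illustration:

```python
# Meta-scrum allocation sketch (guideline 2 above): deal a single prioritized
# backlog out to interchangeable teams, slightly overfilling each team in case
# it can take on more than its average velocity. All names and numbers are
# invented for illustration.

OVERFILL = 1.2  # offer ~20% more than average velocity

backlog = [("story-1", 5), ("story-2", 3), ("story-3", 8), ("story-4", 5),
           ("story-5", 2), ("story-6", 3), ("story-7", 5)]  # in priority order

velocity = {"team-a": 10, "team-b": 10, "team-c": 8}

candidates = {team: [] for team in velocity}
load = {team: 0 for team in velocity}

for story, points in backlog:
    # The next story goes to whichever team has the most headroom left --
    # no pre-targeting, since any team can take any story.
    team = min(load, key=lambda t: load[t] / velocity[t])
    if load[team] + points <= velocity[team] * OVERFILL:
        candidates[team].append(story)
        load[team] += points
    # Stories that don't fit stay at the top of the backlog for next sprint.

for team, stories in candidates.items():
    print(team, stories, f"({load[team]} pts)")
```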

There may be a little bit of short-term inefficiency resulting from having a “slightly less than ideal” team work on particular stories, but the more you do this, the more that inefficiency evaporates.  And the advantages are significant:

  • A holistic view of the program backlog allows you to focus on what is important – delivering value
  • No need to engage the expensive SWAT team of fix-it managers to shuffle around people and project artifacts
  • All team members gain experience and learning, often resulting in greater job satisfaction and higher performing teams
  • No more single point of failure; no more critical path team
  • Far less chaos and confusion, resulting in more focused individuals
  • Extremely easy to manage – program progress is measured by the simple rate at which all teams work through the stories.  Any gaps between targeted scope and expected scope are easy to identify.



Agile Myths Busted

Ever run across these guys?  People whose lack of experience or fear of change causes them to conjure up all kinds of reasons why agile won’t work for their project?

Let’s bust those myths!

Myth: Agile Doesn’t Work for Projects in the Highly Regulated Medical Environment.  (The reason usually given is that FDA regulations require detailed requirements prior to project approval; hence, waterfall.  In reality, however, you can develop in phases, with small incremental sets of requirements, and the FDA requires only enough documentation to demonstrate your process.)

Truth: Abbott Labs overcame medical device regulation and stringent Class III certification and developed the m2000 Real-time PCR Diagnostics System, a human blood analysis tool, with four agile teams.  Compared to the prior methodology in use, this project resulted in a less cumbersome process, fewer defects, a 43% reduction in costs, and a 25% reduction in cycle time.

(Rasmussen, R., Hughes, T., Jenks, J. R., & Skach, J. (2009). Adopting agile in an FDA regulated environment. Proceedings of the Agile 2009 Conference (Agile 2009), Chicago, Illinois, USA, 151-155.)

Myth: Agile Doesn’t Work in Government

Truth: The FBI overcame a CMMI level 3, ISO 9001, government-mandated document-driven waterfall life cycle and developed the Domestic Terrorist Database & Data Warehouse with three agile teams.  Compared to the prior methodology in use, this project resulted in significant improvements in release planning, developer satisfaction, and a focus on the true goal: “to catch bad guys.”

(Babuscio, J. (2009). How the FBI learned to catch bad guys one iteration at a time. Proceedings of the Agile 2009 Conference (Agile 2009), Chicago, Illinois, USA, 96-100.)

For another example, the U.S. Department of Defense developed the Strategic Knowledge Integration Website utilizing three agile teams.  Compared to the prior methodology in use, this project resulted in improved quality, fewer defects, better teamwork, and a 200% productivity increase.

(Fruhling, A., McDonald, P., & Dunbar, C. (2008). A case study: Introducing extreme programming in a U.S. government system development project. Proceedings of the 41st Annual Hawaii International Conference on System Sciences (HICSS 2008), Waikoloa, Big Island, Hawaii, USA, 464-473.)

Myth: Agile Doesn’t Work for Large Products

Myth: Agile Doesn’t Work with Distributed Teams

Truth: Google’s AdWords product busts both of these myths.  With 20 teams and 140 people across 5 different countries, this large agile program was a groundbreaking success at Google and resulted in more predictable releases, higher quality, and an improved ability to accommodate changes, as compared to the prior methodology in use.

(Striebeck, M. (2006). Ssh: We are adding a process. Proceedings of the Agile 2006 Conference (Agile 2006), Minneapolis, Minnesota, USA, 193-201.)

Myth: Agile Doesn’t Work in the Regulated Telecom environment

Truth: British Telecom moved their entire IT department to agile, starting with 2000 people from 2004-2007.  This large transformation improved value stream effectiveness from 10% to 55%, created an attitude of delivering real value to the business through IT, and shifted the company’s perception of IT from a service provider to an integral part of the business.

(http://www.agilistapm.com/casestudy-british-telecom/ and http://scalingsoftwareagility.files.wordpress.com/2008/06/scrumbt-v14.pdf)

Myth: Agile Doesn’t Work for Client-based projects

Myth: Agile Doesn’t Work for Fixed Price projects

Myth: Agile Doesn’t Work well when integrating a Third Party Product

Truth: I coached an agile team at a prominent consulting company through a project with a client who was a well-known record label.  They built a new, fully rebranded eCommerce website using an open source CMS and search engine, and a third-party eCommerce provider.  The site included product bundling, an integrated music player, and social networking integration.  It was implemented using Scrum/XP with a single team of about 12 people over 5 months.  The result was an award-nominated site that improved conversion rates dramatically, proved profitable, and was considered a strong success by both the agency and the client.

Myth: Agile Doesn’t Work for Manufacturing Vehicles

Truth: Wikispeed developed a 4-passenger, 100 mpg, street-legal road car in 3 months using modular, off-the-shelf, carbon-fiber body construction, with no capital investment and no paid employees.  Agile processes were utilized with a single, internationally distributed team.  The project went beyond the prototype phase, and cars can be ordered online.

(http://www.solutionsiq.com/the-agile-ceo/bid/51480/Agile-Innovation-or-How-to-Design-and-Build-a-100-MPG-Road-Car-in-3-Months)

What else ya got?

(note: leads for some of these case studies came from David Rico’s presentation on Lean & Agile Project Management for Large Programs & Projects)



The Math Behind Agile and Automation

Every once in a while, we encounter individuals on our teams who have a healthy dose of skepticism about these new Agile practices they are learning. For those who tend to be scientifically minded, or who need more evidence than just a good story, I have found that it is important to give them real data to look at.

As an example, it is interesting to combine the usual Cost per Defect curve for a software project with a histogram that maps the probability (or frequency) of finding defects to the corresponding project phase. The result is mathematical support both for Agile and for the value of automation and good agile QA practices.

Figure A below shows the typical Cost curve for a waterfall project (source: The Economics of Testing, Rice Consulting) along with an overlay showing the probability of finding defects in various phases of the project.

[Figure A: Cost of Fix per Defect – Waterfall]

As can be seen, most defects are found during the testing phase, as one might expect. Weighting the cost curve by those probabilities, using some industry standard numbers, the average cost to fix a defect comes out to about $490.

Note, however, what would happen if you are able to find defects earlier in the process.

[Figure B: Cost of Fix per Defect – Modified Waterfall]

Figure B shows the same cost curve, but the histogram representing the probability of defects found is pushed earlier in the process. The kinds of practices that might produce such a shift include collaborative QA and developer testing, pairing, and automation (which helps prevent defects from being found in the expensive tail of the curve). This doesn’t mean spending more on QA, just using tighter feedback loops that keep defects from reaching the later phases of the project. So, even with a waterfall process, or a non-agile iterative process, one can easily see how collaborative testing and automation can reduce the cost of defects considerably – in this case, down to $220 per defect.

The Agile cost curve is actually a little different, as shown in Figure C.

[Figure C: Cost of Fix per Defect – Agile, with Automation]

There is still the hockey stick effect when the software goes into production, but the rest of the cost curve would be flat, since the cost of fixing is pretty much the same from iteration to iteration. The defect frequency histogram is drastically different and is flattened and spread out across the entire life cycle of the release.

In this model, Agile practices alone, such as sprint-based functional testing and having QA and developers working off of the same requirements, are responsible for about a factor of two improvement in overall cost of defect fixing per release. Automation and good collaborative practices are responsible for another factor of two, which gets the overall cost per defect down to about $130.
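
For skeptics who want to poke at the numbers themselves, the underlying arithmetic is just a weighted average: the expected cost per defect is the sum, over all phases, of the probability of finding a defect in that phase times the cost of fixing it there. Here is a minimal sketch; the probabilities and costs below are invented for illustration (the real curves come from the industry data cited above) and are merely chosen to land in the neighborhood of the figures quoted:

```python
# Expected cost per defect = sum over phases of P(found in phase) * cost(phase).
# All probabilities and costs are invented for illustration; the $490 / $220 /
# $130 figures in the post come from industry data not reproduced here.

def expected_cost(dist):
    """Weighted-average fix cost for a {phase: (probability, cost)} map."""
    assert abs(sum(p for p, _ in dist.values()) - 1.0) < 1e-9
    return sum(p * c for p, c in dist.values())

waterfall = {                    # most defects surface in test (Figure A)
    "requirements": (0.05, 50), "design": (0.10, 100),
    "coding": (0.20, 200), "testing": (0.58, 500), "production": (0.07, 2000),
}
shifted_left = {                 # collaborative QA + automation (Figure B)
    "requirements": (0.20, 50), "design": (0.25, 100),
    "coding": (0.38, 200), "testing": (0.15, 500), "production": (0.02, 2000),
}
agile = {                        # flat curve across iterations (Figure C);
    "in-iteration": (0.95, 100), # production fixes assumed cheaper thanks
    "production": (0.05, 700),   # to automation and small batches
}

for name, dist in [("waterfall", waterfall), ("shifted-left", shifted_left),
                   ("agile", agile)]:
    print(f"{name}: ${expected_cost(dist):.0f} per defect")
# -> roughly $480, $225, and $130 with these made-up numbers
```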