Agile Accelerate

Leave Nothing on the Table



Deliver Features to Customers 5 Times Faster

Does it seem to take a long time for your company to get a feature out the door? Have you done a “Five Whys” on the problem? Does one of the root causes seem to be the number of dependencies between teams, or “steps” that it takes to get something out the door? Chances are that it has something to do with the way your teams are structured.

The debate between feature teams and component teams is as old as Scrum itself (which is getting to be Keith Richards territory). It usually boils down to a single argument:

  • Component teams are more efficient because a small number of subject matter experts (SMEs) who know the code inside out are the most effective people to work on that part of the code. Anyone else touching the code will make a mess and create technical debt.

And the discussion about possible restructuring ends there.

It ends because there is also often a hidden reason why leaders don’t want to move to feature teams. If pressed, the following concerns may emerge (if not pressed, they will stay under the radar):

  • “As a leader, I am defined by the component I own and the team(s) that work on it. I am also an SME in that component, which gives me job security. Moving to a feature team model would be threatening – Would I have a team? Would my people be scattered across multiple teams? How would I have time to attend all of those teams’ meetings? Would I still own a component?” And so on.

We will address these concerns later. But first, let’s look at a real case study…

I was a transformation coach at a large software company that was struggling with exactly this issue. Minor customer requests or enhancements seemed to take forever to deliver. So I worked with a colleague to do a deep dive into the problem. We selected a simple feature enhancement that took twelve weeks to deliver once the work began and inspected every Jira ticket that related to it. The following graphic shows the dependencies between all of the teams that were involved in the solution. It is blurred intentionally of course.

Each swim lane represented a different team – front end, back end, UX, globalization, various components, etc. What was fascinating about this investigation was that the aggregate amount of time expended by all of the teams was six times what a single team would have needed. Not because any of the teams were inefficient – on the contrary, they worked their tickets efficiently. But a team’s part typically took only a day or two, after which the ticket sat in a queue waiting for the next team to pick it up. That queue time was where the end-to-end inefficiency lay.

I used the Flow Efficiency metric to get a sense of the scale of the problem. Flow Efficiency is defined as Value-added Time divided by Total Elapsed Time, expressed as a percentage. In this case, the elapsed time was 6 sprints, or 12 weeks (60 working days), and the aggregate value-added effort was about 60 staff-days. The metric is usually calculated on a per-team basis, which is straightforward for a single team; when multiple teams contribute to the Value-added Time, however, the naive calculation overstates Flow Efficiency. One can divide by the number of teams, which in this case would have given us a Flow Efficiency of just 8% or so, but I think that is misleading. So we used a modified metric, which was, effectively, “How long COULD it have taken with a single cross-functional team?” versus “How long did it actually take?” With that metric, the Flow Efficiency was 16%.
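Spelled out, the three variants of the calculation look like this (the single-team estimate follows from the six-fold aggregate noted above, 60/6 ≈ 10 staff-days; the divisor of roughly a dozen teams is an illustrative assumption consistent with the 8% figure):

```latex
% Flow Efficiency (FE) = Value-added Time / Total Elapsed Time
\begin{align*}
\text{FE}_{\text{naive}}    &= \frac{60 \text{ staff-days}}{60 \text{ working days}} = 100\%
  && \text{(overstated: value-added time summed across teams)} \\
\text{FE}_{\text{per-team}} &= \frac{100\%}{12 \text{ teams}} \approx 8\%
  && \text{(team count assumed for illustration)} \\
\text{FE}_{\text{modified}} &= \frac{10 \text{ single-team days}}{60 \text{ elapsed days}} \approx 16\%
\end{align*}
```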

This data was presented to senior leadership and was considered very eye-opening. The obvious solution would be to create cross-functional feature teams. Of course, there were concerns:

  • New teams accessing code bases they would be unfamiliar with (of course, this points to deeper issues, like lack of automation and existing technical debt)
  • Loss of ownership of components and people (as mentioned above)
  • Some features might not lend themselves to the approach
  • A general concern about disrupting the organization

To make a long story short, we took an approach that addressed all of the concerns, which consisted of these elements:

  • Start small with a pilot program – we created 6 new teams, which represented only 10% of the org
  • Have a change management process to onboard middle managers and team managers
  • Designate SMEs on the component teams to reserve some bandwidth to help people who are new to the code base with software design decisions
  • Implement a fast-feedback continuous improvement program to quickly address concerns and resolve issues
  • Establish measurable metrics to represent the goals we wanted to achieve, along with checks and balances

The metrics chosen were:

  • Flow Efficiency, defined as described above (this is like a lagging KR, in OKR parlance, measuring the key objective). Because we expected some initial challenges, we also measured how fast the teams ramped up to a steady-state level of flow efficiency. (A minimal sketch of the calculation follows this list.)
  • Subject Matter Expertise, to measure how quickly new developers got up to speed on unfamiliar components (leading KR)
  • Team satisfaction (balancing KR)
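Here is that sketch, a minimal Python illustration of the flow calculation. The data structure and field names are made up for this post, and the “after” elapsed time is back-derived from the 80% result reported below; only the “before” figures come directly from the case study.

```python
from dataclasses import dataclass

@dataclass
class Initiative:
    """One feature initiative. Field names are illustrative,
    not taken from our actual tooling."""
    name: str
    value_added_days: float  # hands-on working time, summed
    elapsed_days: float      # calendar working days, start to done

def flow_efficiency(item: Initiative) -> float:
    """Flow Efficiency = value-added time / total elapsed time."""
    return item.value_added_days / item.elapsed_days

# "Before": the twelve-week enhancement from the case study, using the
# modified (single-team) metric. "After": a hypothetical feature-team
# initiative whose elapsed time is back-derived from the 80% result.
before = Initiative("enhancement (component teams)", 10, 60)
after = Initiative("enhancement (feature team)", 10, 12.5)

print(f"{before.name}: {flow_efficiency(before):.1%}")  # 16.7%
print(f"{after.name}: {flow_efficiency(after):.1%}")    # 80.0%
```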

Of course, there were bumps along the way, but by all indicators, the program was very successful and Flow Efficiency was improved by a factor of 5. Across six different initiatives, average flow efficiency for the duration of the pilot was 80%. Even better, the teams ramped up to an average flow efficiency of 89% and this was done fairly quickly – an average of 1.3 sprints, or 2-3 weeks. 

Average team satisfaction increased by 30% over a period of six months, mostly because developers got out of their rut and learned new things. Subject matter expertise improved by 38% on an annualized basis.

Details of the methodology, practices, measurements, and learnings were presented in a white paper, “Winning the Concept to Cash Game with Feature Teams,” by Martina Ziegenfuss and me at the 2021 XP Conference.

Not unlike a stock investment disclaimer, actual results may vary and the variations may be substantial. But if you would like to deliver features to your customers five times faster, there is definitely value in considering an approach such as this.



An Experiment in Learning, Agile & Lean Startup Style

I always have a backlog of non-fiction books to read. Given the amount of free time that I have every day, I am guessing that it may be years before I get through them. In fact, the rate at which books get added to my backlog probably exceeds my learning velocity, creating an ever-increasing gap. It feels like a microcosm of Eddie Obeng’s “world after midnight.”

So what to do?

I am trying to increase my velocity by applying speed reading techniques. But so far, that is probably only closing a small percentage of the gap.

Iterative Learning

Then, after a bit of soul searching, I had an epiphany. Why do I feel the need to read and understand every single word on every single page? This runs counter to what we coach our teams to do: eliminate waste, document only what makes sense, work just in time, and think iteratively instead of only incrementally. The answer seemed to be that I don’t feel that I have really read a book unless I have read every word. So what? Am I trying to conquer the thing? It seems like a very egocentric point of view.

What if I were able to let go of the ego and try to read a book iteratively instead of incrementally? Is it even possible? Would it be effective? There are all sorts of ways to tell stories or build products—top-down, bottom-up, inside-out—each of which has its strong points. Sometimes it is most effective, for instance, to grab the user’s attention by first giving them a nugget that might logically belong in the middle of a narrative, and then providing the necessary foundation or filling in the gaps as needed. Could one apply the same process to learning from a book? I can imagine scanning through a book randomly, stopping at points that look interesting and digesting a bit—much like I used to do with encyclopedias as a kid. Or maybe first reviewing the TOC for areas of interest, jumping to those sections, absorbing a bit, and then searching for any context that was missing. This would be a completely different way to learn from a book. I couldn’t call it reading, and I don’t have a good term for it, other than a new kind of learning.

This led me to think a little more deeply about what I am trying to get out of reading: the learning aspect of it. What if I could scan a book in a tenth of the time that it takes to read it, but retain half of the content? Would that be an improvement? There seems to be some sort of formula that I am trying to maximize, like dL/dt = C·V·R: the rate of learning equals the “learn-worthy” content of the book, multiplied by the speed at which I scan it, multiplied by the percentage that I retain. Is the percentage retained equal to the percentage of value obtained? Do I get half the potential value of a book if I retain half as much? I could simply define R to be the percentage of value obtained, and my equation still holds. Something in the back of my mind says it is really sad to look at learning this way. Something else says I am on to something.
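Written out, reading C as learn-worthy content per page and V as pages scanned per unit time so that the units work out (the per-page reading is my own gloss):

```latex
% L = cumulative learning, t = time spent with the material
\frac{dL}{dt} = C \cdot V \cdot R
% C: "learn-worthy" content per page (density)
% V: pages scanned per unit time (speed)
% R: fraction of scanned content retained (or, equivalently, value obtained)
```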

Of course, there are all kinds of nuances. For example, some books build upon a foundation that must be well understood to get any value at all out of the later sections. For others, it may be easier to skip around. From some, you may be able to get value just by scanning the TOC or the subheadings, digesting the graphics, or reading the intro and summary of each chapter; from others, not so much. Hence, in a sense, different books have different learning profiles.

The Experiment

I was intrigued enough to attempt this on a book near the top of my backlog: Stephen Wolfram’s A New Kind of Science, a 1280-page tome that took him ten years to write. So I did it. I didn’t “read” it. I iterated through it and digested some of it. And I can honestly say that, for this particular book, I optimized my learning rate equation significantly. I can’t be sure of the total potential value the book would have for me were I to read it in its entirety, but from what I digested, I feel like I got about 50% of the value in about 5% of the time—a tenfold increase in my learning rate. And Stephen got his royalty. Yes, I do appreciate the irony of using a new kind of learning on A New Kind of Science. And letting go of the idea of conquering a book was kind of liberating.

So, what if we look at a particular learning objective the same way we manage a large project or program? I am imagining a vision or an objective like “I want to become learned in Digital Philosophy” (one of my particular interests). That vision results in the creation of a backlog of books, papers, blogs, etc. The larger of these (books) are epics and can be broken down into stories, like “Scan contents to get a sense of the material,” “Determine the core messages of the book by finding and reading the key points,” “Understand this author’s view on this particular topic,” and so on. Thinking about learning material this way opens up all kinds of new possibilities. For example, maybe there is another way to slice the backlog, such as by topic. If the most important thing to further my overall objective is to understand everything about cellular automata, I would assign higher priority to the stories related to that topic, even if they come from separate sources. My learning process then takes a different path, one that slices through the material non-linearly.
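As a sketch, here is what slicing such a backlog by topic rather than by source might look like. The entries, field names, and priorities are all made up for illustration:

```python
# A reading backlog treated like a product backlog: books are epics,
# broken down into stories and prioritized by topic, not by source.
backlog = [
    {"source": "A New Kind of Science", "topic": "cellular automata",
     "story": "Scan contents to get a sense of the material", "priority": 1},
    {"source": "A New Kind of Science", "topic": "computation",
     "story": "Find and read the key points on universality", "priority": 3},
    {"source": "a blog series", "topic": "cellular automata",
     "story": "Understand this author's view on the topic", "priority": 2},
]

# Slice by topic: pull the highest-priority cellular automata stories
# across all sources, instead of finishing one book at a time.
ca_stories = sorted(
    (s for s in backlog if s["topic"] == "cellular automata"),
    key=lambda s: s["priority"],
)
for s in ca_stories:
    print(f"{s['source']}: {s['story']}")
```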

Lean Startup Learning & Continuous Improvement

In fact, this all feels a bit to me like a lean startup approach to learning in that you can experiment with different chunks of material that may point you in different directions, depending on the outcome of the reading experiment. Having a finer backlog of reading components and being willing to let go of the need to conquer reading material might make possible a much faster path to an ultimate learning objective.

And so I am passing along this idea as an option for those who have a voracious desire to learn in this after-midnight world, but have a before-midnight backlog of reading material.



Intuition & Innovation in the Age of Uncertainty

“My [trading] decisions are really made using a combination of theory and instinct. If you like, you may call it intuition.” – George Soros

“The intellect has little to do on the road to discovery. There comes a leap in consciousness, call it intuition or what you will, and the solution comes to you, and you don’t know how or why.” – Albert Einstein

“The only real valuable thing is intuition.” – Albert Einstein

“Don’t let the noise of others’ opinions drown out your own inner voice. And most important, have the courage to follow your heart and intuition.” – Steve Jobs

Have you ever considered why it is that you decide some of the things that you do? Like how to divide your time across the multiple projects or activities that you have at work, how and when to discipline your kids, where to go and what to do on vacation, which car to buy?

The ridiculously slow way to figure these things out is to do an exhaustive analysis of all of the options, potential outcomes, and probabilities. This can be extremely difficult when the parameters of the analysis are constantly changing, as is often the case. Such analysis makes use of your conscious mind.

The other option is to use your subconscious mind and make a quick intuitive decision.


We who have been educated in the West, and especially those of us who received our training in engineering or the sciences, are conditioned to believe that “analysis” represents rigorous logical scientific thinking and “intuition” represents new-age claptrap. Analysis good, intuition silly.

This view is quite inaccurate.

Intuition Leads to Quick, Accurate Decisions

According to Gary Klein, ex-Marine, psychologist, and author of the book “The Power of Intuition: How to Use Your Gut Feelings to Make Better Decisions at Work,” 90% of the critical decisions that we make are made by intuition in any case. Intuition can actually be a far more accurate and certainly faster way to make an important decision. Here’s why…

The mind is often considered to be composed of two parts – conscious and subconscious. Admittedly, this division may be somewhat arbitrary, but it is also realistic.

The conscious mind is the part of the mind that deals with your current awareness (sensations, perceptions, feelings, fantasies, memories, etc.). Research shows that the information processing rate of the conscious mind is actually very low. In his book “Strangers to Ourselves: Discovering the Adaptive Unconscious,” Dr. Timothy Wilson of the University of Virginia estimates the conscious mind’s processing capacity to be only 40 bits per second. Tor Nørretranders, author of “The User Illusion,” estimates the rate to be even lower, at only 16 bits per second. As for the number of items the conscious mind can retain at one time, estimates vary from 4 to 7, with the lower number reported in a 2008 study by the National Academy of Sciences.

Contrast that with the subconscious mind, which is responsible for all sorts of things: autonomic functions, subliminal perception (all of that data streaming into your five sensory interfaces that you barely notice), implicit thought, implicit learning, automatic skills, association, implicit memory, and automatic processing. Much of this can be combined into what we consider “intuition.” Estimates of the information processing and storage capacity of the subconscious mind vary widely, but they are all orders of magnitude larger than their conscious counterparts. Dr. Bruce Lipton, in “The Biology of Belief,” notes that the processing rate is at least 20 Mbits/sec and perhaps as high as 400 Gbits/sec. Estimates of storage capacity run as high as 2.5 petabytes.
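To put those estimates side by side, a back-of-the-envelope comparison using the figures cited above (all of them rough, order-of-magnitude numbers):

```python
# Ratio of subconscious to conscious processing-rate estimates.
conscious_bps = 40               # Wilson's estimate (Nørretranders: 16)
subconscious_low_bps = 20e6      # Lipton, low end (20 Mbits/sec)
subconscious_high_bps = 400e9    # Lipton, high end (400 Gbits/sec)

print(f"low end:  {subconscious_low_bps / conscious_bps:,.0f}x faster")
print(f"high end: {subconscious_high_bps / conscious_bps:,.0f}x faster")
# low end:  500,000x faster
# high end: 10,000,000,000x faster
```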

Isn’t it interesting that the rigorous analysis we are so proud of is effectively done on a processing system that is excruciatingly slow and has little memory capacity, whereas intuition runs on a processing system that is blazingly fast and holds an unimaginable amount of data?

In fact, that’s what intuition is – the same analysis that you might consider doing consciously, but doing it instead with access to far more data, such as your entire wealth of experience, and the entire set of knowledge to which you have ever been exposed.

Innovation Is Fueled by Intuition

The importance of intuition only grows exponentially with every year that passes.  Here’s why…

Eddie Obeng is a Professor at the School of Entrepreneurship and Innovation, Henley Business School, in the UK. He gave a TED talk that nicely captured the essence of our times in terms of information overload. Figure 1 from that talk demonstrates what we all know and feel is happening to us:

[Figure 1: incoming information rate vs. human learning rate over time, from Eddie Obeng’s TED talk]

The horizontal axis is time, with “now” being all the way to the right. The vertical axis depicts information rate.

The green curve represents the rate at which we humans can absorb information, aka “learn.” It doesn’t change much over time because our biology stays pretty much the same. The red curve represents the rate at which information is coming at us.

Clearly, there was a time in the past when we had the luxury of taking the time necessary to absorb all of the information needed to understand the task or project at hand. If you are over 40, you probably remember working in such an environment.

At some point, however, the incoming data rate exceeded our capacity to absorb it: television news broadcasts with two or three rolling tickers, tabloids, zillions of web sites to scan, Facebook posts, tweets, texts, blogs, social networks, information repositories, big data, etc. In the workplace, projects typically have many dependencies on information from other teams, stakeholders, technologies, end users, and leadership, all of which are constantly changing.

It is easy to see that as time goes on, the ratio of unprocessed incoming information to human learning capacity grows exponentially. What this means is that there is increasingly more uncertainty in our world, because we just don’t have the ability to absorb the information needed to be “certain,” like we used to. Some call it “The Age of Uncertainty.” Some refer to the need to be “comfortable with ambiguity.”

This is a true paradigm shift. It demands entirely new ways of doing business, of structuring companies, of planning, of living. In my job, I help companies come to terms with these changes by implementing agile and lean processes, structures, and frameworks that make them more adaptable to a constantly changing environment. Such processes suit the organizational context in any case, given that organizations are complex systems (as opposed to “complicated” ones, in Cynefin or systems-theory parlance). But they are also the only kinds of processes that will be effective in this new environment, because they embrace sensing and responding to change instead of requiring rigorous analysis to establish a predictable plan.

We no longer have time to do the rigorous analysis necessary to make the multitude of decisions with which we are confronted on a daily basis. Instead, we increasingly need to rely on our intuition. But, while we often concentrate our energies on improving specific technical or leadership skills, we rarely consider the idea that perhaps we can make better use of that powerful subconscious mind apparatus by improving the effectiveness of our intuition. It seems to me that this is a significantly missed opportunity, one that deserves more and more of our attention with every passing year.

Intuition Can Be Developed

It sounds as if intuition is a skill that would be very useful to hone, if possible. So how do we develop that capability? Here are some ideas:

  • Have positive intent and an open mind – The first step to any new idea is to accept it. Think of it as “greasing the learning skids.”
  • Put yourself in situations where you gain more experience about the desired subject(s) – Intuition works best when you have a lot of experiences from which to draw. If you continue to do the same thing over and over, you are not building new experiences.  Therefore, the more you depart from the norm and from your comfort zone, and develop experiences in your area of interest, the more substantial your “intuitive database.”
  • Meditate / develop point-focus – Meditation develops all sorts of interesting personal capabilities, not least of which is an improved capacity to intuit.
  • Go with the first thing that comes to mind – Effectively, you are practicing intuition by doing this. In time, the practice will lead to more effective use of the capability.
  • Notice impressions, connections, coincidences (a journal or buddy may help) – This reinforces the intuitive pathways of the mind. Neuroplasticity is a well-studied phenomenon whereby your thoughts develop incremental neural connections. Reinforcing the positive ones makes them more available for use.
  • 2-column exercises – Another mindfulness technique, these exercises help raise your awareness of your mental processes, including your subconscious.
  • Visualize success – Think of this as applying the idea of neuroplasticity to build a set of success-oriented neural pathways in your mind.
  • Follow your path – Following a path that feels right to you does two things: First, it puts you into increasingly rewarding situations, generating positive feedback, which helps with all of the above practices. Second, it is simply practicing intuition, but specifically on what your subconscious mind knows are your best decisions.

I am doing many of these practices and finding them to be very valuable.