On Effect Mapping and Pirate Metrics

During the Specification by Example training I talked about recently, Gojko Adzic introduced me to Effect Mapping. He’s writing a more extensive booklet on the subject, of which he’s released a beta here. I think this is an excellent tool for exploring goals, opportunities and possible features. It can be used to generate a backlog of features, as a way to explore possible business hypotheses, and perhaps even as a light-weight way to do strategic management of a company.

But let’s start with a short description (see Gojko’s site or beta booklet for the longer one) of what effect mapping is.

Effect Mapping basics

The basic structure of an effect map is that of a structured Mind-Map. A mind-map is a somewhat hierarchical way to note down ideas related to a central theme.

A Mind Map

The effect map is a mind-map with a specific structure. The different levels of the mind-map are based around the answers to four questions:

  • Why? (The Goal)
  • Who? (Who can play a role in reaching that goal, or in preventing it?)
  • How? (In what way can they help? The business activities)
  • What? (What are the concrete software features to build, or non-software actions to take?)

Additional levels can be specific User Stories, tasks, or actions, but that depends more on how you want to organise your backlog. The important thing is that this provides an uninterrupted flow from high-level goal or vision to concrete work.
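
To give a feel for that Why → Who → How → What nesting, here is a small sketch of an effect map fragment captured as nested data in Python. The goal, actors, activities and features are all invented for illustration, not taken from Gojko’s material:

```python
# A made-up effect map fragment, captured as nested data:
# why -> who -> how -> what.
effect_map = {
    "why": "1,000,000 active players within six months",      # the measurable goal
    "who": {
        "existing players": {                                 # actor who can help (or hinder)
            "invite their friends": [                         # how: a business activity
                "one-click invite from the end-of-game screen",   # what: concrete features
                "reward both inviter and invitee with credits",
            ],
        },
        "games journalists": {
            "write about the game": [
                "press kit page",
                "embeddable gameplay clips",
            ],
        },
    },
}

for actor, activities in effect_map["who"].items():
    for activity, features in activities.items():
        print(f"{actor} -> {activity}: {len(features)} candidate features")
```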

In this way, effect maps can provide one of the important missing steps in the Agile software development world: how to determine what features provide value supporting specific business goals.

The goal used in the centre of the effect map is supposed to be a measurable goal: we need to know unambiguously when this goal has been reached! Gojko gives a nice overview of the way this can be done, using a lighter-weight version of the approach Tom Gilb prescribes for making goals measurable. This involves the scale (the thing we’re measuring), meter (the way we’ll measure it), benchmark (the current state), constraint (the minimum acceptable value, the break-even point) and target (what we want it to be).
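
To make that a bit more concrete, here is a minimal sketch of such a goal definition as data. The field names follow the scale/meter/benchmark/constraint/target terminology above; the values are my own invented example, not Gojko’s:

```python
from dataclasses import dataclass

@dataclass
class MeasurableGoal:
    scale: str        # the thing we're measuring
    meter: str        # the way we'll measure it
    benchmark: float  # the current state
    constraint: float # minimum acceptable value (break-even point)
    target: float     # what we want it to be

# Invented example values, purely for illustration.
goal = MeasurableGoal(
    scale="monthly active players",
    meter="unique logins per calendar month, taken from the analytics database",
    benchmark=250_000,
    constraint=400_000,
    target=1_000_000,
)
print(goal)
```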

Effect mapping is not just about what ends up on the diagram, it’s also the process of generating the map. This is, of course, a collaborative approach: getting the involved people together, and creating and discussing the goal, and the ways to get there. It is important that business people with decision power are present, as well as subject matter experts and technical people who know possible solution directions. And yes, they do have time for this. Using diverge and merge, as discussed in my previous post, can be very useful again. There’s more in the booklet, about iterating, prioritising, etc., but it’s not relevant to the rest of my post. So just go and read the booklet, already.

Our Effect Map

In his booklet, Gojko also links this process to the Lean Startup process of customer development. I think this is a great combination, but I would like to see some tweaks in the type of measurements we use for goals in that case.

Actionable Metrics for Pirates

In his book The Lean Startup, Eric Ries talks extensively about the importance of Actionable Metrics, as opposed to Vanity Metrics. An actionable metric is one that ties specific and repeatable actions to observed results. A Vanity Metric is usually a more generic metric (such as total number of hits on a website, or total number of customers), that is not tied to specific (let alone repeatable) actions. There can be many reasons why those numbers change, and isolating the reason is one part of making metrics actionable.

Dave McClure uses the term ‘Pirate Metrics‘ to talk about the most important metrics he sees for organisations:

A – Acquisition – User is directed to your site;

A – Activation – User signs up or is otherwise engaged;

R – Retention – User keeps coming back, i.e., is engaged over time;

R – Referral – User invites others;

R – Revenue – User pays or is otherwise monetized;

These are also the familiar ‘funnel’ metrics, and the above link to Ash Maurya’s site has much more background on them. When using these metrics, it is recommended to do cohort testing, so that you can see the different results for different groups of users (again, see Ash Maurya’s site). Doing this in such a way that the source of the (new) users is trackable allows you to identify the best ways of increasing the number of users, without deluding yourself by extrapolating from great growth figures that are the result of a single marketing action.
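
As a minimal sketch of what cohort testing on these funnel metrics looks like in practice, here is some Python that computes per-cohort conversion rates from a toy event log. The event format and field names are my own assumptions, not any particular analytics tool’s API:

```python
from collections import defaultdict

# Toy event log: (user_id, signup cohort, funnel stage reached).
events = [
    ("u1", "2012-W20", "acquisition"),
    ("u1", "2012-W20", "activation"),
    ("u2", "2012-W20", "acquisition"),
    ("u3", "2012-W21", "acquisition"),
    ("u3", "2012-W21", "activation"),
    ("u3", "2012-W21", "retention"),
]

STAGES = ["acquisition", "activation", "retention", "referral", "revenue"]

def cohort_funnel(events):
    """Per cohort, the fraction of acquired users reaching each AARRR stage."""
    users_per_stage = defaultdict(lambda: defaultdict(set))
    for user, cohort, stage in events:
        users_per_stage[cohort][stage].add(user)

    funnel = {}
    for cohort, by_stage in users_per_stage.items():
        acquired = len(by_stage["acquisition"]) or 1  # avoid division by zero
        funnel[cohort] = {s: len(by_stage[s]) / acquired for s in STAGES}
    return funnel

for cohort, rates in sorted(cohort_funnel(events).items()):
    print(cohort, {stage: round(rate, 2) for stage, rate in rates.items()})
```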

This is probably the only spot where Gojko’s booklet could use some tuning. The Gilbian metrics he uses for his example are functional metrics (SMART, and all that), but not the type of actionable metrics that would fit into the lean startup mold. In this example, we’re talking about reaching a certain number of players (one million), in a certain space of time (6 months), while keeping costs and retention rates within certain limits. Gojko does of course explain how to do this iteratively, taking a partial goal (fewer players than in the end goal) and checking at a defined milestone whether the goal was reached. And because we’ve only done one ‘branch’ of the effect map, we do have a specific action to link to the result.

If we were to re-cast this more along the lines of our pirate metrics, we could rephrase the goal as increasing the Acquisition and Activation rates. To be clear: this is a whole other goal! This goal is about a change that will ensure structural growth in the longer term. The goal of 1M extra users could (perhaps) be reached by increasing marketing spending (note that only operational costs are taken into account in the original example). In that case, the goal would be reached, but given a typical retention period of (for example) 6 months, there would be a corresponding exodus of users after that time. If the company had reacted based on total number of users, this could lead to incorrect actions such as hiring extra people, etc.

If the CEO comes down with the message ‘We want 1 million users!’, and it turns out he wants that in about 6 months time, we can then say, “Ok, that’s about 5500 new users per day, or a growth rate of 1.6% per day”. Then we can start creating and testing hypotheses in much smaller steps than those three or six months. What’s more, by using these metrics (and goals) it should become possible to use the same metrics further out into the effect map.
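
For what it’s worth, the arithmetic behind those numbers, as a small Python sketch. The linear figure follows directly from the goal; the compound daily rate depends on the user base you start from, and the 60,000 base below is purely my own assumption for illustration:

```python
# Back-of-the-envelope arithmetic behind "1 million users in about 6 months".
target_extra_users = 1_000_000
days = 180  # roughly six months

print(f"linear: ~{target_extra_users / days:.0f} new users per day")  # ~5,555

# The equivalent compound daily growth rate depends on the base you start
# from, which isn't given here; 60,000 is an assumed base for illustration.
starting_base = 60_000
daily_rate = ((starting_base + target_extra_users) / starting_base) ** (1 / days) - 1
print(f"compound: ~{daily_rate:.1%} per day from a base of {starting_base:,}")
```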

And if this is something at a larger scale, teams can take some of the higher level hypotheses / business activities and use their own domain knowledge to devise more experiments to find a way to reach those goals. Since churn is included in the figure, this also allows experiments based on increasing retention rates. And since we’re doing cohort testing on pirate metrics, we can know day-to-day and week-to-week whether what we’re doing has the expected results. This extends the set-based design paradigm used in effect mapping to a broader organisation.

Strategy and organisational structure

So by using effect mapping, we can make the relation between high-level goals, stakeholders and intermediate-level goals visible, using a collaborative process involving different parts and levels of the organisation. By using a consistent set of (value / end-result focused) metrics that can be applied in both the short and longer term, throughout the organisation, we can enable all levels of the organisation to apply their knowledge and skills to reach those goals. This, in turn, allows for more self-organisation (and experimentation) at all levels…

I recently came across the system of Hoshin Kanri (via Bob Marshall). This has some remarkable similarities to what I’ve been talking about. It’s also a method to ensure policy/goals can be distributed throughout the organisation, with involvement of multiple levels and stakeholders at each step. I’ve not studied it extensively, but to me it does feel like a strictly hierarchical system, and one that is mostly used in large companies with a fairly slow (yearly and half-yearly) cycle time. It is used by Toyota, and is part of Total Quality Control, which is supposed to be “designed to use the collective thinking power of all employees to make their organization the best in its field.” It’s nice to see that everything has already been thought of, and we’re just repeating the progress of the past :-)

The difference with effect mapping is its light-weight focus, and the ease with which it can be used in less hierarchical organisations. In fact, I think such a system might well be a prerequisite for the type of organisations we’re talking about in light of the Stoos network: “learning networks of individuals creating value”. My recent proposal for a session at the Stoos Stampede was precisely about finding out how we can link an organisation’s vision/mission to goals specific enough that teams can work towards them independently, but open enough that they would not suffocate and entrap those teams. I think this might be one solution to that problem.

Stoos Stampede

Management Innovation, ca. 1972

Yesterday, after my brother’s 47th birthday, I was talking with my father. My father is 79, and he has had an interesting professional life. He started out as a Catholic priest but, as you could guess from the fact of my existence, at some point figured out that this was not a sustainable career path for him.

While talking, my father touched upon the subject of work, and the importance of working for the right reasons. He discussed people that had found a work/life balance that worked for them, by not focusing on financial rewards, but on the content of the work, and the need to spend time with their families. Of course he sees that many people, including himself at one point, have manoeuvred themselves into positions where making those choices has become very difficult, if not impossible. Certainly many people now are in positions where two full-time salaries are needed to be able to keep paying the mortgage. But focusing on intrinsic motivation for work is very important for your enjoyment of work, and life.

We also talked about management a bit. In 1972, also the year of my own vintage, my father became director of the social services department (‘Sociale Zaken’) of the city of Haarlem. This organisation of around 250 people was then organised in strict silos, where social workers, legal people, administrative workers and other departments worked separately, and with little understanding of each other’s specific challenges. Clients (usually people who were waiting to hear whether they would be getting social support) regularly had to wait until each of those departments had had their say, often adding up to a four-month wait before their application was either approved or denied. Meanwhile, social workers were reporting their conclusions back to those clients in language that was very hard to understand. All of this resulted in very unhappy clients. And thus in the people that had direct contact with those clients not always being treated in the kindest ways.

My father apparently discussed the use of understandable language extensively with the social workers, at one point demonstrating the problem of jargon by slipping back into some Latin phrases:

Quidquid recipitur ad modum recipientis recipitur – “Whatever is received is received according to the mode of the receiver.”

I have, having failed spectacularly at the one year of Latin I had in school, no idea if this was the actual phrase he used. But it does seem apt.

He also instigated a reorganisation of the service. The reorganisation moved from the siloed system to one where different geographical parts of the city were going to be serviced by multi-functional teams, consisting of social workers, legal experts, administrative workers and directly client-facing people. The goal was to reduce time wasted with hand-overs, build understanding between people with different areas of expertise, and create more understanding for the clients’ predicaments.

This move was not welcomed by the employees. So much so that unions were involved, strikes threatened, and silo walls reinforced. In the end, it happened anyway. The arguments were solid, and even with unions involved, there was a rather clear power structure in place. And eventually, when the dust had settled down, the results were very positive indeed. Client satisfaction was up and response times were down (I think it went from roughly four months to two weeks). But just as important, internal strife was removed to a large extent, employees were picking up skills (and work!) outside of their area of expertise, and employee satisfaction was considerably better.

Now for me, this is not surprising. It’s not surprising because I know and have lived the results of similar ways of working in Agile teams.

Or maybe I’ve been doing that because, in the end, some values are instilled at an early age, and I had a great example.

The Strategic Inflection Point as a Special Case Pivot

I’ve noticed that I very regularly get people visiting my blog through a Google search for the term ‘Strategic Inflection Point’. Since that term has some very direct connections to other concepts I’ve been learning about, I thought I’d give some detail on Strategic Inflection Points, and their relation to the Lean Startup ideas of Pivots and Pirate Metrics.

I once reported on a presentation by Mary Poppendieck at the Lean and Kanban conference in Antwerp in 2010, where she mentions Andy Grove‘s book ‘Only the Paranoid Survive‘. The book deals with the way Grove ran Intel, and includes the concept of the Strategic Inflection Point, also described by Grove in his 1998 speech at the Academy of Management.

Strategic Inflection Point, copied from Mary's sheets

Grove describes the Inflection Point:

A Strategic Inflection Point is that which causes you to make a fundamental change in business strategy.

Anyone who’s been keeping up with the discussion on the Lean Start-up will see the similarity with Eric Ries’ concept of the Pivot:

“A structured course correction designed to test a new fundamental hypothesis about the product, strategy, and engine of growth.” – Eric Ries, The Lean Startup

The language is a bit different, of course, but there are obvious places of overlap. Grove’s Inflection Point has a main emphasis on external factors changing, necessitating change from the side of the company.

A major change due to introduction of new technologies. A major change due to the introduction of a different regulatory environment. The major change can be simply a change in the customers’ values, a change in what customers prefer.

A Pivot can be seen as a reaction to encountering a Strategic Inflection Point. Pivots also happen in search of the correct strategy and product/market fit, which makes them more proactive. Ries’ structured approach to identifying the need for a pivot seems to be exactly what Grove is looking for.

The biggest difficulty with Strategic Inflection Points is telling one from the many changes that impinge on you in the business. How do you know if a change is just a garden variety change or qualified to be this monumental, catastrophic change category that we call an Inflection Point? I could never come up with a particularly satisfactory answer to that question. And I’m quite sure that if I could, I wouldn’t be talking about them. A set of principles along those lines would be extremely helpful as a competitive weapon because we deal with changes every day.

Ries’ innovation accounting provides the tools necessary to identify that Inflection Point. Pirate Metrics, as coined by Dave McClure, can provide an early warning that the strategy for a product is not, or no longer, working. And there are some different variants possible for different market situations.

Innovation accounting is the structured approach of doing cohort testing to determine the viability of a product’s engine of growth. This is not specific to a start-up. You can track Acquisition, Activation, Retention, Referral and Revenue for any product, whether in a small start-up trying to find the right product and sales model or an existing company that needs to track the health of existing products and models.

So if you’re worried about running into a Strategic Inflection Point for your business, this is what you do: start keeping (actionable) metrics on how your products are doing, what the growth rates are for their engines of growth, make those numbers clear and visible for everyone in your company, from the C-suite to the janitors, and if you do see a need to pivot, do so with clearly defined hypotheses, and check the results.


Turning it up to 11

It’s odd how I’ve been unable to be very consistent in my subject matter for this blog. I tend to hop around, going from very technical subjects to very organisational ones. Some might see this as lacking focus. Maybe that’s true. I’ve never been able to separate execution from organisation and vision very well. To me they seem intrinsically linked. It’s comforting to me that even such luminaries as Kent Beck also seem to see things in this light.

If I look at my bifurcated (tri-? n-?) interests, I see a striking resemblance in the states of technological, managerial and commercial maturity in the world. In all of these areas, the state of affairs is abysmal. In all three areas, we seem to have recognised that this is the case. In all three, though, most people performing those roles are so used to the current state that only rarely do they see that a different approach could bring improvement. Could turn their work ‘up to 11’. There are some differences, though.

Technology

On the technology side, we’ve pretty much identified what works, and what doesn’t. Basically, XP got things right. Others before that also hit the right spot, but we know a mature team sticking to XP practices will not mess things up beyond salvation. If we compare that approach to what one finds in the run-of-the-mill waterfall situation, the differences are so great that there is truly no comparison. There are other questions still at least partially open, but most of those are concerned with scale, organisation, and finding out what should be built. And thus belong in the other categories. The main challenge is one of education. And, granted, a bit of proselytising.

Commercial

More commercial questions are less clear-cut, at least for me. In my work I’ve very rarely seen commercial, product development and marketing decisions taken with anything resembling a structured approach or any kind of rigour. A business case, if one is available at all, is often only superficial, and almost never comes with any defined metrics and decision moments. The Lean Start-up movement is the only place I’ve seen that is trying to improve that. Taking this approach out of the start-up and into all the product development and marketing departments in the world is going to take a while, but it will happen. If only because companies capable of doing that will completely out-perform the ones that don’t.

I don’t think the case here is as clear cut as on the technology side, but we have a start. The principles of the Lean Start-up are based on the same ideas as Agile development: know what you want the result to be (validated learning) and iterate using short feedback loops. What to do, exactly, in those feedback loops is known for some types of learning, in some situations, but we’re still working on expanding our knowledge and skills in this area.

Management

As the solutions for the commercial and technical sides of things are rooted in experimentation and short feedback cycles, one might assume that the same would be true for the management side of things. And it’s true that those techniques have value in management in many situations. Many of the ideas on management are based on feedback cycles: Lean/Deming’s PDCA is one, for instance, and Cynefin‘s way of dealing with systems in the complex domain is another. But we do seem to have many different ideas about how management should be done, how organisations should be structured and what gives people the best environment to work in.

One place where some of these ideas have come together is the Stoos Network. It’s interesting because of the different backgrounds of the people involved: Agile, Beyond Budgeting, Radical Management, Industry Leaders. Their initial get-together this year resulted in a shared vision, with again an emphasis on learning.

“Organizations can become learning networks of individuals creating value, and the role of leaders should include the stewardship of the living rather than the management of the machine.” — Stoos Communique

This clearly expresses some of the shared values of the Stoos people, but still leaves quite a lot to the imagination. The people and ideas involved are interesting enough that I’ve volunteered to help organise one of the follow-up meetings, the ‘Stoos Stampede’, which takes place in Amsterdam on 6 and 7 July.

Next to Stoos, as I said before, there are many ideas on how to change management. Lean has had an impact, but though the Toyota Way certainly does talk about people and how to support them in an organisation, this is not the prime focus of most Lean implementations. CALM has started talking about combining Complexity, Agile and Lean ideas, but so far has also not posted any results.  We’re still a bit lost at sea, here.

So what would we need from a new management philosophy?

  • We’d need to know how to structure an organisation. Stoos clearly think the current semi-hierarchical default is not workable for the future, or at the very least severely suboptimal. But what do ‘learning networks’ look like? And how do we grow them?
  • We’d need to know how to provide the organisation with a purpose. A Mission, a Vision, a Goal. Whatever you want to call it. Most organisations do have some sort of mission statement, but it is usually so far removed from the everyday practice of everyone working within the organisation that it might as well be absent.
  • We’d need to know how to connect that purpose to the rest of the organisation. How do we link the work of everyone in the organisation to its stated purpose? If the mission is specific this should be possible. But if we connect the work too tightly, it could be stifling.
  • We’d need to know how to connect the organisation with its customers, its suppliers, its partners. This would be different out of necessity, as the structure of the organisation itself is different. It would also be different out of philosophy, as those relations take on a different meaning when the non-monetary goals of the organisation rise in importance.
  • We’d need to know how to align such organisations with the demands of the outside marketplace and governance. If the organisation is more oriented towards longer term viability and purposeful behaviour, this might have a good long term effect on profitability, but will certainly in the short term have a different financial behaviour. And budgeting and bookkeeping are areas that need very specific attention with an eye on the external rules these subjects need to comply with.

But apart from what new management would do to the idea of an organisation, there are also questions related more to the question of how to get there from here. Why would current managers want to change their organisations? Why would they want to change so drastically? There are plenty of reasons, but would they be convincing to the current CxO? What would they need to learn to be able to execute on such a vision? Will everyone enjoy working in these kinds of more empowering organisations, or will some people prefer something more hierarchical?

All of these things I want to know. Some of them we’ll discuss during the Stoos Stampede (propose a subject to discuss!), but personally I think we’re still at the very earliest stages of this particular change. In the meantime, we do have a few good examples, and some patterns that seem to work, and I’m going to try and get a few more organisations turned up to 11.

Estimates and Commitments – The Hard Truth

My esteemed colleague, Ciarán ÓNéill just posted a nice and considered discussion on estimation, velocity and cycle time. I, however, do not plan on being so considered, or considerate.

You see, too many people are bent under the crushing weight of living up to estimates. Even granting that they provided these estimates themselves to begin with, the continuing focus on this fragile, incomplete, numerical slice of their work is having a seriously detrimental effect on our industry.

I can’t count how many times I’ve seen this playing out, in different companies. The poor product manager, even if he’s sometimes called Product Owner, has made his business case; calculated the expected ROI; probably even spent lots of time plotting the Rate of Return over time; thinking of ways to track the relevant figures closely so as to be able to adjust the approach over time; and shared these ideas with the broadest set of people within the company to get as much feedback as possible.

He’s calculated expected additional visitors to the website, impact on conversion and retention rates, and put in place the instruments to verify those numbers. He’s had his numbers checked by his peers, and carefully explained them in detail to the development team he’s expecting the work to be done by. He’s done everything he can to make sure his estimates are as accurate as he can make them.

And then he’s done the additional work to make sure he can not only track whether he’s on-track, but also proposed different scenarios on how to react if they’re not. He’s done all that for all the different features, and new products, that he deals with, and used the expected returns to prioritise work in the way that profits the company most.

Now he knows, our intrepid product manager, that he’s dealing with a complex issue! He’s been careful to include confidence margins, bandwidths of possible returns, and comments stressing uncertainties and risk factors. He’s done everything right.

So what happens after a few sprints, when the initial (bare minimum) features have seen the light of day and basked in the glory of unadulterated customer scorn? He’s still within the bandwidth of expected returns, but he can see that he’ll probably end up on the lower end of the range he expected. At the iteration review he shows the current results to the development team (there’s a few managers in the audience as well), shows how things are progressing, where he sees ways to change the feature to get more customers interested, and how the priorities need to be adjusted to make this happen.

Some of the feature set, sometimes a lot of it, needs to be cut. Including some functionality that has already been released! After all, no-one is actually using that. And to make it a success, the team will probably have more work to do than was originally expected. In the end, though, they still have a very good chance of making this a success.

And that’s where the trouble hits home. The poor guy is lambasted for not being able to stick to his estimates. He came up with those estimates himself, he should make them work. After all, the numbers added-up, didn’t they? Everyone else is doing their part, and if he’d just put a little effort into it, he really should be able to stick to the plan, and ensure that customers start using the new feature as was planned. Perhaps adding more advertising for this feature is what is needed, and budget should be found for some additional effort on that part. Or maybe, since he’s the one not delivering, he should do that advertising work himself over the weekend.

It’s cruel. We, as an industry, should stop holding our product managers to standards that are impossible to reach. We should accept the inherent complexity of the problem, and help them tackle it by planning, tracking, and adjusting the plan as necessary. In fact, if we make the steps and adjustments small and fast enough, we could possibly relieve them of a large part of the yoke of budgeting, automate much of the tracking of adoption, and make them fully recognised, adult members of the organisation!

XP is Classic Rock

A while back I had a little fun comparing Agile to Rock’n’Roll. It’s still one of my favourite posts, and after my recent talk on the benefits of TDD, I got the idea that the best follow-up on that is something about the XP practices.

Test Driven Development with Bonnie Raitt

The first artist that came up was Bonnie Raitt. This is mostly because Ron Jeffries has mentioned her a few times on the Scrum Development mailing list, and since that picture above is from his site, I figure I owe it to him. Oh, and it’s pretty good music!

She sings ‘I Will Not Be Broken’, which is as good a summary of Test First development as one could wish for. And if you take into account lines such as ‘But I know where I’m not going’, and ‘Pull me round; Push me to the limit’, then it’s perfectly clear we’re going through that TDD process cycle of Red, Green, Refactor in as small steps as possible. Isn’t it?

Pair Programming with Aerosmith / The Beatles

I already mentioned ‘Come Together‘ in the last post, and to be honest, I can’t think of a better Pair Programming song. It does bring with it some of the oft-heard objections to pairing, with ‘Hold you in his arms till you can feel his disease‘ being a succinct summary. These things have to be overcome, but you’ll end up with a classic that is covered by practically everyone. I’m going for the Aerosmith version, as their guitar work shows the advantages of having two great practitioners working together…

A great runner up was ‘Let Me Share The Ride‘, by The Black Crowes. All about how sharing the ride can be done with someone who isn’t a burden…

Refactoring with Eric Clapton

So how about Refactoring? Well, refactoring is all about removing duplication. There are many songs about duplicitous women and men, talking about how they’ve been done wrong, but apart from having a completely different meaning, I’d also have to save those for a special post about management practices. A much more suitable song is the classic ‘Double Trouble‘ blues song, which you can see below in a marvellous version by Eric Clapton together with Steve Winwood. This song fits so well because it reminds the young programmer of the dangers that duplication in code brings. ‘I have no job, laid off, and I’m having Double Trouble’

Simple Design with The Ramones / The Doors

Simple Design is not simple to do. We all have a strong tendency to try to take into account all kind of possible future scenarios when writing code. So the advice that comes out of the The Doors song ‘Take it as it comes’ is very apt. I’ve selected a cover version by The Ramones here, but the central message is the same: “Take it easy baby, take it as it comes. Don’t move too fast if you want your love to last”. Of course, read ‘code’ for ‘love’  there, but that should be automatic for any kind of real Craftsman…

Collective Code Ownership with The Red Hot Chili Peppers

Moving on from there we go on to the circle that deals with wider team alignment. It would be easy to slip in the ‘Internationale‘ here, but that really doesn’t do this practice justice. Another thought was ‘You Don’t Own Me’ by Dusty Springfield, but it really didn’t fit into the classic rock theme, and is much more about not being allowed to access the… object under discussion.
The answer was, of course, found with the Red Hot Chili Peppers song ‘Give It Away‘! Not only do they  know that sharing the code makes everyone wiser: “Realize I don’t want to be a miser;
Confide with sly you’ll be the wiser”, but they know that this practice is crucial to working Agile:
Lucky me swimmin’ in my ability
Dancin’ down on life with agility

Continuous Integration with Bruce Springsteen

Of course, you can’t have collective ownership without a good Continuous Integration system. This one is easy, ’cause that code is ‘Born to Run’!

Customer Tests with Led Zeppelin

Working closely with your customer is the best way to ensure that you’re building the right thing. And having the customer closely involved with defining the acceptance tests is the answer to avoiding the dreaded ‘Communication Breakdown’ that has left so many projects in shambles:
Communication breakdown, it’s always the same
Havin’ a nervous breakdown, a-drive me insane

Sustainable Pace with Queen

People who know me know I can’t resist a good Queen song. This one emphasises precisely the opposite of what we want, but a negative test case can be very effective at communicating the desired functionality, can’t it? With ‘The Show Must Go On‘, we are confronted with all the dysfunction we find when teams push too hard to deliver impossible projects. Working in empty offices after everyone else has gone home, trying to find that last bug before it’s ready for production:
Empty spaces – what are we living for
Abandoned places – I guess we know the score
On and on, does anybody know what we are looking for…
The classical heroic programmer, working as an unsung (until now!) hero:
Another hero, another mindless crime
Behind the curtain, in the pantomime
Hold the line, does anybody want to take it anymore
 
And that’s it, for this post. I really wanted to get into the XP Metaphor practice as well, but it ended up with me getting headaches trying to understand the hidden meanings of songs like Stairway to Heaven, and Hotel California. Better not go there…

Success

I recently wrote here about the benefits of failure. One of my recent failures reminded me about the importance of success. I thought that to be nicely circular enough to warrant a new post!

You see, while it’s important to embrace failure – how else are you going to learn? – it is just as important that you don’t set yourself up for failure too often.

One very popular way that agile teams do set themselves up for failure is by taking in too much work in a sprint. True, this wouldn’t be too bad if failure was better accepted: we’d just recognise we were not going to make it, and drop some stories from the sprint. And learn to take in less work for the next sprint. But this is not what happens. What happens is that we try to succeed anyway. And that we don’t stick to our own rules of work when doing so. So we work late (making the whole concept of velocity useless, a way I missed in my previous list of ways to do that), or we reduce quality. Or both, usually.

But here I am getting stuck on failure again. We were talking about success!

When a team is starting out with agile, they are bound to run into many issues. The short feedback cycles ensure that they’ll be confronted with all kinds of ways in which their current process is suboptimal. This is not always a nice experience and if it is not mitigated by successes in improving matters, a team can become discouraged, and let their agile experiment die off.

In the recent situation I was referring to, this is what happened. The team had quite a good grasp of what was wrong in their process. In fact, during our initial ‘Scrum Introduction’ workshop some clear issues came up. We always do such a workshop in the form of an extended retrospective, interlaced with some theory and games. In that retrospective part, one of the top issues that came up was that of team stability. Teams were reconfigured for every project, but people were also regularly reassigned in a running project. Some further questioning quickly unearthed frequent quality issues in both requirements and code, causing people to be needed for projects in trouble after release or in ‘user acceptance testing’.

Note: UAT is not supposed to be a process where your users are the first people to test your code!

So, knowing that the underlying issue was one of quality, we started by focusing on quality. Can’t go wrong, right? But the feedback loop from the quality issues to the ‘reassignment’ issue was very long. Also, since the reassignment was to other teams that were already in trouble, the improvements we could make had no possible influence on the stability of the team. And so, when during a planning meeting for the third sprint the third person was ‘temporarily’ moved to another team, we figured out that we had set this team up to fail.

So what should we have done? ‘We’ being me as a coach, or the team involved in a – management approved – change process. We should have gone to the company management before starting any sprints (so right after that retro/workshop) and asked for their support, guaranteeing that this team would be left intact for the duration of the project. We should at the same time have advised working on the widespread quality issues.  I should add that it would have been exceedingly unlikely that that broader support would have been given, but then at least it would have saved everyone some work and more importantly, disappointment.

The lesson here is that when you find your impediments, you should make them visible to the wider environment, and make sure that you actively work to remove them in as short a time as is possible. This most definitely includes involving management, and will underline their support for the agile transition in progress. Not doing that is denying the team a chance to experience success. And that’s a great way to sabotage your change process.

Technical Excellence: Why you should TDD!

Last Thursday, January 19, I gave a short talk at ArrowsGroup’s Agile Evangelists event in Amsterdam. Michiel de Vries was the other speaker, talking about the role of Trust in Agile adoptions. On my recommendation the organisers had changed the format of the evening to include two Open Space Technology sessions, right after each 20 minute talk, with the subject of the talk as its theme. This worked very well, and we had some lively discussions going on after both talks, with the 50 or so attendees talking in four or five groups. I’m happy it worked out, as it was the first time I facilitated an Open Space.

My own talk dealt with Technical Excellence, and in particular with Test Driven Development, and why it’s a good idea to use it. My talk was heavily inspired by James Grenning‘s closing keynote at the Scrum Gathering in London last year. He even allowed me to use some of his sheets, which were very important in getting the argument across. I’ll go through the main points of my talk below.

For me the big challenge was to manage to fit the most important parts of what I wanted to say within the 20 minutes I had available to me. As in software development, it turns out presentation writing is about making the result short, clear, and without duplication. The first attempt at this presentation took about 50 minutes to tell, and subsequent versions got shorter and shorter…

The complete set of slides is here. And this is how the story went:

I quickly introduced the subject by talking about Agile’s rise in popularity ever since the term was introduced ten years ago (and before, really, for the separate methods that already existed). About 35% of companies reported using some form of Agile early last year, in a Forrester report. Most of those companies are showing positive effects of this adoption. No matter what you think of the quality of investigation of the Standish Report, for their internally consistent measure of project success the improvements over the last ten years have been distinct.

That’s not the whole story, though. Many teams have been finding that their initial success hasn’t lasted. Completely apart from any other difficulties in getting Agile accepted in an organisation, and there are plenty of others, this particular one is clearly recognisable.

Development slows down. Usually this is due to a lot of defects coming out in production, but also because of an increasing fragility of the codebase, which means that any change you make can (and will…) have unforeseen side-effects. One very frequent indicator that this is happening in your team is an increase in requests for ‘refactoring’ stories from the team. By this they usually mean ‘rework’ stories, where major changes (‘clean-up’) are needed for them to work more comfortably, and productively, with the code. In this situation the term ‘Technical Debt’ also becomes a more familiar part of the vocabulary.

And that is why it is time to talk about technical excellence. And why the people who came up with this whole Agile thing were saying last year that encouraging technical excellence is the top priority! OK, they said ‘demand’, but I have less clout than those guys… They said other things, but this was a 20min. presentation, which is already too short for even this one. It is on top, though.

I further emphasised the need with a quote by Jeff Sutherland:

“14 percent, are doing extreme programming practices inside the SCRUM, and there is where we see the fastest teams: using the SCRUM management practice with the XP engineering practices inside.” – Jeff Sutherland, 2011

Since Jeff was nice enough to mention XP, Extreme Programming, this allows us to take a look at what those XP Engineering practices are.

We’re mainly talking about the inner circle in the picture, since those are the practices that are most directly related to the act of creating code. The second circle of Hell^H^H^H^H XP is more concerned with the coordination between developers, while the outer ring deals with coordination and cooperation with the environment.

From the inner circle, I think Test Driven Development is the most important one. The XP practices are designed to reinforce each other. One good way to do Pair Programming, for instance, is by using the TDD development cycle to drive changing position: first one developer writes a test, then the other writes a piece of implementation, does some refactoring, and writes the next test, after which he passes the keyboard back to the first, who starts through the cycle again. And since you are keeping each other honest, it becomes easier to stick to the discipline of working Test Driven.

TDD is, in my humble opinion, a little more central than the others, even if only because it is so tightly woven with Refactoring and Simple Design. So let’s talk further about TDD!

Test Driven Development has a few different aspects to it, which are sometimes separately confused with the whole of TDD. I see these aspects:

  • Automated Tests
  • Unit Tests
  • Test-First
  • The Red, Green, Refactor process

I’ve had plenty of situations where people were claiming to do TDD, but it turned out their tests were not written before the production code, and were at a much higher level than you’d want for unit tests. Well, let’s look at those different aspects of TDD, and discuss why each of them is a good idea.

Why Automated Tests?

This is where some of the wonderful slides of James Grenning come in. In fact, for a much better discussion of these graphs, you should read his blog post! If you’re still with me, here’s the short version: Every Sprint you add new functionality, and that new functionality will need to be tested, which takes some fraction of the time that’s needed to implement it. The nasty thing is that while you’re adding new functionality, there is a good chance of breaking some of the existing functionality.

This means that you need to test the new functionality, and *all* functionality built in previous sprints! Testing effort grows with the age of your codebase.

Either you skip parts of your testing (decreasing quality) or you slow down. It’s good to have choices. So what happens is that pretty soon you’re stuck in the ‘untested code gap’, and thus stuck in the eternal firefighting mode that it brings.
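
A toy calculation (numbers made up) shows how that manual regression burden grows: if every sprint adds features that take a couple of days to test by hand, then in sprint n you need to re-test everything from sprints 1 through n:

```python
# Made-up illustration of manual regression effort growing with codebase age.
test_effort_per_sprint = 2  # days of manual testing for one sprint's new features

for sprint in range(1, 9):
    regression = sprint * test_effort_per_sprint  # everything built so far
    print(f"sprint {sprint}: {regression} days of manual regression testing")
```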


This seems to be a great time to remind the reader that the primary goal of a sprint is to release complete, working, tested software. And if that doesn’t happen, then you will end up having to do that testing, and fixing, at a later date.

To try to illustrate this point, I’ve drawn this in a slightly suggestive form of a… waterfall!

Why Unit Tests?

So what is the case for unit tests? Let me first assure you that I will be the last to tell you that you shouldn’t have other types of tests! Integration Tests, Performance Tests, Acceptance Tests, … And certainly also do manual exploratory testing. You will never think of everything beforehand. But. Those tests should be added to the solid base of code covered by unit tests.

Combinatorial Complexity

Object Oriented code is built out of objects that can have a variety of inner states, and the interactions between those objects. The number of possible states of such interacting components grows huge very quickly! That means that testing such a system from the outside requires very big tests, which would not have easy access to all of the details of all possible states of all objects. Unit Tests, however, are written with intimate knowledge of each object and its possible states, and can thus ensure that all possible states of the object are covered.
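
A toy calculation shows how quickly this blows up (the state counts are invented): three collaborating objects with 4, 5 and 6 internal states give 120 combinations to cover from the outside, but only 15 cases when each object is unit-tested directly:

```python
# Invented example: state counts of three collaborating objects.
states_per_object = [4, 5, 6]

combinations_from_outside = 1
for n in states_per_object:
    combinations_from_outside *= n  # states multiply when tested end-to-end

cases_as_unit_tests = sum(states_per_object)  # states add up when tested per object

print(f"end-to-end combinations: {combinations_from_outside}")  # 120
print(f"unit-level cases:        {cases_as_unit_tests}")        # 15
```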

Fine-grained tests make it easier to pinpoint defects

Perhaps obvious, but if you have small tests covering a few lines of code each, then when a defect is introduced, the test will point to it with a couple of lines of accuracy. An integration or acceptance test will perhaps show the defect, but still leaves tens, hundreds or thousands of lines of code to comb through in search of the cause.

Code documentation that stays in sync

Unit tests document the interfaces of objects by demonstrating how the developer means them to be used. And because this is code, the documentation cannot get out of sync without breaking the build!
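
For example, a test like the one below (the Money class is made up) documents exactly how the developer intends the interface to be used, and this ‘documentation’ fails the build as soon as code and intention drift apart:

```python
import unittest

# Hypothetical class under test, purely for illustration.
class Money:
    def __init__(self, amount, currency):
        self.amount = amount
        self.currency = currency

    def add(self, other):
        assert self.currency == other.currency, "can only add same-currency amounts"
        return Money(self.amount + other.amount, self.currency)

class MoneyUsageTest(unittest.TestCase):
    """Documents, in executable form, how Money is meant to be used."""

    def test_adding_two_amounts_in_the_same_currency(self):
        total = Money(10, "EUR").add(Money(5, "EUR"))
        self.assertEqual(15, total.amount)
        self.assertEqual("EUR", total.currency)

if __name__ == "__main__":
    unittest.main()
```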

Supports changing the code

Unit tests are crucial to your ability to make changes to code. Be they refactoring changes or added functionality, if you don’t have detailed tests the chances of introducing a defect are huge. And without those tests, chances are you’ll be finding out about those shiny new defects in production.

Why Test-First?

Ok, so we need unit tests. But why write them before writing the production code? One of the arguments is simply that there’s a good chance you won’t write them if you’ve already written your production code. But let’s pretend, just for a minute, that we’re all very disciplined, and do write those tests. And that we wrote our code in such a way that we can write unit tests for it with a normal level of effort. Time for another of Mr Grenning’s sheets!

The full story, again, can be found on his blog, but the short version is as follows. It’s been an accepted fact in software development for quite a long time that the longer the period between introducing a defect and finding out about it, the more expensive it will be to fix. This is easy to accept intuitively: the programmer won’t remember as well what he was thinking when the error was made; he might have written code in the same general area; other people might have made changes in that part of the code. (How dare they!)

Now how does Test-First help alleviate this? Simple: if you write the test first, you will notice the defect immediately when you introduce it! Yes, granted, this does not mean that you can’t get any defects into production; there are many different types of bugs. But you’ll avoid a huge percentage of them! Some research on this indicates reductions of 50 to 90 percent in the number of bugs. More links to various studies on TDD can be found on George Dinwiddie’s wiki.

Red, Green, Refactor

And now we get to the last aspect of TDD, as an actual process of writing/designing code. The steps are simple: write a small test that fails (red); write the simplest code that makes the test pass (green); and then refactor that code to remove duplication and ensure clean, readable code. And repeat!

One thing that is important in this cycle is that it is very short! It should normally last from 3 to 10 minutes, with most of the time actually spent in the refactoring step. The small size of the steps is a feature, keeping you from the temptation of writing too much production code that isn’t yet tested.
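
Here is what one pass through the cycle might look like, narrated as comments in a single file. The leap-year example is my own, not from the talk:

```python
import unittest

# RED: first write a small test that fails (is_leap_year doesn't exist yet).
class LeapYearTest(unittest.TestCase):
    def test_2000_is_a_leap_year(self):
        self.assertTrue(is_leap_year(2000))

    def test_1900_is_not_a_leap_year(self):
        self.assertFalse(is_leap_year(1900))

# GREEN: write the simplest implementation that makes the tests pass,
# REFACTOR: then clean it up into one readable expression. Repeat with
# the next small test.
def is_leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

if __name__ == "__main__":
    unittest.main()
```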

Clean code

Sticking to this process for writing code has some important benefits. Your code will be cleaner. Writing tests first means that you will automatically be writing testable code. Testable code has a lower complexity (see Keith Braithwaite’s great work in this area). Testable code will be more loosely coupled, because writing tests for tightly coupled code is bloody hard. Continuous refactoring will help keep the code tightly cohesive. Granted, this last one is still very much dependent on the quality of the developer, and his/her OO skills. But the process encourages it, and if your code is not moving towards cohesion, you’ll see duplication increasing (for instance in method parameters).

After last week’s presentation, in one of the Open Space discussions some doubt was expressed on whether a lower cyclomatic complexity is always an indication of better code. I didn’t have/take much time to go into that, and accidentally reverted to an invocation of authority figures, but it is a very interesting subject. If you look at the link to Keith Braithwaite’s slides, and his blog, you’ll notice that all the projects he’s discussing have classes with higher and lower complexity. The interesting part is that the distribution differs between code that’s under test and code that isn’t. He also links this to more subjective judgement of the codebases. This is a good indication that testing indeed helps reduce complexity. Complexity is by no means the only measure, though, and yes, you can have good code that has a higher complexity. But a codebase that tends to have more lower-complexity and fewer higher-complexity classes will almost always be more readable and easier to maintain.

Simple Design

XP tells you You Ain’t Gonna Need It. Don’t write code that is not (yet) absolutely necessary for the functionality that you’re building. Certainly don’t write extra functionality! Writing the tests first keeps you focused on this. Since you are writing the test with a clear objective in mind, it’s much less likely that you’ll go and add this object or that method because it ‘seems to belong there’.

Another phrase that’s popular is Don’t Repeat Yourself. The refactoring step in the TDD process has the primary purpose of removing duplication. Duplication usually means harder to maintain. These things keep your code as lean as possible, supporting the Simple Design philosophy.
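
A tiny, made-up example of what that refactoring step does: the duplicated formatting in two report functions is pulled out into a single helper, so there is only one place left to change:

```python
# Before the refactoring step: the same formatting logic appears twice.
def order_line_before(order):
    return f"{order['id']} - {order['date'].strftime('%Y-%m-%d')}"

def invoice_line_before(invoice):
    return f"{invoice['id']} - {invoice['date'].strftime('%Y-%m-%d')}"

# After: one helper removes the duplication.
def _formatted_line(record):
    return f"{record['id']} - {record['date'].strftime('%Y-%m-%d')}"

def order_line(order):
    return _formatted_line(order)

def invoice_line(invoice):
    return _formatted_line(invoice)
```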

Why Not?

So why isn’t everyone in the world working with this great TDD thing I’ve been telling you about? I hear many reasons. One of them is that they didn’t know about it, or were under the misconception that TDD is indeed just Test Automation, or Unit Testing. But in the Internet age, that excuse is a bit hard to swallow.

More often, the first reaction is along the lines of “We don’t have the time”.

I’ve just spent quite a bit of time discussing how TDD will normally save you time, so it doesn’t make much sense to spend more time discussing why not having time isn’t an argument.

But there is one part of this objection that does hold: getting started with TDD will take time. As with everything else, there’s a learning curve. The first two or three sprints, you will be slower. Longer, if you have a lot of code that wasn’t written with testing in mind. You can speed things up a bit by getting people trained, and getting some coaches in to help get used to it. But it will still take time.

Then later, it will save you time. And embarrassment.

Another objection I’ve encountered is ‘But software will always contain bugs!’.

Sure. Not as many as now, but I’m sure there will be bugs. Still, that argument makes as much sense to me as saying that we will always have deaths in traffic due to people crossing the road, even if we have traffic lights! So we might as well get rid of all traffic lights?

There will always be some people that fall ill, even if we inoculate most everyone against a disease. So should we stop the inoculations?

Nah.

Uhm… Yeah…

At the same time the strongest and the weakest of arguments. Yes, in many companies there are bosses who will have questions about time spent writing tests.

The thing is: you, as a developer are the expert. You are the one who can tell your boss why not writing the test will be slower. Will slow you down in the future. Why code that isn’t kept clean and well factored will bog you down into the unmaintainability circle of hell until the company decides to throw it all away and start over. And over. And over.

You are also the one whose desk that guy is going to be standing at, telling you that he’s going to have to ask you to come in on Saturday. And Sunday. And you know what? That’s going to be your own fault for not working better.

And in the end, if you’re unable to impress them with those realities, use the Law of Two Feet, and change companies. Really. You’ll feel so much better.

This last one is, I must admit, the most convincing of the reasons why people are not TDDing. Legacy code really is hard to test.

There’s some good advice out there on how to get started. Michael Feathers’ book Working Effectively With Legacy Code is the best place to start. Or the article of the same name, to whet your appetite. And the new Mikado Method definitely has promise, though I haven’t tried that on any real code yet.

And the longer you wait to get started, the harder it’s going to be…

How?

I closed the presentation with two sheets on how to get to the point where you can deliver at high quality.

For a developer, it’s important to keep learning. There are a lot of great books out there. And new ones coming out all the time. But don’t just read. Practice. At Qualogy I’ve been organising Coding Dojos, where we get together and work on coding problems, as an exercise. A lot of learning, but also a lot of fun to do! Of course, we’ve also all had a training in this area, getting Ron Jeffries and Chet Hendrickson over to teach us Agile Development Skills.

Make sure that when you give an estimate, you include plenty of time for testing. We developers simply have a tendency to overlook this. And when you’re still new to TDD, make a little more room for that. Commit to 30% fewer stories for your next three sprints, while you’re learning. And when someone tries to exert some pressure to squeeze in just that one more feature, the one that you know you can only manage by lowering quality? Don’t. Say NO. Explain why you say no, say it well, and give perspective on what you can say yes to. But say no.

For a manager, the most important thing you can do is ensure that you consistently emphasize quality over schedule. For a manager this is also one of the most difficult things to do. We’ve been talking about the discipline required for developers to stick to TDD. This is where the discipline for the manager comes in. It is very easy to let quality slip, and it will happen almost immediately as soon as you start emphasizing schedule. You want it as fast as possible, but only if it’s solid (and SOLID).

You can encourage teams to set their own standards. And then encourage them to stick to them. You’ll know you’ve succeeded when the next time you slip up and put on a little pressure, they tell you “No, we can’t do that and stick to our agreed level of quality.” Congrats :-)

You can send your people to the trainings that they need. Or get help in to coach them (want my number? :-)

You can reserve time for those practice sessions. You will benefit from them; there’s no reason they have to be solely in private time.

Well, it’s nicely consistent. I went over my time when presenting this, and over my target word-count when writing it down. If you stuck with me till this end, thanks! I’m also learning, and short, clear posts are just as hard as short, clear code. Or short, clear presentations.

Failure

Failure is not an option!

I’ve heard that phrase a number of times during my career. Mostly by people imitating a manager they heard in a meeting, but still too often.

What’s remarkable is that this phrase is usually uttered at the moment that it is becoming very clear that, yes, failure seems to not just be an option, but fairly likely!

And that’s the thing. Failure is always an option. The question is, what are you going to do about it?

Failure is always an option

No, let’s step back from that for a minute. First, let’s see what we mean by failure. For many project managers, a project is a failure if it’s not delivered on time, and with the complete scope that was agreed. I’ve hurt project managers’ feelings at times by reminding them that, according to their own definition, budget was also part of that equation.

A project for which people have had to work overtime was a failure

Overtime means that you found out there was a problem very late, and at that point did not have a good enough relationship with your customer to be able to negotiate the release date or scope. That’s a good description of project management failure. Which is not to say that the project manager is the only one who failed, btw. That sort of thing requires a team effort…

Back to the options for failure.

Failure should not just be an option, it’s a requirement. If you go through a whole week of work on your project and there are no failures at all, it’s time to get worried. Because either you’re not noticing the failures, or you’re simply not trying hard enough.

You should have found some things that were harder to do than expected. When you do them again, later in the project, they’ll be easier, if you noticed this failure and learned from it. You should notice that your velocity over the last three sprints has been lower (or higher!) than expected. You need to adjust your planning accordingly, and talk to the customer about possible changes to scope or release dates. Your continuous build broke five times in the last sprint. You should be talking within the team about why that is, and ensure it doesn’t happen again. Your newly written unit test fails. Of course it does, you’re working test driven! Your customer keeps adding to and changing requirements during the sprint. You should make the requirements more explicit before the sprint starts, so that it’s clear when requirements are new.

Failure means learning. Failure to learn could mean… lack of success.

The more you allow failure. The more you encourage failure. The better you will be at detecting failure. The lower the threshold will be for your people to report failure. And all that will help you navigate towards success.

Success consists of going from failure to failure without loss of enthusiasm.
— Winston Churchill

On Discipline, Feedback and Management

Change is hard. If we know that about 80% of organisational change programs fail, then it’s easy to appreciate just how hard. Why is that? And, more importantly, what can we do to make it easier?

Recently, I saw a tweet come by from Alan Shalloway. He wrote that, back in the day, people were saying: Waterfall (though they didn’t call it that back then, I think) is fine, people are just not doing it right. All we need is to apply enough discipline, and a little common sense, and it will work perfectly! I think Alan was commenting on an often-heard refrain in the Agile community about failing Scrum adoptions: You’re just not doing it right! You need to have more discipline!

This is, actually, a valid comment. In fact, both views are valid. On the one hand, just saying that people are not doing it right is not very helpful. Saying that people need to be more disciplined is certainly not helpful either. (Just look at the success rate of weight loss programs, or abstinence programs against teen pregnancies.) On the other hand, change is hard precisely because it requires discipline. Any process requires discipline. The best way to ensure a process is successfully adopted is to make sure the process supports discipline. This is, again, hard.

This post talks about discipline. About how it can be supported by your process. About how it’s often not supported for managers in Agile organisations, and, of course, how to ensure that management does get that kind of support in their work.

In Support Of Discipline

Take a look at Scrum, throw in XP for good measure, and consider what kind of discipline we need to have, and how the process does (or does not!) support that discipline.

Agile Feedback Loops

The eXtreme Programming practices often generate a lot of resistance. Programmers were, and are, very hesitant to try them out, and some require quite a bit of practice to do well. Still, most of these practices have gained quite wide acceptance. Not having unit testing in place is certainly frowned upon nowadays. It may not be done in every team, but at least they usually feel some embarrassment about that. Lack of Continuous Integration is now an indication that you’re not taking things seriously.

Working structurally using TDD, and Pair Programming, have seen slower adoption.

Extreme Programming Practices from XProgramming.com

If we look at some practices from Scrum, we can see a similar distribution: some things that are popularly adopted, and some things that are… less accepted. The Daily Stand-Up, for instance, can usually be seen in use, even in teams just getting started with Scrum. Often, so is the Scrum Board. The Planning Meeting comes next, but the Demo/Review, and certainly the Retrospective, are much less popular.

Why?

All of these practices require discipline, but some require more discipline than others. What makes something require less discipline?

  • It’s easy!
  • Quick Feedback: It obviously and quickly shows it’s worth the effort
  • Shared Responsibility: The responsibility of being disciplined is shared by a larger group

Let’s see how this works for some of the practices we mentioned above. The Daily Stand-Up, for instance, scores high on all three items. It’s pretty easy to do, you do it together with the whole team, and the increased communication is usually immediately obvious and useful. Same for the Scrum Board.

The Planning Meeting also scores on all three, but scores lower for the first two. It’s not all that easy, as the meetings often are quite long, especially at the start. And though it’s obvious during the planning meeting that there is a lot of value in the increased communication and clarity of purpose, the full effect only becomes apparent during the course of the sprint.

The Demo is also not all that easy, and the full effect of it can take multiple sprints to become apparent. Though quick feedback on the current sprint will help the end-result, and the additional communication with stakeholders will benefit the team in the longer run, these are mostly advantages over the longer term. To exacerbate this effect, responsibility for the demo is often pushed to one member of the team (often the scrum master), which can make the rest of the team give it less attention than is optimal.

Retrospectives are, of course, the epitome of feedback. Or they should be. Often, though, this is one of the less successful practices in teams new to Scrum. The reasons for that are surprisingly unsurprising: there is often no follow-up on items brought forward in the retrospective (feedback on the feedback!), solving issues is not taken up as a responsibility for the whole team, and quite a few issues found are actually hard to fix!

The benefits of Continuous Integration are usually quite quickly visible to a development team. Often, they’re visible even before the CI is in use, as many teams will be suffering from ‘integration issues’ that come out late in the process. It’s not all that hard to set up, and though one person can set it up, ‘not breaking the build’ is a responsibility that is certainly shared by the whole team.

Unit testing can be very hard to get started with, but again the advantages *if* you use it are immediately apparent, and provide value for the whole team. Refactoring? Well, anyone who has refactored a bit of ugly code can attest to how nice it feels to clean things up. In fact, in the case of refactoring the problem is often ensuring that teams don’t drop everything just to go and refactor everything… Still, additional feedback mechanisms, like code statistics using Sonar or similar tools, can help in the adoption of code cleaning.
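As a small, made-up illustration of the kind of clean-up that feels so good (and that Sonar-style statistics tend to encourage), here’s an extract-method refactoring on a hypothetical OrderReport class; the behaviour stays the same, only the structure improves.

```java
import java.util.ArrayList;
import java.util.List;

// A tiny, hypothetical extract-method refactoring. Order and OrderReport
// are made up for illustration; the behaviour is unchanged, but each step
// now has a name and can be unit tested on its own.
class Order {
    private final boolean shipped;
    private final double value;

    Order(boolean shipped, double value) {
        this.shipped = shipped;
        this.value = value;
    }

    boolean isShipped() { return shipped; }
    double getValue() { return value; }
}

public class OrderReport {

    // Before the refactoring, this was one long method that mixed
    // filtering, summing and formatting in a single block of code.
    public String summary(List<Order> orders) {
        List<Order> shipped = onlyShipped(orders);
        double total = totalValue(shipped);
        return String.format("%d shipped orders, total %.2f", shipped.size(), total);
    }

    private List<Order> onlyShipped(List<Order> orders) {
        List<Order> result = new ArrayList<>();
        for (Order order : orders) {
            if (order.isShipped()) {
                result.add(order);
            }
        }
        return result;
    }

    private double totalValue(List<Order> orders) {
        double total = 0.0;
        for (Order order : orders) {
            total += order.getValue();
        }
        return total;
    }
}
```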

Pairing shows quick feedback, and is shared by at least one other person, but it’s often very hard, and also has to deal with something else: management misconceptions. We’ll talk about that a little later. TDD‘s advantages are more subtle than those of just testing at all, so the benefit is less obvious and less quick. It’s also a lot harder, and has no group support. These practices do reinforce each other, though: testing, TDD, and refactoring are easier to do when done together, pairing.
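For those who haven’t tried it, here’s a sketch of what a single TDD step looks like, on a deliberately trivial, made-up example: write a failing test first, then just enough code to make it pass.

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

// One TDD step on a made-up example: the test below is written first and
// fails (red), then the simplest RomanNumerals.convert() that makes it
// pass is added (green), and the test stays as a safety net for the
// refactoring that follows.
public class RomanNumeralsTest {

    @Test
    public void convertsTheSmallestNumbers() {
        assertEquals("I", RomanNumerals.convert(1));
        assertEquals("III", RomanNumerals.convert(3));
    }
}

// The simplest implementation that makes the test above pass;
// the next failing test (for 4, say) will force it to grow.
class RomanNumerals {
    static String convert(int number) {
        StringBuilder result = new StringBuilder();
        for (int i = 0; i < number; i++) {
            result.append("I");
        }
        return result.toString();
    }
}
```

Pairing on exactly this kind of small cycle is also the easiest way to practise it: the discipline is shared, and the feedback is immediate.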

So we can see at least some correlation between adoption and the way a practice supports discipline. This makes a lot of sense: the easier something is to do, the more obvious its benefits, and the more its burdens are shared by a group, the better it supports its users, and the more it is adopted.

Feedback within the team

One large part of this is: feedback rules. This is not a surprise for most Agile practitioners, as it’s one of the bases of Agile processes. But it is good to always keep in mind: if you want to achieve change, focus on supporting it with some form of feedback. One form of feedback used in e.g. Scrum and Kanban is the visualisation of the work. Use of a task board, or a Kanban board, especially a real, physical one, has a remarkable effect on the way people do their work. It’s still surprising to me how far the effects of this simple device go in changing behaviour in a team.

Seeing how feedback and shared responsibility help in adoption of practices within the team, we could look at the various team practices and find ways to increase adoption by increasing the level of feedback, or sharing the responsibility.

The Daily Stand-Up can be improved by emphasising shared responsibility: rotating the chore of updating the burn-down, and avoiding a reporting-to-the-scrum-master feel by passing around a token.

The Planning Meeting could be made easier by using something different from Planning Poker if many stories need to be estimated, or this estimation could be done in separate ‘Grooming’ sessions. Feedback could come earlier by ensuring Acceptance Criteria are defined during the planning meeting (or before), so we get feedback for every story as soon as it gets picked up. Or we can stop putting hourly estimates on tasks to make the meeting go quicker.

Retrospectives could be improved by creating a Big Visual Improvement backlog, and sticking it to the wall in the team room. And by taking the top item(s) from that backlog into the next sprint as work for the whole team to do. If it’s a backlog, we might as well start splitting those improvements up into smaller steps, to see if we can get results sooner.

All familiar advice for anyone that’s been working Agile! But how about feedback on the Agile development process as it is experienced by management?

Agile management practices

Probably the most frequently cited reason for failure of Agile initiatives is lack of support from management. This means that an Agile process requires changes in the behaviour of management. Since we’ve just seen that such changes in behaviour require discipline, we should have a look at how management is supported in that changed behaviour by feedback and sharing of responsibility.

First though, it might be good to inventory what the recommended practices for Agile managers are. As we’ve seen, Scrum and XP provide enough guidance for the work within the team. What behaviour outside the team should they encourage? There’s already a lot that’s been written about this subject. This article by Lyssa Adkins and Michael Spayd, for instance, gives a high-level overview of responsibilities for a manager in an Agile environment. And Jurgen Appelo has written a great book about the subject. For the purposes of this article, I’ll just pick three practices that are fairly concrete, and that I find particularly useful.

Focus on quality: As was also determined to be the subject requiring the most attention at the 2011 reunion of the writers of the Agile Manifesto, technical excellence is a requirement for any team that wants to be Agile (or just be delivering software…) for the long term. Any manager dealing with an Agile team should keep stressing the importance of quality, in all its forms, above speed. If you go for quality first, speed will follow. And the inverse is also true: if you don’t keep quality high, you are assured of slowing down until nothing can get done.

Appreciate mistakes: Agile doesn’t work without feedback, but feedback is pretty useless without experimentation. If you want to improve, you need to try new things all the time. A culture that makes a big issue of mistakes, and focuses on blame, will smother an Agile approach very quickly.

Fix impediments: The best way to make a team feel their own (and their work’s) importance is to take anything they consider to be getting in the way of that very seriously. Making the prioritised list of impediments that you as a manager are working on visible, and showing (and regularly reporting on) their progress, is a great way of doing that.

Note that I’ve not talked about stakeholder management, portfolio management, delivering projects to planning, or negotiating with customers. These are important, but are more in the area of the Product Owner. The same principles should apply there, though.

Let’s see how these things rate on ease, speed of feedback, and shared responsibility.

Focus on Quality. Though stressing the importance of quality to a team is not all that difficult, I’ve noticed that it can be quite difficult to get the team to accept that message. Sticking to it, and acting accordingly in the presence of outside pressures, can be hard to do. Feedback will not be quick. Though there can be noticeable improvements within the team, from the outside a manager will have to wait for feedback from other parties (integration testing, customer support, customers directly, etc.). If the Agile transition is a broad organisational initiative, then the manager can find support and shared responsibility from his peers. If that’s not the case, the pressure from those peers could be in quite a different direction…

Appreciate mistakes. Again, this practice is one in which gratification is delayed. Some experiments will succeed, some will fail. The feedback from the successful ones will have to be enough to sustain the effort. The same remarks as above apply with regard to peer support. Support from within a team, though less helpful than support from peers, can be a positive influence.

Fix impediments. The type of impediments that end up on a manager’s desk are not usually the easy-to-fix ones. Still, the gratification of removing problems makes this a practice well supported by quick feedback. And there is usually both gratitude from the team and respect for action from peers.

We can see that these management practices are much less supported by group responsibility and quick feedback loops. This is one of the reasons why managers are often the place where Agile transitions run into problems. Not because of lack of will (we all lack sufficient willpower :-), but because any change requires discipline, and discipline needs to be supported by the process.

Management feedback

If we see that some important practices for managers are not sufficiently supported by our process, then the obvious question is going to be: how do we create that support?

We’ve identified two crucial aspects of supporting the discipline of change: early feedback, and shared responsibility. Neither of those is a natural fit in the world of management. Managers usually do not work as part of a closely knit team. They may be part of a management team, but the frequency and intensity of communication there is mostly too low to have a big impact. Managers are also expected to think of the longer term. And they should! But this does make any change more difficult to sustain, since the feedback on whether the change is successful is too far off.

By the way, if that feedback is that slow, the risk is also that a change that is not successful is kept up for too long. That might be even worse…

When we talk about scaling Agile, we very quickly start talking about things such as the (much debated) Scrum of Scrums, ‘higher level’ or ‘integration’ stand-up meetings, Communities of Practice, etc. Those are good ideas, but they are mostly seen as instruments to scale Scrum to larger projects, and align multiple teams to project goals (and each other). These practices help keep transparency over larger groups, but also through hierarchical lines. They help alignment between teams, by keeping other teams up-to-date on progress and issues, and help arrange communication on specialist areas of interest.

What I’m discussing in this post seems to indicate that those kinds of practices are not just important in the scenario of a large project scaled over multiple teams, but are just as important for teams working separately within their surrounding context.

So what can we recommend as ideas to help managers get enough feedback and support to sustain an Agile change initiative?

Work in a team. Managers need the support and encouragement (and social pressure…) that comes from working in a team context as much as anyone. This can partly be achieved by working more closely together within a management team. I’ve had some success with a whole C-level management team working as a Scrum team, for instance.

Be close to the work. At the same time, managers should be following the work of the development teams more directly. Note that I said ‘following’. We do not want to endanger the team’s self-organization. I think a manager should be welcome at any team stand-up, and should also be able to speak at one. But I do recognise that there are many situations where this has led to problems, such as undue pressure being put on a team. A Scrum of Scrums, or multi-level stand-up approach, can be very effective here. Even if there’s only one team, having someone from the team talk to management one level up daily can be very effective.

Visualise management tasks. Managers can not only show support for the transition in this way, they can immediately profit from it themselves. Being very visible with an impediment backlog is a big help, both to get the impediments fixed, and showing that they’re not forgotten. Starting to use a task board (or Kanban, for the more advanced situations) for the management team work is highly effective. And if A3 sheets are being put on the wall to do analysis of issues and improvement experiments, you’re living the Agile dream…

Visualise management metrics. Metrics are always a tricky subject. They can be very useful, even necessary, but can also tempt you to manage them directly (Goodhart’s Law). Still, some metrics are important to managers, and those should be made important to the teams they manage. Visualising is a good instrument to help with that. For some ideas, read Esther Derby’s post on useful metrics for Agile teams. Another aspect of management, employee satisfaction, could be continuously visualised with Henrik Kniberg’s Happiness Matrix, or Jim Benson’s refinement of it.