Set-based design in software

Last year at the Lean and Kanban Benelux conference I attended a session by Michael Kennedy, 'Set-Based Decision Making: Taming System Complexity'. Watch that video, embedded below, where he explains how Toyota uses set-based design to be innovative without putting the schedules of its new product development projects at risk. I found it a very interesting subject, and left thinking about some of the questions Kennedy posed on its applicability to software.

Michael Kennedy – Set-Based Decision Making: Taming System Complexity from Agileminds on Vimeo.

Skip forward half a year, and you find me having just read Product Development for the Lean Enterprise, Kennedy’s book that describes the advantages of Set-Based Design, but closely links it to the concept of a knowledge-driven organisation. The book is in the form of fiction: a ‘Business Novel’, I think the term is. I’m not overly fond of that format, but must admit that it worked well in this case. For this post, I’ll focus on the treatment of set-based thinking. The book talks just as much, or more, about the knowledge-driven organisation and forms of change management that are compatible with it, but that is material for another time.

What is set-based thinking?

Set-based concurrent engineering, as the book calls it, is distinct from plain concurrent engineering, where different parts of the product are built concurrently based on clear requirements and specified interfaces. The set-based approach doesn’t just create the separate parts of the product concurrently; it ensures that there are multiple options for each of those parts, and combines those options at as late a stage as possible (responsible?) to form the product.

Why is that a good thing? Well, the book gives a great example of building a new bicycle. A bicycle has various components: a frame, a drive, brakes, suspension, wheels, seats, etc. If you are free to combine those parts in various ways, you have many different options for building a bike. It also means that if you develop new versions of all those parts, there is a high risk that at least some of them will fail. So there has to be a trade-off: innovation against the chance of success for a new product development project.

As you can see in the picture above, the set-based approach avoids this trade-off: by working on different options for each of the parts, the risk of one of the parts not delivering becomes smaller. If we combine that with an approach where a safe, known-to-work version of the part is always included, then failure of the project becomes very unlikely. This allows you to take more risk, and thus create more room for innovation, in the alternative versions of that part.

This is (as is true for large parts of the Lean world, even if we call it ‘knowledge based’ instead of ‘lean’ product development) based on the way that Toyota works in product development. An example of theirs that Kennedy mentions in the talk above (just in case you haven’t watched that yet) is the Toyota Prius. There were a number of innovative components in that car: the hybrid engine, the transmission, etc. Toyota developed multiple versions of those components, some of which had initially been developed for earlier cars, but had perhaps not been included because the technology wasn’t quite ready. Toyota could take risks there, as it had backups for each of those parts from earlier car models. At the same time, the chassis of the car was not changed at all; it was simply the same as that of an earlier Camry model.

Deciding which version of a part to use is done using something called Trade-off Curves. These graph the trade-off between various variables of a component’s design, such as fuel economy vs. engine price, or, for a bicycle, road resistance vs. wheel stability vs. tire suspension. This is a very data-driven approach in which the components are actually made, in multiple versions, incrementally improved, and tested against predefined criteria. It’s enough to warm my Test Driven heart!
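
To make that concrete: below is a minimal sketch of how such a data-driven choice between component versions might look in code. All component names, numbers and criteria are invented for illustration; they’re not from the book.

```python
# Hypothetical trade-off data for three versions of a bicycle wheel.
wheels = [
    {"name": "proven-v1", "road_resistance": 8.0, "stability": 9.0, "proven": True},
    {"name": "aero-v2",   "road_resistance": 5.5, "stability": 7.5, "proven": False},
    {"name": "carbon-v3", "road_resistance": 4.0, "stability": 6.0, "proven": False},
]

# Predefined criteria the component versions are tested against.
def meets_criteria(wheel):
    return wheel["road_resistance"] <= 6.0 and wheel["stability"] >= 7.0

# Keep the set of acceptable options open, then pick the best performer;
# fall back to the safe, known-to-work version if every innovative
# option fails its tests.
candidates = [w for w in wheels if meets_criteria(w)]
chosen = (min(candidates, key=lambda w: w["road_resistance"])
          if candidates else next(w for w in wheels if w["proven"]))

print("Selected wheel:", chosen["name"])
```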

There are some conditions to making this work, of course. This quickly goes into Kennedy’s ideas of the Knowledge Based Organisation, which I’ll leave for a future post. It suffices here to state that next to ‘Set-based Concurrent Engineering’, he has ‘System Designer Entrepreneurial Leadership’, ‘Responsibility based Planning & Control’ and ‘Expert Engineering Workforce’ as pillars of knowledge based engineering.

Software

But how would we approach such a process when we’re dealing with software? Is it even necessary? Software is soft, malleable, and easily changed (your code is, right?), so maybe it’s not necessary to work in such a way? I actually think it’s not only necessary, but already being done, and at quite a large scale.

Linux distributions

The best examples I can think of that use this type of process for software development come from the Open Source world. The first one that comes to mind is the way Linux distributions are managed. A distribution such as Ubuntu comprises many different components (packages). My laptop, for example, has 4566 packages installed, and there are many more available. These packages are not all created by the same people. They have different dependency requirements, different features, perhaps different alignments with the feature-sets of other packages. And different levels of stability.

The people who assemble Ubuntu then have choices to make: Do we use Evolution as a mail client, or Thunderbird? Which version of Thunderbird? Do we need to make adaptations of Thunderbird to work seamlessly with our other packages? And they do. Some of those changes can be big, risky, but bring significant innovation (“Drop Gnome as the desktop environment and use Unity instead”), some are smaller and can more easily be reverted (“Use Empathy instead of Gaim as chat client, with both using the same underlying libraries to support different protocols”). 

In a new release, most packages will be updated, but many changes are small, incremental and safe, while some are larger and bring more innovation. I doubt Mark Shuttleworth has a large set of trade-off curves on his desktop when the next version comes along, but he and the Ubuntu community are certainly making exactly those kinds of trade-off choices.

Linux kernel

Another good example is again on the Linux front: the Linux kernel. This process may seem a little more complex, because it operates at a lower software level, but it is in many respects the same as above. In Linus Torvalds, the kernel has its chief engineer (or ‘System Designer Entrepreneurial Leadership’). Calling the kernel developers an ‘Expert Engineering Workforce’ is also unlikely to trigger much discussion (though on the lkml, anything can trigger discussion). Though the kernel may seem less componentised than a distribution such as Ubuntu, it is in fact decoupled into many parts that can often be independently changed.

In the kernel, we again see set-based work in the way that separate components are sometimes incrementally developed, sometimes unchanged, and sometimes drastically changed or even replaced. Whether a change makes it into the kernel is the choice of Linus, but it is usually made based on discussion, numbers (“this filesystem structure has these performance characteristics, and such-and-such reliability”, based on extensive testing of working code), and trust in team members. Only when a change is deemed safe and of high enough quality is it pulled into the kernel. Then it is tested in integration with other changes before being handed to the outside world.

So we do already have set-based engineering in software. But how can you apply it to your own project, which might be large, or smaller? And what could be our software version of trade-off curves? Can we take advantage of the differences between hardware products and software to improve on this model?

All that and more in the next episode of… Soap!

Actionable Metrics at Organizational Scale

I recently chaired a session on ‘Going from company vision to Actionable Metrics‘ at the Stoos Stampede conference in Amsterdam. In that session I tried to show some ideas on making the link from an overall company vision, through different approaches to achieve that vision, to concrete actionable metrics allowing teams within a company to autonomously pursue steps towards making that vision a reality. I’m not sure I succeeded in all of that in the session, so I’m trying again in this post…

Autonomy

A goal of a lean enterprise is to ensure that the people doing the work have all the information, knowledge and skills necessary to make decisions in their day-to-day work. For a lean knowledge organisation that means that people don’t just need to know their own work field well, they need to be able to relate the decisions they make every day to the longer-term goals and vision of the organisation.

Much has been said about supporting high levels of motivation and customer focus within companies. Especially in larger companies this is quite hard to sustain, which is not surprising given works such as Dan Pink’s Drive emphasising the importance of autonomy for the knowledge worker. Ensuring the right information, and a quick feedback loop, for knowledge workers is key to motivated, high-performing people.

Networks

Such autonomy can’t easily be achieved in a classically structured hierarchical organisation. The siloes inherent in that type of structure are natural barriers. Barriers to the autonomy of action where the distribution of necessary skills and knowledge over separate departments is an impediment to producing work and serving the customer. Barriers as well to the autonomy of reaction where the feedback loop on whether an action was in any way effective in reaching the goals of the organisation or not is too long, or absent.

An organizational structure much more compatible with that goal of autonomy is that of a network organisation. The basic concept of a network organisation is that of independently working cross-functional teams that gather each other’s support where necessary, but generally are able to make their own decisions. Enabling them to make their own decisions is the subject of this post. These are the types of organizations that the Stoos Network is considering as the preferred replacement for today’s dysfunctions.

Actionable Metrics

The Lean Startup concept of Actionable Metrics (in order to create Validated Learning) is a great way to give a team the necessary autonomy to work independently towards the right goals. In a startup those metrics can be linked very directly with the goals of the company. In larger organisations there is a need for a clear link between the overall company vision and Actionable Metrics that are usable on the team level.

An actionable metric is one that ties specific and repeatable actions to observed results. — Ash Maurya, http://www.ashmaurya.com/2010/07/3-rules-to-actionable-metrics/

In this post I’ll be using an Effect Map as the method to link the vision to specific metrics, but other methods exist, of course. During the session, Catherine Louis mentioned GQM as a method designed to determine which metrics to use. This paper gives some more background on GQM. The GQM method seems mostly concerned with determining the right metric for any given goal or problem, and can as such be very useful within the type of context I’m talking about. Another approach to determining the metrics you need is the A3 method.

The nice thing about Effect Maps is that they are very inclusive, and involve different roles and functions in their creation. This fits well with the multi-functional teams in our target organisation. They’re also easy to scale, using a diverge and merge facilitation process, so you can easily work on this with larger groups with full participation.

We’re on a mission from…

The first order of business is determining why we’re here. Not in a metaphysical way. I don’t really have the patience for that. In a ‘What are we trying to do as a company?’ way. A company’s vision and mission statements should provide us with a good starting point here. A vision statement could be “A literate future,” with a mission statement of “More readers, more books.”

This is of course very generic, and a subsequently generated Effect Map could go all over the place:

Effect Map Example

One thing we always need to add to the ‘Why?’ part of the Effect Map is a concrete, measurable goal. In this case, that could be to encourage people to read more books, going from a current estimate of 100 books in a ‘lifetime’ (30 years, apparently, in the poll we got that figure from) to 1000.

Our company could encourage people to read more books in many different ways. The Effect Map shows various directions: working through publishers, changing business models, working with public libraries, promoting reading in schools, making books cheaper, working with writers directly instead of through publishers, and some ways of helping people find the right books through technology.

Since we have a technology company, those last options seem the most relevant to start with. A larger company would probably start exploring some of the other possibilities as well, and perhaps be able to integrate those with the technology work. That could mean incorporating different sales models into the e-reader software. Or creating a second-hand e-book market in there. Or something. Plenty of opportunities!

Getting to actionable metrics

How do we go from such a generic goal (people read 10x more books in their lifetime!) to actionable metrics that can be used by the multi-functional teams our network organization comprises? These teams need to be able to use those metrics in their day-to-day decision making. They need to be able to devise experiments, prioritise work, and navigate towards products and solutions without the type of top-down supervision that characterizes the more traditional organization.

First of all you need a baseline. Say we have a product through which people can read books: e-reader software (I told you we were a tech company). From that software we could gather statistics on the number of books people read. To do this well, we’d probably need to track this relative to how long customers have been using our software, so we don’t get skewed figures from early enthusiasm (for instance). The term to look for is cohort testing. In our example, it turns out people are buying, on average, one book every three months. To get to the goal of 10x more books, we should then improve this to just over three books a month! This is already a shorter-term, and thus more helpful, goal.
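
As a minimal sketch of what such cohort-based tracking could look like (the data layout and numbers are invented; a real version would read purchase records from the e-reader backend):

```python
from collections import defaultdict
from datetime import date

# Hypothetical purchase records: (customer, signup date, purchase date).
purchases = [
    ("alice", date(2012, 1, 5), date(2012, 2, 1)),
    ("alice", date(2012, 1, 5), date(2012, 5, 3)),
    ("bob",   date(2012, 3, 9), date(2012, 6, 20)),
]

# Group customers by signup month (the cohort), so that the early
# enthusiasm of new customers doesn't skew the long-term average.
cohorts = defaultdict(lambda: {"customers": set(), "books": 0})
for customer, signup, _purchase in purchases:
    cohort = (signup.year, signup.month)
    cohorts[cohort]["customers"].add(customer)
    cohorts[cohort]["books"] += 1

for cohort, data in sorted(cohorts.items()):
    rate = data["books"] / len(data["customers"])
    print(f"Cohort {cohort}: {rate:.1f} books per customer")
```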

Pirates

To get to more useful figures, we need to turn to Dave McClure’s Pirate Metrics. Pirate Metrics are all about the funnel of attracting customer interest, keeping customers, and selling to them: Acquisition, Activation, Retention, Referral and Revenue, or AARRR. Just looking at customers through this lens gives a useful perspective. Our goal is phrased as getting people to read (on average) 10x more books. This could be approached as a matter of increasing Retention (more books per customer), but also as one of Acquisition/Activation (getting more customers). That last one only works if we don’t take them away from other sources of books, of course. Can you think of a way to measure that? Certainly combined with increasing retention it would still give a net positive effect.

This would give us two main variables to pay attention to: Retention and Acquisition. We should, as a matter of course, be paying attention to at least the first three (AAR) of these metrics, and most companies will have a natural tendency to also track the last R… But tracking the results of specific actions on Retention and Acquisition should be our focus for now.
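
To sketch what keeping an eye on that funnel might look like in code (all event names and counts are invented for illustration; each stage is shown relative to the top of the funnel):

```python
# Hypothetical event counts for one month of e-reader shop traffic.
funnel = {
    "Acquisition": 10_000,  # visitors directed to the shop
    "Activation":   2_500,  # signed up or otherwise engaged
    "Retention":      900,  # came back within 30 days
    "Referral":       120,  # invited someone else
    "Revenue":        300,  # actually paid for a book
}

# Each stage relative to the top of the funnel shows where it leaks,
# and thus where experiments should focus.
top = funnel["Acquisition"]
for stage, count in funnel.items():
    print(f"{stage}: {count / top:.1%} of acquired visitors")
```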

Pirate Metrics

Splitting Metrics

But wait! In the Effect Map we had come up with two high-level feature ideas that would help us reach our goals: ‘Social Reading’ and ‘Better Book Recommendations’. Should both these ideas work with the same metrics?

Interesting question. On the one hand, I’d expect to be tracking all the pirate metrics in a well-established application. But. The whole idea here is that you focus. So while we should keep a global eye on the whole (I’ll get back to that later), the experiments we’re conducting should focus on changing a particular variable (or set of variables).

For our examples:

  • Social Reading – This is mostly about existing readers getting each other interested in other books. That would be Retention. Secondary would be getting new customers in by sharing outside the app, which would be Referral. It’s important to note that distinction, as it has a direct influence on the priority of hypotheses to try.
  • Recommendations – This is also mostly about Retention. Existing readers should be getting more relevant recommendations, and thus buy more books. The second level would be Activation. People who visit our shop already, but haven’t bought anything yet, should also be getting better recommendations and thus be prompted to buy.

This is consistent with the way we defined our goals, focusing on existing readers. That means it gives a decided focus to our development work. Phrasing our goals a little differently might increase our attention to new customer acquisition, but we’re not doing that. Consciously diving down into our metrics makes these kinds of choices explicit, and that’s A Good Thing.

Absolutely Relative

So how would our teams take these metrics further, towards specific hypotheses? First, we’d establish a baseline for retention. That could be:

  • When people buy a book, the average time between this purchase and the previous one is 92 days

Then we can start measuring this over time. A nice, always visible chart on a big screen in the development teams’ rooms would be a great idea.

This is a useful metric, as we can measure it day by day. It can also be calculated in time-based cohorts as well as feature-based cohorts, so we can compare normal changes over time with changes caused directly by our new features.
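
A sketch of how that day-by-day retention metric could be computed, again with invented data; a real version would query the sales records:

```python
from datetime import date
from itertools import pairwise  # Python 3.10+

# Hypothetical purchase dates per customer.
history = {
    "alice": [date(2012, 1, 10), date(2012, 4, 12), date(2012, 7, 9)],
    "bob":   [date(2012, 2, 1), date(2012, 5, 6)],
}

# The average gap, in days, between consecutive purchases: the baseline
# our "Social Reading" experiments should push below the 92 days above.
gaps = [
    (later - earlier).days
    for dates in history.values()
    for earlier, later in pairwise(sorted(dates))
]
print(f"Average days between purchases: {sum(gaps) / len(gaps):.0f}")
```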

Hypothetically Speaking…

Ok, now we can get started. “Social Reading” is quite a broad concept. Our imaginative team of developers and product people can brainstorm up quite a large cloud of ideas that fall within that scope, and they might have a collective gut feeling about which of those would be most effective. They might have used another Effect Mapping exercise to generate ideas, and dot-voted on the most plausible ones. Or not.

The question they should be asking themselves is:

  • What would be the simplest way, costing the least effort, to show that this idea can indeed be effective in decreasing the average time between purchases?

If that’s not what they’re asking, they might as well be asking their company whether it’s feeling lucky.

So for any idea they generated, they should be thinking about this question: how can it help disprove (or prove) that the “Social Reading” idea is plausible?

From the long list (or effect map) of ideas they generated (sharing quotes, sharing notes, rating books, publishing ‘reading lists’, embedding shared things on blogs, embedding on Facebook or Twitter, etc.) they pick one item. In this case that item might be a very basic “If a user can easily share on Twitter that he’s reading a book, this will trigger a shorter time between purchases”.

Now there are some problems with this one. Most important of all is that we don’t limit our audience, so we don’t know if the people receiving the tweet will be existing customers. That’s OK, though. It simply means we’re also testing for Referral. Having an ‘internal’ audience might be more effective, but it would probably require a much larger up-front investment to create a communication channel between just our customers, and as such would be a less efficient way to test the hypothesis.

Another problem might be that we’re not helping the customer to share parts of the book, or anything, so the content of the tweet will probably be unspecific. We want more!

Stop!

Hold on. Take it easy. Hold your horses. We were looking for the simplest way to validate our hypothesis. How did we get into a discussion on all the cool features that should be in there? This feature, that feature, estimations (of both effort and expected value), discussions about opinions about hypotheticals…

If you want to know whether some tweets, to an audience that probably includes some existing customers, about a specific book, have some impact on sales then what you should do is write a few tweets. About some books. With an account that’s probably already there, from one of the people in the team. That probably already has other users of the service among its followers.

We all know that this is what should be done, that this is what the whole Lean Startup idea professes: Do The Simplest Thing. But even (or particularly?) in a bigger enterprise we need to put our money where our mouth is. And more importantly, avoid putting too much money where our mouth is and focus on getting that (in)validation of our most important hypothesis.

Giving the reins to the team

Taking those minimal steps is an important part of the overall process. It also seems to be one of the most difficult parts. Like developers needing time and practice to get used to working in the small steps of Test Driven Development. Like the Product Owners needing practice to split their requirements up into small enough chunks to be practical within a short sprint. Doing the absolute minimum work required to invalidate a hypothesis is probably the most difficult skill (or discipline?) to master from the Lean Startup mindset.

You can’t make it work without it, though.

Especially in larger organisations where, by simple imperative of the organisation’s size, the people setting the overall direction are much less involved in individual projects, products and teams than they would be in a small startup!

The collaborative construction of Effect Maps ties our organisation together with a common vision and goal. Our carefully crafted and continuously tuned set of actionable metrics gives teams clear direction to pursue within their level of influence.

To ensure that the organisational leadership doesn’t need to feel nervous about progress towards their goals, it is crucial that we fail as fast as is possible. And adjust. And try the next idea.

All Together Now

So organizational leadership can comfortably sleep at night in the knowledge that the full intellect and energy of their entire company is being put to work in the pursuit of truth, happiness and organizational goals while continuously self-correcting by the application of validated learning.

What more could they want?

There is one step still missing in this particular example, though. The metrics gathered for the specific experiments provide the very specific data needed for validated learning on the team level. The broader metrics that those are built on are still necessary for the bigger picture.

In our example that means that the targeted cohort testing done in each team is only one slice of the whole. The same (pirate) numbers are being gathered for a much broader cohort over longer periods of time to check whether the organisation as a whole is on the right track. Since that broader cohort would include the entire customer base, it will capture the combined results of all the teams.

Combining Cohorts

Summary

In this article I’ve tried to illustrate, using a simple example, how longer term organizational goals can be made measurable in the short term, and can be used to provide the direction and purpose for teams to work independently and with full autonomy towards a shared organizational purpose.

Can you capture your organization’s vision in goals? What end-result metrics will you introduce? Can you refrain from cost metrics and focus on new value delivery? Go on. Do it.

On Discipline: Fooling yourself is an important skill!

Discipline is an interesting subject. One that I find myself regularly talking about. Or discussing.

In the last year I lost about 20kg of body weight through a combination of diet change and exercise. This apparently gives some people the impression that I am very disciplined. I’m not. I do know, however, how to make change easier to absorb. And how to inspect and adapt.

Fooling yourself is an important skill

The best way to make sure you are able to keep up discipline is to make being disciplined easy. And luckily, we human beings are exceedingly good at fooling ourselves (click on the picture of Derren Brown, here, to watch a nice YouTube clip demonstrating this).

For losing weight, this included:

  • Making sure that what I could eat was at least as nice to eat as what I had to stop eating (more meat, less potato chips)
  • Teaming up with my cousin to make not going to the gym not an option
  • Daily measurements of multiple metrics to get an understanding of progress, and variation
  • A very substantial reduction of insulin use, and improvement in blood sugar values with the corresponding increase in energy levels
  • Setting sensible targets that I adjusted as soon as I reached them.
  • Regular experiments to see what helped (drinking a lot of water), and what didn’t (too much exercise)

It’s just common sense. If you want change, make sure that what you’re changing to is enjoyable, and that your progress is clearly apparent. And a little peer pressure can also be useful.

Many of the practices promoted for agile processes fall in the same mould.

  • Short iterations give a clear sense of purpose for the short term, and a sense of accomplishment when completing them.
  • Visual progress, such as a scrum/task board, gives a direct view of progress and the impression of things getting done (plus a little peer pressure…) when everyone on the team gets up and moves post-its a couple of times a day
  • Acceptance Test Driven development gives clear and achievable goals
  • Pair programming helps keep focus throughout the day
  • Test Driven Development does the same, and again keeps progress visible
  • Retrospectives (and the resulting improvement experiments) give the sense of continuous improvement needed to stimulate a positive feeling of support from the surrounding organisation

It might be slightly irreverent to throw all of these things together and talk about ‘fooling ourselves’. Is working in concordance with human needs ‘fooling yourself’? I don’t know, but I rather like the sound-byte :-)

On Effect Mapping and Pirate Metrics

During the Specification by Example training I talked about recently, Gojko Adzic introduced me to Effect Mapping. He’s writing a more extensive booklet on the subject, of which he’s released a beta here. I think this is an excellent tool for exploring goals, opportunities and possible features. It can be used as a tool to generate a backlog of features, as a way to explore possible business hypotheses, and perhaps even as a light-weight way to do strategic management of a company.

But let’s start with a short description (see Gojko’s site or beta booklet for the longer one) of what effect mapping is.

Effect Mapping basics

The basic structure of an effect map is that of a structured Mind-Map. A mind-map is a somewhat hierarchical way to note down ideas related to a central theme.

A Mind Map

The effect map is a mind-map with a specific structure. The different levels of the mind-map are based around the answers to four questions:

  • Why? (The Goal)
  • Who? (Who can have a role in reaching that goal, or in preventing it)
  • How? (In what way can they help; business activities)
  • What? (What are the concrete software features to make, or non-software actions to take)

Additional levels can be specific User Stories, tasks, or actions, but that depends more on how you want to organise your backlog. The important thing is that this provides an uninterrupted flow from high-level goal or vision to concrete work.
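
For the programmers among us: you can think of an effect map as a small tree with a fixed meaning per level. A sketch in code, using the book-reading example from earlier (all entries illustrative):

```python
# An effect map as a nested structure: Why -> Who -> How -> What.
effect_map = {
    "why": "People read 10x more books in a lifetime",
    "who": [
        {"name": "Existing readers",
         "how": [{"activity": "Find the right next book faster",
                  "what": ["Better book recommendations", "Social reading"]}]},
        {"name": "Publishers",
         "how": [{"activity": "Offer different business models",
                  "what": ["New sales models in the e-reader software"]}]},
    ],
}

# Walking the tree gives the uninterrupted flow from goal to feature.
for who in effect_map["who"]:
    for how in who["how"]:
        for what in how["what"]:
            print(f"{what} -> {how['activity']} -> {who['name']} -> {effect_map['why']}")
```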

In this way, effect maps can provide one of the important missing steps in the Agile software development world: how to determine what features provide value supporting specific business goals.

The goal used in the centre of the effect map is supposed to be a measurable goal: we need to know unambiguously when the goal has been reached! Gojko gives a nice overview of the way this can be done, using a lighter-weight version of the approach Tom Gilb prescribes for making goals measurable. This involves the scale (the thing we’re measuring), meter (the way we’ll measure it), benchmark (the current state), constraint (the minimum acceptable value, the break-even point) and target (what we want it to be).
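
Spelled out for that book-reading goal, this could look like the sketch below. The meter and constraint values are my assumptions, purely for illustration:

```python
# The reading goal from earlier, expressed in Gilb-style fields.
goal = {
    "scale":      "books read per customer in a lifetime",
    "meter":      "purchases tracked in the e-reader software, per cohort",
    "benchmark":  100,   # current state, from the poll mentioned earlier
    "constraint": 100,   # dropping below the current state is unacceptable
    "target":     1000,  # the 10x ambition
}

def reached(measured: float) -> bool:
    """Unambiguous: the goal is reached when the target is met."""
    return measured >= goal["target"]
```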

Effect mapping is not just about what ends up on the diagram; it’s also the process of generating the map. This is, of course, a collaborative approach: getting the people involved together, and creating and discussing the goal and the ways to get there. It is important that business people with decision power, as well as subject matter experts and technical people who know possible solution directions, are present. And yes, they do have time for this. Using diverge and merge, as discussed in my previous post, can be very useful again. There’s more in the booklet, about iterating, prioritising, etc., but it’s not relevant to the rest of my post. So just go and read the booklet, already.

Our Effect Map

In his booklet, Gojko also links this process to the Lean Startup process of customer development. I think this is a great combination, but I would like to see some tweaks in the type of measurements we use for goals in that case.

Actionable Metrics for Pirates

In his book The Lean Startup, Eric Ries talks extensively about the importance of Actionable Metrics, as opposed to Vanity Metrics. An actionable metric is one that ties specific and repeatable actions to observed results. A Vanity Metric is usually a more generic metric (such as the total number of hits on a website, or the total number of customers) that is not tied to specific (let alone repeatable) actions. There can be many reasons why those change, and isolating the reason is one part of making metrics actionable.

Dave McClure uses the term ‘Pirate Metrics‘ to talk about the most important metrics he sees for organisations:

  • A – Acquisition – User is directed to your site;
  • A – Activation – User signs up or is otherwise engaged;
  • R – Retention – User keeps coming back, i.e., is engaged over time;
  • R – Referral – User invites others;
  • R – Revenue – User pays or is otherwise monetized.

These are also the familiar ‘funnel’ metrics, and the above link to Ash Maurya’s site has much more background on them. When using these metrics, it is recommended to do Cohort testing, so that you can see the different results for different groups of users (again, see Ash Maurya’s site). Doing this such that the source of the (new) users is trackable allows you to identify the best ways of increasing the number of users, without deluding yourself into extrapolating from great growth figures if they’re the result of a single marketing action.

This is probably the only spot where Gojko’s booklet could use some tuning. The Gilbian metrics he uses for his example are functional metrics (SMART, and all that), but not the type of actionable metrics that would fit into the lean startup mold. In this example, we’re talking about reaching a certain number of players (one million) in a certain space of time (6 months), while keeping costs and retention rates within certain limits. Gojko does of course explain how to do this iteratively, taking a partial goal (fewer players than in the end-goal) and checking at a defined milestone whether the goal was reached. And because we’ve only done one ‘branch’ of the effect map, we do have a specific action to link to the result.

If we were to re-cast this more along the lines of our pirate metrics, we could rephrase the goal as increasing the Acquisition and Activation rates. To be clear: this is a whole other goal! This goal is about a change that will ensure structural growth in the longer term. The goal of 1M extra users could (perhaps) be reached by increasing marketing spending (note that only operational costs are taken into account in the original example). In that case, the goal would be reached, but given a typical Retention rate of (for example) 6 months, there would be a corresponding exodus of users after that time. If the company had reacted based on the total number of users, this could lead to incorrect actions such as hiring extra people, etc.

If the CEO comes down with the message ‘We want 1 million users!’, and it turns out he wants that in about six months’ time, we can then say: “Ok, that’s about 5500 new users per day, or a growth rate of 1.6%” (per day, again). Then we can start creating and testing hypotheses in much smaller steps than those six or three months. What’s more, by using these metrics (and goals) it should become possible to use the same metrics further out into the effect map.
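
The arithmetic behind those numbers, as a sketch: the 5500 a day is simple division, while the 1.6% daily growth rate only comes out once you assume an existing user base, which the example doesn’t state; 60,000 is my assumption to make the numbers match:

```python
# Goal: one million new users in about six months (~180 days).
new_users, days = 1_000_000, 180

# Linear view: the average number of new users needed per day.
print(f"{new_users / days:,.0f} new users per day")  # ~5,556

# Compound view: the daily growth rate needed, assuming an existing
# base of 60,000 users (an assumption, not in the original example).
base = 60_000
rate = ((base + new_users) / base) ** (1 / days) - 1
print(f"{rate:.1%} growth per day")  # ~1.6%
```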

And if this is something at a larger scale, teams can take some of the higher level hypotheses / business activities and use their own domain knowledge to devise more experiments to find a way to reach those goals. Since churn is included in the figure, this also allows experiments based on increasing retention rates. And since we’re doing cohort testing on pirate metrics, we can know day-to-day and week-to-week whether what we’re doing has the expected results. This extends the set-based design paradigm used in effect mapping to a broader organisation.

Strategy and organisational structure

So by using effect mapping, we can make the relation between high-level goals, stakeholders and intermediate-level goals visible, using a collaborative process involving different parts and levels of the organisation. By using a consistent set of (value / end-result focused) metrics that can be applied in both the short and the longer term, throughout the organisation, we can enable all levels of the organisation to apply their knowledge and skills to reach those goals. And that allows for more self-organisation (and experimentation) at all levels…

I recently came across the system of Hoshin Kanri (via Bob Marshall). This has some remarkable similarities to what I’ve been talking about. It’s also a method to ensure policy/goals can be distributed throughout the organisation, with involvement of multiple levels and stakeholders at each step. I’ve not studied it extensively, but to me it does feel like a strictly hierarchical system, and one that is mostly used in large companies with a fairly slow (yearly and half-yearly) cycle time. It is used by Toyota, and is part of Total Quality Control, which is supposed to be “designed to use the collective thinking power of all employees to make their organization the best in its field.” It’s nice to see that everything has already been thought of, and we’re just repeating the progress of the past :-)

The difference with effect mapping is its light-weight focus, and the ease with which it can be used in less hierarchical organisations. In fact, I think such a system might well be a prerequisite for the type of organisations we’re talking about in light of the Stoos network: “learning networks of individuals creating value”. My recent proposal for a session at the Stoos Stampede was precisely about finding out how we can link an organisation’s vision/mission to goals specific enough that teams can work towards them independently, but open enough that they would not suffocate and entrap those teams. I think this might be one solution to that problem.

Stoos Stampede

Management Innovation, ca. 1972

Yesterday, after my brother’s 47th birthday, I was talking with my father. My father is 79, and he has had an interesting professional life. He started out as a Catholic priest but, as you could guess from the fact of my existence, at some point figured out that this was not a sustainable career path for him.

While talking, my father touched upon the subject of work, and the importance of working for the right reasons. He discussed people who had found a work/life balance that worked for them, by not focusing on financial rewards, but on the content of the work and the need to spend time with their families. Of course he sees that many people, including himself at one point, have manoeuvred themselves into positions where making those choices has become very difficult, if not impossible. Certainly many people now are in positions where two full-time salaries are needed to be able to keep paying the mortgage. But focusing on intrinsic motivation for work is very important for your enjoyment of work, and of life.

We also talked about management a bit. In 1972, also the year of my own vintage, my father became director of the social services department (‘Sociale Zaken’) of the city of Haarlem. This organisation of around 250 people was then organised in strict silos, where social workers, legal people, administrative workers and other departments worked separately, and with little understanding of each other’s specific challenges. Clients (usually people who were waiting to hear whether they would be getting social support) regularly had to wait until each of those departments had had their say, often adding up to a four-month wait before their application was either approved or denied. Meanwhile, social workers were reporting their conclusions back to those clients in language that was very hard to understand. All of this resulted in very unhappy clients. And thus in the people who had direct contact with those clients not always being treated in the kindest ways.

My father apparently discussed the use of understandable language extensively with the social workers, at one point demonstrating the problem of jargon by slipping back into some Latin phrases:

Quidquid recipitur ad modum recipientis recipitur – “Whatever is received is received according to the mode of the receiver.”

I have, having failed spectacularly at the one year of Latin I had in school, no idea if this was the actual phrase he used. But it does seem apt.

He also instigated a reorganisation of the service. The reorganisation moved from the siloed system to one where different geographical parts of the city were going to be serviced by multi-functional teams, consisting of social workers, legal experts, administrative workers and directly client-facing people. The goal was to reduce time wasted with hand-overs, breed understanding for people with another area of expertise, and create more understanding for the clients’ predicaments.

This move was not welcomed by the employees. So much so that unions were involved, strikes threatened, and silo walls reinforced. In the end, it happened anyway. The arguments were solid, and even with unions involved, there was a rather clear power structure in place. And eventually, when the dust had settled, the results were very positive indeed. Client satisfaction was up and response times were down (I think they went from roughly four months to two weeks). But just as important, internal strife was removed to a large extent, employees were picking up skills (and work!) outside of their area of expertise, and employee satisfaction was considerably better.

Now for me, this is not surprising. It’s not surprising because I know and have lived the results of similar ways of working in Agile teams.

Or maybe I’ve been doing that because, in the end, some values are instilled at an early age, and I had a great example.

The Strategic Inflection Point as a Special Case Pivot

I’ve noticed that I very regularly get people visiting my blog through a Google search for the term ‘Strategic Inflection Point’. Since that term has some very direct connections to other concepts I’ve been learning about, I thought I’d give some detail on Strategic Inflection Points, and their relation to the Lean Startup ideas of Pivots and Pirate Metrics.

I once reported on a presentation by Mary Poppendieck at the Lean and Kanban conference in Antwerp in 2010, where she mentioned Andy Grove‘s book ‘Only the Paranoid Survive‘. The book deals with the way Grove ran Intel, and includes the concept of the Strategic Inflection Point, also described by Grove in his 1998 speech at the Academy of Management.

Strategic Inflection Point, copied from Mary's sheets

Grove describes the Inflection Point:

A Strategic Inflection Point is that which causes you to make a fundamental change in business strategy.

Anyone who’s been keeping up with the discussion on the Lean Start-up will see the similarity with Eric Ries’ concept of the Pivot:

“A structured course correction designed to test a new fundamental hypothesis about the product, strategy, and engine of growth.” – Eric Ries, The Lean Startup

The language is a bit different, of course, but there are obvious places of overlap. Grove’s Inflection Point has a main emphasis on external factors changing, necessitating change from the side of the company.

A major change due to introduction of new technologies. A major change due to the introduction of a different regulatory environment. The major change can be simply a change in the customers’ values, a change in what customers prefer.

A Pivot can be seen as a reaction to encountering a Strategic Inflection Point. Pivots also happen in search of the correct strategy, or product/market fit, which makes them more active, and Ries’ structured approach to identifying the need for a pivot seems to be exactly what Grove is looking for.

The biggest difficulty with Strategic Inflection Points is telling one from the many changes that impinge on you in the business. How do you know if a change is just a garden variety change or qualified to be this monumental, catastrophic change category that we call an Inflection Point? I could never come up with a particularly satisfactory answer to that question. And I’m quite sure that if I could, I wouldn’t be talking about them. A set of principles along those lines would be extremely helpful as a competitive weapon because we deal with changes every day.

Ries’ innovation accounting provides the tools necessary to identify that Inflection Point. Pirate Metrics, as coined by Dave McClure, can provide an early warning that the strategy for a product is not, or no longer, working. And there are different variants possible for different market situations.

Innovation accounting is the structured approach of doing cohort testing to determine the viability of a product’s engine of growth. This is not specific to a start-up. You can track Acquisition, Activation, Retention, Referral and Revenue for any product, whether in a small start-up trying to find the right product and sales model or an existing company that needs to track the health of existing products and models.

So if you’re worried about running into a Strategic Inflection Point for your business, this is what you do: start keeping (actionable) metrics on how your products are doing, what the growth rates are for their engines of growth, make those numbers clear and visible for everyone in your company, from the C-suite to the janitors, and if you do see a need to pivot, do so with clearly defined hypotheses, and check the results.

Turning it up to 11

It’s odd how I’ve been unable to be very consistent in my subject matter for this blog. I tend to hop around, going from very technical subjects to very organisational ones. Some might see this as lacking focus. Maybe that’s true. I’ve never been able to separate execution from organisation and vision very well. To me they seem intrinsically linked. It’s comforting to me that even such luminaries as Kent Beck also seem to see things in this light.

If I look at my bifurcated (tri-? n-?) interests, I see a striking resemblance in the states of technological, managerial and commercial maturity in the world. In all of these areas, the state of affairs is abysmal. In all three areas, we seem to have recognised that this is the case. In all three, though, most people performing those roles are so used to the current state that only rarely do they see that a different approach could bring improvement. Could turn their work ‘up to 11’. There are some differences, though.

Technology

On the technology side, we’ve pretty much identified what works, and what doesn’t. Basically, XP got things right. Others before that also hit the right spot, but we know a mature team sticking to XP practices will not mess things up beyond salvation. If we compare that approach to what one finds in the run-of-the-mill waterfall situation, the differences are so great that there is truly no comparison. There are other questions still at least partially open, but most of those are concerned with scale, organisation, and finding out what should be built. And thus belong in the other categories. The main challenge is one of education. And, granted, a bit of proselytising.

Commercial

More commercial questions are less clear-cut, at least for me. In my work I’ve very rarely seen commercial, product development and marketing decisions taken with anything resembling a structured approach of any kind of rigour. A business case, if one is available at all, is often only superficial, and almost never comes with any defined metrics and decision moments. The Lean Start-up movement is the only place I’ve seen that is trying to improve on that. Taking this approach out of the start-up and into all the product development and marketing departments in the world is going to take a while, but it will happen. If only because companies capable of doing that will completely out-perform the ones that don’t.

I don’t think the case here is as clear cut as on the technology side, but we have a start. The principles of the Lean Start-up are based on the same ideas as Agile development: know what you want the result to be (validated learning) and iterate using short feedback loops. What to do, exactly, in those feedback loops is known for some types of learning, in some situations, but we’re still working on expanding our knowledge and skills in this area.

Management

As the solutions for the commercial and technical sides of things are rooted in experimentation and short feedback cycles, one might assume that the same would be true for the management side of things. And it’s true that those techniques have value in management in many situations. Many of the ideas on management are based on feedback cycles: Lean/Deming’s PDCA, for instance, but also Cynefin‘s way of dealing with systems in the complex domain. But we do seem to have many different ideas about how management should be done, how organisations should be structured and what gives people the best environment to work in.

One place where some of these ideas have come together is the Stoos Network. It’s interesting because of the different backgrounds of the people involved: Agile, Beyond Budgeting, Radical Management, industry leaders. Their initial get-together this year resulted in a shared vision, with again an emphasis on learning.

“Organizations can become learning networks of individuals creating value, and the role of leaders should include the stewardship of the living rather than the management of the machine.” — Stoos Communique

This clearly expresses some of the shared values of the Stoos people, but still leaves quite a lot to the imagination. The people and ideas involved are interesting enough that I’ve volunteered to help organise one of the follow-up meetings, the ‘Stoos Stampede’, which takes place in Amsterdam, 6 and 7 July.

Next to Stoos, as I said before, there are many ideas on how to change management. Lean has had an impact, but though the Toyota Way certainly does talk about people and how to support them in an organisation, this is not the prime focus of most Lean implementations. CALM has started talking about combining Complexity, Agile and Lean ideas, but so far has not posted any results. We’re still a bit lost at sea, here.

So what would we need from a new management philosophy?

  • We’d need to know how to structure an organisation. Stoos clearly think the current semi-hierarchical default is not workable for the future, or at the very least severely suboptimal. But what do ‘learning networks’ look like? And how do we grow them?
  • We’d need to know how to provide the organisation with a purpose. A Mission, a Vision, a Goal. Whatever you want to call it. Most organisations do have some sort of mission statement, but it is usually so far removed from the everyday practice of everyone working within the organisation that it might as well be absent.
  • We’d need to know how to connect that purpose to the rest of the organisation. How do we link the work of everyone in the organisation to its stated purpose? If the mission is specific this should be possible. But if we connect the work too tightly, it could be stifling.
  • We’d need to know how to connect the organisation with its customers, its suppliers, its partners. This would be different out of necessity, as the structure of the organisation itself is different. It would also be different out of philosophy, as those relations take on a different meaning when the organisation’s goals beyond the monetary rise in importance.
  • We’d need to know how to align such organisations with the demands of the outside marketplace and governance. If the organisation is more oriented towards longer term viability and purposeful behaviour, this might have a good long term effect on profitability, but will certainly in the short term have a different financial behaviour. And budgeting and bookkeeping are areas that need very specific attention with an eye on the external rules these subjects need to comply with.

But apart from what new management would do to the idea of an organisation, there are also questions related more to the question of how to get there from here. Why would current managers want to change their organisations? Why would they want to change so drastically? There are plenty of reasons, but would they be convincing to the current CxO? What would they need to learn to be able to execute on such a vision? Will everyone enjoy working in these kinds of more empowering organisations, or will some people prefer something more hierarchical?

All of these things I want to know. Some of them we’ll discuss during the Stoos Stampede (propose a subject to discuss!), but personally I think we’re still at the very earliest stages of this particular change. In the meantime, we do have a few good examples, and some patterns that seem to work, and I’m going to try and get a few more organisations turned up to 11.

Success

I recently wrote here about the benefits of failure. One of my recent failures reminded me about the importance of success. I thought that to be nicely circular enough to warrant a new post!

You see, while it’s important to embrace failure – how else are you going to learn? – it is just as important that you don’t set yourself up for failure too often.

One very popular way that agile teams do set themselves up for failure is by taking in too much work in a sprint. True, this wouldn’t be too bad if failure was better accepted: we’d just recognise we were not going to make it, and drop some stories from the sprint. And learn to take in less work for the next sprint. But this is not what happens. What happens is that we try to succeed anyway. And that we don’t stick to our own rules of work when doing so. So we work late (making the whole concept of velocity useless, a way I missed in my previous list of ways to do that), or we reduce quality. Or both, usually.

But here I am getting stuck on failure again. We were talking about success!

When a team is starting out with agile, they are bound to run into many issues. The short feedback cycles ensure that they’ll be confronted with all kinds of ways in which their current process is suboptimal. This is not always a nice experience and if it is not mitigated by successes in improving matters, a team can become discouraged, and let their agile experiment die off.

In the recent situation I was referring to, this is what happened. The team had quite a good grasp of what was wrong in their process. In fact, during our initial ‘Scrum Introduction’ workshop some clear issues came up. We always do such a workshop in the form of an extended retrospective, interlaced with some theory and games. In that retrospective part, one of the top issues that came up was that of team stability. Teams were reconfigured for every project, but people were also regularly reassigned in a running project. Some further questioning quickly unearthed frequent quality issues in both requirements and code, causing people to be needed for projects in trouble after release or in ‘user acceptance testing’.

Note: UAT is not supposed to be a process where your users are the first people to test your code!

So, knowing that the underlying issue was one of quality we started by focusing on quality. Can’t go wrong, right? But the feedback loop from the quality issues to the ‘reassignment’ issue was very long. Also, since the reassignment was to other teams, that were already in trouble, the improvements we could make had no possible influence on the stability of the team. And so, when during a planning meeting for the third sprint the third person was ‘temporarily’ moved to another team, we figured out that we had set this team up to fail.

So what should we have done? ‘We’ being me as a coach, or the team involved in a – management approved – change process. We should have gone to the company management before starting any sprints (so right after that retro/workshop) and asked for their support, guaranteeing that this team would be left intact for the duration of the project. We should at the same time have advised working on the widespread quality issues.  I should add that it would have been exceedingly unlikely that that broader support would have been given, but then at least it would have saved everyone some work and more importantly, disappointment.

The lesson here is that when you find your impediments, you should make them visible to the wider environment, and make sure that you actively work to remove them in as short a time as is possible. This most definitely includes involving management, and will underline their support for the agile transition in progress. Not doing that is denying the team a chance to experience success. And that’s a great way to sabotage your change process.

On Discipline, Feedback and Management

Change is hard. If we know that about 80% of organisational change programs fail, then it’s easy to appreciate just how hard. Why is that? And, more importantly, what can we do to make it easier?

Recently, I saw a tweet come by from Alan Shalloway. He wrote that, back in the day, people were saying: Waterfall (though they didn’t call it that back then, I think) is fine, people are just not doing it right. All we need is to apply enough discipline, and a little common sense, and it will work perfectly! I think Alan was commenting on an often-heard sound in the Agile community about failing Scrum adoptions: you’re just not doing it right! You need to have more discipline!

This is, actually, a valid comment. In fact, both views are valid. On the one hand, just saying that people are not doing it right is not very helpful. Saying you need to be more disciplined is certainly not helpful. (Just look at the success rate of weight-loss programs, or abstinence programs against teen pregnancies.) Change is hard, because it requires discipline. Any process requires discipline. The best way to ensure a process is successfully adopted is to make sure the process supports discipline. This is, again, hard.

This post talks about discipline. About how it can be supported by your process. About how it’s often not supported for managers in Agile organisations, and then of course how to ensure that management does get that kind of support in their work.

In Support Of Discipline

Take a look at Scrum, throw in XP for good measure, and let’s have a look at what kind of discipline we need to have, and how the process does (or does not!) support that discipline.

Agile Feedback Loops

The eXtreme Programming practices often generate a lot of resistance. Programmers were, and are, very hesitant to try them out, and some require quite a bit of practice to do well. Still, most of these practices have gained quite a wide acceptance. Not having unit testing in place is certainly frowned upon nowadays. It may not be done in every team, but at least they usually feel some embarrassment about that. Lack of Continuous Integration is now an indication that you’re not taking things seriously.

Working consistently with TDD, and Pair Programming, have seen slower adoption.

Extreme Programming Practices from XProgramming.com

If we look at some practices from Scrum, we can see a similar distribution: some things that are popularly adopted, and some things that are… less accepted. The Daily Stand-Up, for instance, can usually be seen in use, even in teams just getting started with Scrum. Often, so is the Scrum Board. The Planning Meeting comes next, but the Demo/Review, and certainly the Retrospective, are much less popular.

Why?

All of these practices require discipline, but some require more discipline than others. What makes something require less discipline?

  • It’s easy!
  • Quick Feedback: It obviously and quickly shows it’s worth the effort
  • Shared Responsibility: The responsibility of being disciplined is shared by a larger group

Let’s see how this works for some of the practices we mentioned above. The Daily Stand-Up, for instance, scores high on all three items. It’s pretty easy to do, you do it together with the whole team, and the increased communication is usually immediately obvious and useful. Same for the Scrum Board.

The Planning Meeting also scores on all three, but lower on the first two. It’s not all that easy, as the meetings are often quite long, especially at the start. And though it’s obvious during the planning meeting that there is a lot of value in the increased communication and clarity of purpose, the full effect only becomes apparent during the course of the sprint.

The Demo is also not all that easy, and its full effect can take multiple sprints to become apparent. Though quick feedback on the current sprint will help the end result, and the additional communication with stakeholders will benefit the team in the longer run, these are mostly longer-term advantages. Exacerbating this, responsibility for the demo is often pushed onto one member of the team (often the Scrum Master), which can make the rest of the team give it less attention than is optimal.

Retrospectives are, of course, the epitome of feedback. Or they should be. Often, though, this is one of the less successful practices in teams new to Scrum. The reasons for that are surprisingly unsurprising: there is often no follow-up on items brought forward in the retrospective (feedback on the feedback!), solving issues is not taken up as a responsibility for the whole team, and quite a few issues found are actually hard to fix!

The benefits of Continuous Integration are usually quickly visible to a development team. Often, they’re visible even before the CI is in use, as many teams will be suffering from ‘integration issues’ that come out late in the process. It’s not all that hard to set up, and though one person can set it up, ‘not breaking the build’ is a responsibility certainly shared by the whole team.

Unit testing can be very hard to get started with, but again the advantages *if* you use it are immediately apparent, and provide value for the whole team. Refactoring? Well, anyone who has refactored a bit of ugly code can attest to how nice it feels to clean things up. In fact, with refactoring the problem is often ensuring that teams don’t drop everything just to go and refactor everything… Still, additional feedback mechanisms, like code statistics using Sonar or similar tools, can help in the adoption of code cleaning.
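
To make that concrete, here is a minimal sketch of how a unit test doubles as a safety net for refactoring. The class, test names, and discount rule are all hypothetical, and I’m assuming JUnit 4 on the classpath:

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Hypothetical production code: a small, testable unit.
    class DiscountCalculator {
        // Orders of 100.00 or more get a 10% discount.
        double finalPrice(double orderTotal) {
            if (orderTotal >= 100.00) {
                return orderTotal * 0.90;
            }
            return orderTotal;
        }
    }

    // The tests give the whole team immediate feedback on the behaviour.
    public class DiscountCalculatorTest {
        @Test
        public void appliesDiscountAtThreshold() {
            assertEquals(90.00, new DiscountCalculator().finalPrice(100.00), 0.001);
        }

        @Test
        public void chargesFullPriceBelowThreshold() {
            assertEquals(99.00, new DiscountCalculator().finalPrice(99.00), 0.001);
        }
    }

As long as those tests stay green, anyone on the team can clean up the implementation without fear: exactly the kind of quick, shared feedback described above.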

Pairing shows quick feedback, and is shared by at least one other person, but it’s often very hard, and has to deal with something else: management misconceptions. We’ll talk about that a little later. TDD‘s advantages are more subtle than those of simply testing at all, so the benefit is less obvious and less quick. It’s also a lot harder, and has no group support. These practices do reinforce each other, though: testing, TDD, and refactoring are all easier to do when pairing.
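
For anyone unfamiliar with the rhythm that makes TDD harder to pick up, here’s a hedged sketch of one red-green-refactor cycle. The example problem and names are mine, again assuming JUnit 4:

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Red: write a failing test first. This doesn't even compile
    // until the Wordifier class below is written.
    public class WordifierTest {
        @Test
        public void spellsOutSmallNumbers() {
            assertEquals("two", Wordifier.spell(2));
        }
    }

    // Green: write the least code that makes the test pass.
    class Wordifier {
        static String spell(int n) {
            String[] words = {"zero", "one", "two", "three"};
            return words[n];
        }
    }

    // Refactor: with the bar green, improve the design safely,
    // then repeat the cycle with the next failing test.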

So we can see at least some correlation between adoption and the way a practice supports discipline. This makes a lot of sense: the easier something is to do, the more obvious its benefits, and the more its burdens are shared by a group, the better it supports its users, and the more it is adopted.

Feedback within the team

One large part of this is: feedback rules. This is not a surprise for most Agile practitioners, as it’s one of the bases of Agile processes. But it is good to always keep in mind: if you want to achieve change, focus on supporting it with some form of feedback. One form of feedback used in e.g. Scrum and Kanban is the visualisation of the work. Use of a task board, or a Kanban board, especially a real, physical one, has a remarkable effect on the way people do their work. It’s still surprising to me how far the effects of this simple device go in changing behaviour in a team.

Seeing how feedback and shared responsibility help in adoption of practices within the team, we could look at the various team practices and find ways to increase adoption by increasing the level of feedback, or sharing the responsibility.

The Daily Stand-Up can be improved by emphasising shared responsibility: rotating the chore of updating the burn-down, and avoiding a reporting-to-the-Scrum-Master feel by passing around a token.

The Planning Meeting could be made easier by using something other than Planning Poker if many stories need to be estimated, or by moving estimation to separate ‘Grooming’ sessions. Feedback could come earlier by ensuring Acceptance Criteria are defined during the planning meeting (or before), so we get feedback for every story as soon as it gets picked up, as in the sketch below. Or we can stop putting hourly estimates on tasks to make the meeting go by quicker.
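
As an illustration, an acceptance criterion agreed in planning can be written down as an executable test before anyone starts on the story. Everything here is hypothetical (the story, the service class, and its methods): a sketch of the idea, not a prescribed tool.

    import java.util.HashMap;
    import java.util.Map;
    import org.junit.Test;
    import static org.junit.Assert.assertTrue;

    // Minimal stub so the example runs; a real implementation would
    // obviously do much more.
    class PasswordResetService {
        private final Map<String, Long> issuedAtMinutes = new HashMap<>();
        private long clockMinutes = 0;

        String issueResetToken(String email) {
            String token = email + ":reset-token";
            issuedAtMinutes.put(token, clockMinutes);
            return token;
        }

        void advanceClock(long minutes) { clockMinutes += minutes; }

        boolean isExpired(String token) {
            return clockMinutes - issuedAtMinutes.get(token) > 60;
        }
    }

    // Acceptance criterion from planning: "a reset link older than
    // one hour is rejected", written as a test up front.
    public class PasswordResetAcceptanceTest {
        @Test
        public void resetLinkExpiresAfterOneHour() {
            PasswordResetService service = new PasswordResetService();
            String token = service.issueResetToken("user@example.com");
            service.advanceClock(61); // given a link issued 61 minutes ago...
            assertTrue(service.isExpired(token)); // ...following it fails
        }
    }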

Retrospectives could be improved by creating a Big Visual Improvement backlog, and sticking it to the wall in the team room. And by taking the top item(s) from that backlog into the next sprint as work for the whole team to do. If it’s a backlog, we might as well start splitting those improvements up into smaller steps, to see if we can get results sooner.

All familiar advice for anyone who’s been working Agile! But what about feedback on the Agile development process as it is experienced by management?

Agile management practices

Probably the most frequently cited reason for failure of Agile initiatives is lack of support from management. This means that an Agile process requires changes in the behaviour of management. Since we’ve just seen that such changes in behaviour require discipline, we should have a look at how management is supported in that changed behaviour by feedback and sharing of responsibility.

First, though, it might be good to inventory what the recommended practices for Agile Managers are. As we’ve seen, Scrum and XP provide enough guidance for the work within the team. What behaviour outside the team should they encourage? A lot has already been written about this subject. This article by Lyssa Adkins and Michael Spayd, for instance, gives a high-level overview of responsibilities for a manager in an Agile environment. And Jurgen Appelo has written a great book about the subject. For the purposes of this article, I’ll just pick three practices that are fairly concrete, and that I find particularly useful.

Focus on quality: As was also determined to be the subject requiring the most attention at the 2011 reunion of the authors of the Agile Manifesto, technical excellence is a requirement for any team that wants to be Agile (or just to keep delivering software…) for the long term. Any manager dealing with an Agile team should keep stressing the importance of quality, in all its forms, above speed. If you go for quality first, speed will follow. The inverse is also true: if you don’t keep quality high, you are assured of slowing down until nothing gets done.

Appreciate mistakes: Agile doesn’t work without feedback, but feedback is pretty useless without experimentation. If you want to improve, you need to try new things all the time. A culture that makes a big issue of mistakes and focuses on blame will smother an Agile approach very quickly.

Fix impediments: The best way to make a team feel their own (and their work’s) importance is to take anything they consider to be getting in the way very seriously. Making the prioritised list of impediments that you as a manager are working on visible, and regularly reporting on their progress, is a great way of doing that.

Note that I’ve not talked about stakeholder management, portfolio management, delivering projects to planning, or negotiating with customers. These are important, but are more in the area of the Product Owner. The same principles should apply there, though.

Let’s see how these things rate on ease, speed of feedback, and shared responsibility.

Focus on Quality. Though stressing the importance of quality to a team is not all that difficult, I’ve noticed that it can be quite difficult to get the team to accept that message. Sticking to it, and acting accordingly in the presence of outside pressures, can be hard to do. Feedback will not be quick. Though improvements may be noticeable within the team, from the outside a manager will have to wait for feedback from other parties (integration testing, customer support, customers directly, etc.). If the Agile transition is a broad organisational initiative, the manager can find support and shared responsibility among his peers. If that’s not the case, the pressure from those peers could be in quite a different direction…

Appreciate mistakes. Again, this practice is one in which gratification is delayed. Some experiments will succeed, some will fail. The feedback from the successful ones will have to be enough to sustain the effort. The same remarks as above can be given with regards to the peer support. Support from within a team, though less helpful than from peers, can be a positive influence.

Fix impediments. The type of impediments that end up on a manager’s desk are not usually the easy-to-fix ones. Still, the gratification of removing problems makes this a practice well supported by quick feedback. And there is usually both gratitude from the team and respect for action from peers.

We can see that these management practices are much less supported by group responsibility and quick feedback loops. This is one of the reasons why managers are often the point where Agile transitions run into problems. Not because of a lack of will (we all lack sufficient willpower :-), but because any change requires discipline, and discipline needs to be supported by the process.

Management feedback

If we see that some important practices for managers are not sufficiently supported by our process, then the obvious question is going to be: how do we create that support?

We’ve identified two crucial aspects of supporting the discipline of change: early feedback and shared responsibility. Neither is a natural fit in the world of management. Managers usually do not work as part of a closely knit team. They may be part of a management team, but the frequency and intensity of communication there is mostly too low to have a big impact. Managers are also expected to think of the longer term. And they should! But this does make any change more difficult to sustain, since the feedback on whether the change is successful is too far off.

By the way, if that feedback is that slow, there is also the risk that an unsuccessful change is kept for too long. That might be even worse…

When we talk about scaling Agile, we very quickly start talking about things such as the (much debated) Scrum of Scrums, ‘higher level’ or ‘integration’ stand-up meetings, Communities of Practice, etc. Those are good ideas, but they are mostly seen as instruments to scale Scrum to larger projects, and to align multiple teams to project goals (and to each other). These practices help keep transparency over larger groups, but also through hierarchical lines. They help alignment between teams, by keeping other teams up-to-date on progress and issues, and help arrange communication on specialist areas of interest.

What I’m discussing in this post suggests that those kinds of practices are not just important in the scenario of a large project scaled over multiple teams, but are just as important for teams working separately within their surrounding context.

So what can we recommend as ideas to help managers get enough feedback and support to sustain an Agile change initiative?

Work in a team. Managers need the support and encouragement (and social pressure…) that comes from working in a team context as much as anyone. This can partly be achieved by working more closely together within a management team. I’ve had some success with a whole C-level management team working as a Scrum team, for instance.

Be close to the work. At the same time, managers should more directly follow the work of the development teams. Note that I said ‘following’. We do not want to endanger the team’s self-organisation. I think a manager should be welcome at any team stand-up, and should also be able to speak at one. But I do recognise that there are many situations where this has led to problems, such as undue pressure being put on a team. A Scrum of Scrums, or multi-level stand-up approach, can be very effective here. Even if there’s only one team, having someone from the team talk to management one level up, daily, can work well.

Visualise management tasks. Managers can not only show support for the transition in this way, they can immediately profit from it themselves. Being very visible with an impediment backlog is a big help, both to get the impediments fixed, and showing that they’re not forgotten. Starting to use a task board (or Kanban, for the more advanced situations) for the management team work is highly effective. And if A3 sheets are being put on the wall to do analysis of issues and improvement experiments, you’re living the Agile dream…

Visualise management metrics. Metrics are always a tricky subject. They can be very useful, even necessary, but can also tempt you to manage them directly (Goodhart’s Law). Still, some metrics are important to managers, and those should be made important to the teams they manage. Visualising is a good instrument to help with that. For some ideas, read Esther Derby’s post on useful metrics for Agile teams. Another aspect of management, employee satisfaction, could be continuously visualised with Henrik Kniberg’s Happiness Matrix, or Jim Benson’s refinement of it.