From Here to Continuous Delivery

Situation Normal

There’s a clear pattern for software development. A pattern of lost opportunity.

In most, if not all, of the places where I'm called in, the basic question deals with an inability to deliver. Management sees that the plans they have are simply not going to be realised.

Business opportunities are lost waiting. Waiting for the next available spot in the product roadmap. Waiting for the development team to finish 'stabilizing' the system. Waiting for a lengthy 'refactoring' phase to complete. Waiting for new servers to be delivered (in only six weeks!). Waiting for a PMO organisation to complete project initiation and assign relative priority. Waiting for development to complete coding. Waiting for a testing phase to complete. Waiting for management to analyse long lists of known issues and risks so they can decide whether a release is possible.

Anyway, there’s waiting involved.

As with any interesting problem, this one doesn’t have a single identifiable cause. The marketing department will blame the development team for being too slow. The development team will blame the marketing team for not knowing what they want. The development manager will blame his team for being slow and writing buggy code and all his stakeholders for not being realistic. Upper management will blame the marketing and development managers for being slow and not delivering.

They are, all of them, right.

And you can't point a finger at one root cause. Yes, development messed up and wrote crappy, unmaintainable code. Yes, the business focused too much on short-term gains and put too much pressure on development to deliver early. Yes, management should have focused on mission and strategy so the rest of the company could have managed scope. Yes, all were too hasty in hiring new people when the pressure was on, and too much incompetence entered the company.

I’m willing to bet you’ve heard most of those complaints, made a few of them, and been the subject of others.

How do we get out of this vicious circle?

Step one: Fix execution

Stop. You're trying to do too many things at once. The first thing that needs to be done is to get your technical house in order. As long as you can't deliver a working, tested system at the drop of a hat, you'll always be too slow.

So now, immediately, start changing your technical practices to support better quality. Deploy automatically, test automatically, test everything, and test in all manner of ways. Change your architecture to support quick change and better practices. And do all that while still delivering value.

That sounds difficult. It is. But it’s possible. I’ve done it. Others have.

You will slow down a little, initially. But you can use architectural changes, such as a Strangler Pattern or Branch by Abstraction, to quickly start over without throwing all your existing systems away.

And focus on quality. This is hard. Management needs to be extremely explicit about this. Technical teams are used to a focus on progress and speed, and will automatically revert to those and subvert the quality of your system whenever they perceive pressure.

Focus on quality

Add all the elements that give you more control over your systems. That means fully automated deployment, including the automated testing you need to be confident enough to send every developer push to production. Then make sure that actually happens.

Introduce feature toggles, so that your decision to supply a new feature to end-users becomes exactly that: a decision, no longer tied to your release cycle.
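
As an illustration, a minimal toggle can be nothing more than a lookup in configuration that lives outside the code. The sketch below uses a hypothetical feature name and a hypothetical JSON file; real toggle systems usually read from a config service or database so a toggle can be flipped without a deploy.

```python
# Minimal feature-toggle sketch: shipping the code and releasing the feature
# become two separate decisions. All names here are illustrative only.
import json

def show_social_reading_ui():
    print("new social reading UI")     # placeholder for the new feature path

def show_classic_ui():
    print("existing UI")               # placeholder for current behaviour

def load_toggles(path="feature_toggles.json"):
    """Toggles live outside the code, so flipping one needs no deploy."""
    try:
        with open(path) as f:
            return json.load(f)        # e.g. {"social_reading": true}
    except FileNotFoundError:
        return {}                      # unknown toggles default to 'off'

def is_enabled(feature, toggles, default=False):
    return bool(toggles.get(feature, default))

toggles = load_toggles()
if is_enabled("social_reading", toggles):
    show_social_reading_ui()
else:
    show_classic_ui()
```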

Measure everything.

Measure the results of new functionality on your business, whether that's feature usage for an internal system or the full set of Pirate Metrics for your product website.

Step two: Fix alignment

Now that you’re able to deliver, it’s time to start making use of the opportunities that gives you.

You already know, now, how to measure the effects new features have on the use of your product. For some, that is already a direct link to the money being made by the product. For the funnel on a product website or web-shop, we can calculate the revenue increase (or decrease) from a change. Other types of applications need a little more work and imagination, but we can certainly get to some measure of value.

If you can’t measure the effects of your work, you can be sure you are not doing the right things.

But that is all still a bottom-up approach. Effective for short term goals, but potentially dangerous if the metrics we use aren’t aligned with longer term business goals.

If you have your mission and vision defined for your company, and there's a strategy that you expect will bring you there, you should now spend some time hammering out a small set of actionable metrics that you can use to prioritize opportunities on a day-to-day basis. The post linked to above shows a way to determine that, based on Gojko Adzic's excellent Impact Mapping.

You pick a very few metrics. In fact, you should aim for that One Metric That Matters. This is your compass, steering the whole of the company. Be careful that you don't have hidden metrics that undermine it, for instance in a target/bonus system.

This OMTM should be permanently visible to everyone. It should be continuously and automatically measured and updated. And it should be directly coupled to the various day-to-day activities of everyone in your company.
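
As a rough sketch of what "continuously and automatically measured" could mean, assuming the OMTM is something like the average time between purchases used later in this post; the data source and the dashboard call are stand-ins for whatever you actually use.

```python
# Sketch: recompute the One Metric That Matters on a schedule and push it to a
# dashboard. The query, the metric and the push target are assumptions.
import time

def fetch_purchase_intervals():
    """Stand-in for a query against your own analytics store."""
    return [88, 95, 101, 79, 92]        # days between consecutive purchases

def compute_omtm(intervals):
    """Example OMTM: average days between purchases (lower is better)."""
    return sum(intervals) / len(intervals)

def push_to_dashboard(value):
    # Replace with a call to your own dashboard or wallboard API.
    print(f"OMTM (avg days between purchases): {value:.1f}")

while True:
    push_to_dashboard(compute_omtm(fetch_purchase_intervals()))
    time.sleep(60 * 60)                 # refresh hourly
```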

On the level of product development, all your priorities should be determined by the impact on your OMTM.

For most companies, this level of focus and clarity of purpose is far off. It requires clear vision and leadership. And will transform your organisation.

When are you getting started?

Scaling Agile with Set-Based Design

I wrote a while back about set-based design, and just recently about a way to frame scaling Agile as a mostly technical consideration. In this post I want to continue with those themes, combining them in a model for scaled agile for production and research.

Scale

In the previous post, we found that we can view scale as a function of the possibilities for functional decomposition, facilitated by a strong focus on communication through code (customer tests, developer tests, simple design, etc.).

This will result in a situation where we have different teams working on different feature-areas of a product. In many cases there will be multiple teams working within one feature area, which can again be facilitated through application of well known design principles, and shared code ownership.

None of this is very new, and it can be put squarely in the corner of the Feature Team way of working. It's distinguished mainly by a strong focus on communication at the technical level, and, using all the tools we have available for that, this can scale quite well.

Innovation

The whole thing starts getting interesting when we combine this sort of set-up with the ideas from set-based thinking to allow multiple teams to provide separate implementations of a given feature that we’d like to have. One could be working on a minimum viable version of the feature, ensuring we have a version that we can get in production as quickly as possible. Another team could be working on another version, that provides many more advantages but also has more risk due to unknown technologies, necessary outside contact, etc.

This parallel view on distributing risk and innovation has many advantages over a more serial approach. It allows for an optimal use of a large development organization, with high priority items not just picked up first, but with multiple paths being worked on simultaneously to limit risk and optimize value delivered.

Again, though, this is only possible if the technical design of the system allows it. To effectively work like this we need loosely coupled systems, and agreed upon APIs. We need feature toggles. We need easy, automated deployment to test the different options separately.

Pushing Innovation Down

But even with all this, we still have an obvious bottleneck in communication between the business and the development teams. We are also limiting the potential contributors to innovation with the top-down structure of a product owner filling a product backlog.

Even most agile projects have a fairly linear view of features and priorities. Working from a story map is a good first step in getting away from that. But to really start reaping the benefits of your organisation's capacity for innovation, one has to take a step back and let go of some control.

The way to do that is by making very clear what the goals for the organisation are, and for larger organisations what the goals for the product/project are. Make those goals measurable, and find a way to measure frequently. Then we can get to the situation below, where teams define their own features, work on them, and verify themselves whether those features indeed support the stated goals. (see also ‘Actionable Metrics at Organisational Scale‘, and ‘On Effect Mapping and Pirate Metrics‘)

This requires, on top of all the technical supporting practices already mentioned, that the knowledge of the business and the contact with the user/customer is embedded within the team. For larger audiences, validation of the hypothesis (that this particular, minimum viable, feature indeed serves the stated goals) will need to be A/B tested. That requires an even more advanced infrastructural setup.
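
The core of such an A/B setup is deterministically assigning each user to a variant and then comparing the goal metric per group. A minimal sketch of the assignment side; the experiment name, the 50/50 split and the hashing scheme are illustrative choices, not a prescription.

```python
# Sketch: stable A/B assignment by hashing the user id, so a user always sees
# the same variant. Experiment name and split are illustrative assumptions.
import hashlib

def variant_for(user_id, experiment="social_reading_mvp", split=0.5):
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF   # uniform value in [0, 1]
    return "B" if bucket < split else "A"

# Each team can then compare its goal metric (e.g. days between purchases)
# between the users who saw variant A and those who saw variant B.
print(variant_for("user-1234"))
```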

All this ties together into the type of network organisations that we’ve discussed before. And this requires a lot of technical and business discipline. No one ever said it was going to be easy.

Actionable Metrics at Organizational Scale

I recently chaired a session on ‘Going from company vision to Actionable Metrics‘ at the Stoos Stampede conference in Amsterdam. In that session I tried to show some ideas on making the link from an overall company vision, through different approaches to achieve that vision, to concrete actionable metrics allowing teams within a company to autonomously pursue steps towards making that vision a reality. I’m not sure I succeeded in all of that in the session, so I’m trying again in this post…

Autonomy

A goal of a lean enterprise is to ensure that the people doing the work have all the information, knowledge and skills necessary to make decisions in their day-to-day work. For a lean knowledge organisation that means that people don’t just need to know their own work field well, they need to be able to relate the decisions they make every day to the longer-term goals and vision of the organisation.

Much has been said about supporting high levels of motivation and customer focus within companies. Especially in larger companies this is quite hard to sustain, which is not surprising with works such as Dan Pink's Drive emphasising the importance of autonomy for the knowledge worker. Ensuring the right information and a quick feedback loop for knowledge workers is key to motivated, high-performing people.

Networks

Such autonomy can’t easily be achieved in a classically structured hierarchical organisation. The siloes inherent in that type of structure are natural barriers. Barriers to the autonomy of action where the distribution of necessary skills and knowledge over separate departments is an impediment to producing work and serving the customer. Barriers as well to the autonomy of reaction where the feedback loop on whether an action was in any way effective in reaching the goals of the organisation or not is too long, or absent.

An organizational structure much more compatible with that goal of autonomy is that of a network organisation. The basic concept of a network organisation is that of independently working cross-functional teams that gather each other's support where necessary but generally are able to make their own decisions. Enabling them to make their own decisions is the subject of this post. These are the type of organizations that the Stoos Network is considering as the preferred replacement for today's dysfunctions.

Actionable Metrics

The Lean Startup concept of Actionable Metrics (in order to create Validated Learning) is a great way to give a team the necessary autonomy to work independently towards the right goals. In a startup those metrics can be very directly linked with the goals of the company. In larger organisations there is a need for a clear link between the overall company vision and Actionable Metrics that are usable at the team level.

An actionable metric is one that ties specific and repeatable actions to observed results. — Ash Maurya, http://www.ashmaurya.com/2010/07/3-rules-to-actionable-metrics/

In this post I'll be using an Effect Map as the method to link the vision to specific metrics, but other methods exist of course. During the session, Catherine Louis mentioned GQM as a method designed to determine which metrics to use. This paper gives some more background on GQM. The GQM method seems mostly concerned with determining the right metric for any given goal or problem, and can as such be very useful within the type of context I'm talking about. Another approach at determining the metrics you need is the A3 method.

The nice thing about Effect Maps is that they are very inclusive, and involve different roles and functions in their creation. This fits well with the multi-functional teams in our target organisation. They’re also easy to scale, using a diverge and merge facilitation process, so you can easily work on this with larger groups with full participation.

We’re on a mission from…

The first order of business is determining why we're here. Not in a metaphysical way. I don't really have the patience for that. In a 'What are we trying to do as a company?' way. A company's vision and mission statements should provide us with a good starting point here. A vision statement could be "A literate future," with a mission statement of "More readers, more books."

This is of course very generic, and a subsequently generated Effect Map could go all over the place:

Effect Map Example

One thing we always need to add to the 'Why?' part of the Effect Map is a concrete, measurable goal. In this case, that could be encouraging people to read more books, going from a current estimate of 100 books in a 'lifetime' (30 years, apparently, in the poll we got that figure from) to 1,000.

Our company could encourage people to read more books in many different ways. The Effect Map shows various directions: working through publishers, changing business models, working with public libraries, promoting reading in schools, making books cheaper, working with writers directly instead of through publishers, and some ways of helping people find the right books through technology.

Since we are a technology company, those last ones seem the most relevant to start with. A larger company would probably start exploring some of the other possibilities as well, and perhaps be able to integrate those with the technology work. That could mean incorporating different sales models into the e-reader software. Or creating a second-hand e-book market in there. Or something. Plenty of opportunities!

Getting to actionable metrics

How do we go from such a generic goal (people read 10x more books in their lifetime!) to actionable metrics that can be used by the multi-functional teams our network organization comprises? These teams need to be able to use those metrics in their day-to-day decision making. They need to be able to devise experiments, prioritise work, and navigate towards products and solutions without the type of top-down supervision that characterizes the more traditional organization.

First of all you need a baseline. Say we have a product through which people can read books: e-reader software (I told you we were a tech company). From that software we could gather statistics on the number of books people read. To do this well, we'd probably need to track this relative to how long customers have been using our software, so we don't get skewed figures from early enthusiasm (for instance). The term to look for is cohort testing. In our example, it turns out people are buying, on average, one book every three months. To get to the goal of 10x more books, we should then improve this to roughly 3 books a month! This is already a shorter term, and thus more helpful, goal.
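
A sketch of that kind of cohort-based baseline: books bought per customer, grouped by monthly signup cohort, so that early enthusiasm in new cohorts doesn't skew the figure. The records and field names are invented for illustration.

```python
# Sketch: books bought per customer, by signup cohort. Data is illustrative.
from collections import defaultdict
from datetime import date

# (customer_id, signup_date, purchase_date): one record per book bought,
# standing in for whatever your real purchase data looks like.
purchases = [
    ("c1", date(2011, 1, 10), date(2011, 4, 2)),
    ("c1", date(2011, 1, 10), date(2011, 7, 15)),
    ("c2", date(2011, 6, 3),  date(2011, 6, 20)),
]

def cohort_of(signup_date):
    return signup_date.strftime("%Y-%m")          # monthly signup cohorts

books_per_cohort = defaultdict(int)
customers_per_cohort = defaultdict(set)
for customer, signup, _purchase_date in purchases:
    books_per_cohort[cohort_of(signup)] += 1
    customers_per_cohort[cohort_of(signup)].add(customer)

for cohort in sorted(books_per_cohort):
    avg = books_per_cohort[cohort] / len(customers_per_cohort[cohort])
    print(f"{cohort}: {avg:.2f} books per customer")
```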

Pirates

To get to more useful figures, we need to turn to Dave McClure's Pirate Metrics. Pirate Metrics are all about the funnel of attracting customer interest, keeping them, and selling to them: Acquisition, Activation, Retention, Referral and Revenue, or AARRR. Just looking at customers through this lens gives a useful perspective. Our goal is phrased as getting people to read (on average) 10x more books. This could be approached as a matter of increasing Retention (more books per customer), but also as one of Acquisition/Activation (getting more customers). That last one only works if we don't take them away from other sources of books, of course. Can you think of a way to measure that? Certainly combined with increasing retention it would still give a net positive effect.

This would give us two main variables to pay attention to: Retention and Acquisition. We should, as a matter of course, be paying attention to at least the first three (AAR) of these metrics, and most companies will have a natural tendency to also track the last R… But tracking the results of specific actions on Retention and Acquisition should have our focus for now.

Pirate Metrics
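
As a sketch, the funnel can be reduced to a handful of counts and the conversion rate at each stage relative to the top of the funnel; the event counts below are invented for illustration.

```python
# Sketch: AARRR funnel counts and conversion rates. All numbers are invented.
funnel = {
    "Acquisition": 20000,   # visitors directed to the site
    "Activation":   4000,   # signed up or otherwise engaged
    "Retention":    1800,   # came back within 30 days
    "Referral":      300,   # invited someone else
    "Revenue":       450,   # paid at least once
}

top = funnel["Acquisition"]
for step, count in funnel.items():
    print(f"{step:11s} {count:6d}  ({count / top:.1%} of acquired users)")
```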

Splitting Metrics

But wait! In the Effect Map we had come up with two high-level feature ideas that would help us reach our goals: ‘Social Reading’ and ‘Better Book Recommendations’. Should both these ideas work with the same metrics?

Interesting question. On the one hand, I’d expect to be tracking all the pirate metrics in a well established application. But. The whole idea here is that you focus. So while we should keep a global eye on the whole (I’ll get back to that later), the experiments we’re conducting should focus on the change of a particular (set of) variables.

For our examples:

  • Social Reading – This is mostly about existing readers getting each other interested in other books. That would be Retention. Secondary would be getting new customers in by sharing outside the app, which would be Referral. It's important to note that distinction, as this has a direct influence on the priority of hypotheses to try.
  • Recommendations – This is also mostly about Retention. Existing readers should be getting more relevant recommendations, and thus buy more books. The second level would be Activation. People who visit our shop already, but haven't bought anything yet, should also be getting better recommendations and thus be prompted to buy.

This is consistent with the way we defined our goals, focusing on existing readers. That means it gives a decided focus to our development work. Phrasing our goals a little differently might increase our attention to new customer acquisition, but we're not doing that. Consciously diving down into our metrics makes these kinds of choices explicit, and that's A Good Thing.

Absolutely Relative

So how would our teams take these metrics towards specific hypotheses? First, we'd establish a baseline for retention. That could be:

  • When people buy a book, the average time between this purchase and the previous one is 92 days

Then we can start measuring this over time. A nice, always visible chart on a big screen in the development teams’ rooms would be a great idea.

This is a useful metric, as we can measure it day-by-day. It can also be calculated in time based cohorts, as well as feature based cohorts, so we can compare normal changes over time with changes caused directly by our new features.
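
A sketch of how that day-by-day number could be produced, both overall and per cohort, with invented purchase histories and cohort labels.

```python
# Sketch: average days between consecutive purchases, overall and per cohort.
# The purchase records and cohort labels are illustrative assumptions.
from collections import defaultdict
from datetime import date

# customer_id -> (cohort label, purchase dates sorted ascending)
history = {
    "c1": ("2011-Q1", [date(2011, 1, 5), date(2011, 4, 10), date(2011, 7, 1)]),
    "c2": ("2011-Q2", [date(2011, 5, 2), date(2011, 8, 30)]),
}

def intervals(dates):
    """Days between each purchase and the previous one."""
    return [(b - a).days for a, b in zip(dates, dates[1:])]

per_cohort = defaultdict(list)
for cohort, dates in history.values():
    per_cohort[cohort].extend(intervals(dates))

all_intervals = [d for ds in per_cohort.values() for d in ds]
print(f"overall: {sum(all_intervals) / len(all_intervals):.0f} days between purchases")
for cohort, ds in sorted(per_cohort.items()):
    print(f"{cohort}: {sum(ds) / len(ds):.0f} days")
```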

Hypothetically Speaking…

Ok, now we can get started. “Social Reading” is quite a broad concept. Our imaginative team of developers and product people can brainstorm-up quite a large cloud of ideas that fall within that scope, and they might have a collective gut-feeling on which ones of those would be most effective. They might have used another Effect Mapping exercise to generate ideas, and dot-voted on the most plausible ones. Or not.

The question they should be asking themselves is:

  • What would be the simplest way, costing the least effort, to show that this idea can indeed prove effective in decreasing the average time between purchases?

If that's not what they're asking, they might as well be asking their company whether it's feeling lucky.

So for any ideas they generated, they should be thinking about this question: how can it help disprove (or prove) that the "Social Reading" idea is plausible?

From the long list (or effect map) of ideas that they generated (sharing quotes, sharing notes, rating books, publishing 'reading lists', embedding shared things on blogs, embedding on Facebook or Twitter, etc.) they pick one item. In this case that item might be a very basic "If a user can easily share that he's reading a book on Twitter, this will trigger a shorter time between purchases".

Now there are some problems with this one. Most important of all is that we don't limit our audience, so we don't know if people receiving the tweet will be existing customers. That's OK, though. It simply means we're also testing for referral. Having an 'internal' audience might be more effective. But it would probably require a much larger up-front investment to create a communication channel between just our customers, and as such would be a less efficient way to test the hypothesis.

Another problem might be that we’re not helping the customer to share parts of the book, or anything, so the content of the tweet will probably be unspecific. We want more!

Stop!

Hold on. Take it easy. Hold your horses. We were looking for the simplest way to validate our hypothesis. How did we get into a discussion on all the cool features that should be in there? This feature, that feature, estimations (of both effort and expected value), discussions about opinions about hypotheticals…

If you want to know whether some tweets, to an audience that probably includes some existing customers, about a specific book, have some impact on sales, then what you should do is write a few tweets. About some books. With an account that's probably already there, from one of the people in the team. That probably already has other users of the service among its followers.

We all know that this is what should be done, that this is what the whole Lean Startup idea professes: Do The Simplest Thing. But even (or particularly?) in a bigger enterprise we need to put our money where our mouth is. And more importantly, avoid putting too much money where our mouth is and focus on getting that (in)validation of our most important hypothesis.

Giving the reins to the team

Taking those minimal steps is an important part of the overall process. It also seems to be one of the most difficult parts. Like developers needing time and practice to get used to working in the small steps of Test Driven Development. Like the Product Owners needing practice to split their requirements up into small enough chunks to be practical within a short sprint. Doing the absolute minimum work required to invalidate a hypothesis is probably the most difficult skill (or discipline?) to master from the Lean Startup mindset.

You can't make it work without it, though.

This is especially true in larger organisations, where, by simple virtue of the size of the organisation, the involvement of the people setting the overall direction in individual projects, products and teams is much less than in a small startup!

The collaborative construction of Effect Maps ties our organisation together with a common vision and goal. Our carefully crafted and continuously tuned set of actionable metrics gives teams clear direction, within their level of influence, on what to achieve.

To ensure that the organisational leadership doesn’t need to feel nervous about progress towards their goals, it is crucial that we fail as fast as is possible. And adjust. And try the next idea.

All Together Now

So organizational leadership can comfortably sleep at night in the knowledge that the full intellect and energy of their entire company is being put to work in the pursuit of truth, happiness and organizational goals while continuously self-correcting by the application of validated learning.

What more could they want?

There is one step still missing in this particular example, though. The metrics gathered for the specific experiments provide the very specific data needed for validated learning on the team level. The broader metrics that those are built on are still necessary for the bigger picture.

In our example that means that the targeted cohort testing done in each team is only one slice of the whole. The same (pirate) numbers are being gathered for a much broader cohort over longer periods of time to check whether the organisation as a whole is on the right track. Since that broader cohort would include the entire customer base, it will capture the combined results of all the teams.
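
As a sketch, each team's targeted cohort is one slice, and the organisation-wide figure is simply the same measure aggregated over everyone. The team names, cohort sizes and retention figures below are invented.

```python
# Sketch: team-level cohort results rolled up into one organisation-wide figure.
# Team names, cohort sizes and retention counts are invented for illustration.
team_cohorts = {
    "social-reading":  {"customers": 1200, "retained": 540},
    "recommendations": {"customers": 2500, "retained": 1400},
    "everyone-else":   {"customers": 8300, "retained": 3900},
}

for team, c in team_cohorts.items():
    print(f"{team:16s} retention: {c['retained'] / c['customers']:.1%}")

total = sum(c["customers"] for c in team_cohorts.values())
retained = sum(c["retained"] for c in team_cohorts.values())
print(f"{'organisation':16s} retention: {retained / total:.1%}")
```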

Combining Cohorts

Summary

In this article I’ve tried to illustrate, using a simple example, how longer term organizational goals can be made measurable in the short term, and can be used to provide the direction and purpose for teams to work independently and with full autonomy towards a shared organizational purpose.

Can you capture your organization’s vision in goals? What end-result metrics will you introduce? Can you refrain from cost metrics and focus on new value delivery? Go on. Do it.

On Effect Mapping and Pirate Metrics

During the Specification by Example training I talked about recently, Gojko Adzic introduced me to Effect Mapping. He's writing a more extensive booklet on the subject, of which he's released a beta here. I think this is an excellent tool for exploring goals, opportunities and possible features. It can be used as a tool to generate a backlog of features, as a way to explore possible business hypotheses, and perhaps even as a light-weight way to do strategic management of a company.

But let’s start with a short description (see Gojko’s site or beta booklet for the longer one) of what effect mapping is.

Effect Mapping basics

The basic structure of an effect map is that of a structured Mind-Map. A mind-map is a somewhat hierarchical way to note down ideas related to a central theme.

A Mind Map

The effect map is a mind-map with a specific structure. The different levels of the mind-map are based around the answers to four questions:

  • Why? (The Goal)
  • Who? (Who can have a role in reaching that goal; Or preventing it)
  • How? (In what way can they help, business activities)
  • What? (What are the concrete software features to make; Or non-software actions to take)

Additional levels can be specific User Stories, tasks, or actions, but that depends more on how you want to organise your backlog. The important thing is that this provides an uninterrupted flow from high-level goal or vision to concrete work.
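
To make that structure concrete, an effect map can be captured as nothing more than a nested structure, from 'Why?' down to 'What?'. The sketch below reuses the reading example from the previous post; its content is purely illustrative.

```python
# Sketch: an effect map as a nested structure. Content is illustrative only.
effect_map = {
    "why": "People read 10x more books in their lifetime",
    "who": {
        "Existing readers": {
            "how": {
                "Discover books through friends": {
                    "what": ["Share quotes on Twitter", "Publish reading lists"],
                },
                "Get better recommendations": {
                    "what": ["Recommendation engine MVP"],
                },
            }
        }
    },
}

print(effect_map["why"])
```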

In this way, effect maps can provide one of the important missing steps in the Agile software development world: how to determine what features provide value supporting specific business goals.

The goal used in the centre of the effect map is supposed to be a measurable goal: we need to know unambiguously when this goal has been reached! Gojko gives a nice overview of how this can be done, using a lighter-weight version of the approach Tom Gilb prescribes for making goals measurable. This involves the scale (the thing we're measuring), the meter (the way we'll measure it), the benchmark (current state), the constraint (minimum acceptable value, break-even point), and the target (what we want it to be).
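
For clarity, that checklist could be captured as a small data structure. The values below loosely mirror the reading goal from the previous post; the constraint in particular is an assumption.

```python
# Sketch: a Gilb-style measurable goal as a simple data structure.
# Values mirror the earlier reading example and are illustrative only.
from dataclasses import dataclass

@dataclass
class MeasurableGoal:
    scale: str         # the thing we're measuring
    meter: str         # the way we'll measure it
    benchmark: float   # current state
    constraint: float  # minimum acceptable value / break-even
    target: float      # what we want it to be

reading_goal = MeasurableGoal(
    scale="books read per customer per year",
    meter="purchases per active customer in the e-reader app, by cohort",
    benchmark=3.3,     # roughly 100 books over a 30-year 'lifetime'
    constraint=5.0,    # assumed break-even value
    target=33.0,       # the 10x ambition
)
print(reading_goal)
```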

Effect mapping is not just about what ends up on the diagram; it's also the process of generating the map. This is, of course, a collaborative approach: getting the involved people together, and creating and discussing the goal and the ways to get there. It's important that both business people with decision-making power and the subject matter experts and technical people who know possible solution directions are present. And yes, they do have time for this. Using diverge and merge, as discussed in my previous post, can be very useful again. There's more, about iterating, prioritising, etc., but it's not relevant to the rest of my post. So just go and read the booklet, already.

Our Effect Map

In his booklet, Gojko also links this process to the Lean Startup process of customer development. I think this is a great combination, but I would like to see some tweaks in the type of measurements we use for goals in that case.

Actionable Metrics for Pirates

In his book The Lean Startup, Eric Ries talks extensively about the importance of Actionable Metrics, as opposed to Vanity Metrics. An actionable metric is one that ties specific and repeatable actions to observed results. A Vanity Metric is usually a more generic metric (such as total number of hits on a website, or total number of customers) that is not tied to specific (let alone repeatable) actions. There can be many reasons why those change, and isolating the reason is one part of making metrics actionable.

Dave McClure uses the term 'Pirate Metrics' to talk about the most important metrics he sees for organisations:

  • A – Acquisition – User is directed to your site;
  • A – Activation – User signs up or is otherwise engaged;
  • R – Retention – User keeps coming back, i.e., is engaged over time;
  • R – Referral – User invites others;
  • R – Revenue – User pays or is otherwise monetized.

These are also the familiar ‘funnel’ metrics, and the above link to Ash Maurya’s site has much more background on them. When using these metrics, it is recommended to do Cohort testing, so that you can see the different results for different groups of users (again, see Ash Maurya’s site). Doing this such that the source of the (new) users is trackable allows you to identify the best ways of increasing the number of users, without deluding yourself into extrapolating from great growth figures if they’re the result of a single marketing action.

This is probably the only spot where Gojko's booklet could use some tuning. The Gilbian metrics he uses for his example are functional metrics (SMART, and all that), but not the type of actionable metrics that would fit into the lean startup mold. In this example, we're talking about reaching a certain number of players (one million), in a certain space of time (6 months), while keeping costs and retention rates within certain limits. Gojko does of course explain how to do this iteratively, taking a partial goal (fewer players than in the end goal) and checking at a defined milestone whether the goal was reached. And because we've only done one 'branch' of the effect map, we do have a specific action to link to the result.

If we were to re-cast this more along the lines of our pirate metrics, we could rephrase the goal as increasing the Acquisition and Activation rates. To be clear: this is a whole other goal! This goal is about a change that will ensure structural growth in the longer term. The goal of 1M extra users could (perhaps) be reached by increasing marketing spending (note that only operational costs are taken into account in the original example). In that case, the goal would be reached, but given a typical Retention rate of (for example) 6 months, there would be an equivalent exodus of users after that time. If the company had reacted based on total number of users, this could lead to incorrect actions such as hiring extra people, etc.

If the CEO comes down with the message 'We want 1 million users!', and it turns out he wants that in about 6 months' time, we can then say, "OK, that's about 5,500 new users per day, or a growth rate of 1.6%" (per day, again). Then we can start creating and testing hypotheses in much smaller steps than those six or three months. What's more, by using these metrics (and goals) it should become possible to use the same metrics further out into the effect map.
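
The arithmetic behind that restatement, as a small sketch. The starting user base is an assumption (the example doesn't state it), and the daily percentage depends heavily on that figure.

```python
# Sketch: restate "1 million users in about six months" as users per day and
# as a compound daily growth rate from an assumed starting base.
target_new_users = 1_000_000
days = 180                      # roughly six months

print(f"{target_new_users / days:.0f} new users per day")   # about 5,555

starting_base = 60_000          # assumption: the post doesn't give the current base
target_total = starting_base + target_new_users
daily_rate = (target_total / starting_base) ** (1 / days) - 1
print(f"{daily_rate:.1%} compound growth per day")          # about 1.6% with this base
```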

And if this is something at a larger scale, teams can take some of the higher level hypotheses / business activities and use their own domain knowledge to devise more experiments to find a way to reach those goals. Since churn is included in the figure, this also allows experiments based on increasing retention rates. And since we’re doing cohort testing on pirate metrics, we can know day-to-day and week-to-week whether what we’re doing has the expected results. This extends the set-based design paradigm used in effect mapping to a broader organisation.

Strategy and organisational structure

So by using effect mapping, we can make the relation between high-level goals, stakeholders and intermediate-level goals visible, using a collaborative process involving different parts and levels of the organisation. By using a consistent set of (value / end-result focused) metrics that can be applied in both the short and longer term, throughout the organisation, we can enable all levels of the organisation to apply their knowledge and skills to reach those goals. This allows for more self-organisation (and experimentation) at all levels…

I recently came across the system of Hoshin Kanri (via Bob Marshall). This has some remarkable similarities to what I've been talking about. It's also a method to ensure policy/goals can be distributed throughout the organisation, with involvement of multiple levels and stakeholders at each step. I've not studied it extensively, but to me it does feel like a strictly hierarchical system, and one that is mostly used in large companies with a fairly slow (yearly and half-yearly) cycle time. It is used by Toyota, and is part of Total Quality Control, which is supposed to be "designed to use the collective thinking power of all employees to make their organization the best in its field." It's nice to see that everything has already been thought of, and we're just repeating the progress of the past :-)

The difference with effect mapping is its light-weight focus, and the ease with which it can be used in less hierarchical organisations. In fact, I think such a system might well be a prerequisite to the type of organisations we're talking about in light of the Stoos network: "learning networks of individuals creating value". My recent proposal for a session at the Stoos Stampede was precisely about finding out how we can link an organisation's vision/mission to goals specific enough that teams can work towards them independently, but open enough that they would not suffocate and entrap those teams. I think this might be one solution to that problem.
