The three failures of Continuous Delivery

Everyone seems to want to get on the Continuous Delivery train. Rightfully so, I think. For most, though, it’s not an easy ride. From my work with clients and conversations with other coaches, there are a few common barriers to adoption.

In the end, the goal should be to be able to react faster to the market. And, to be honest, to finally be in actual control. In business terms, it’s about cycle times: short cycle times are what allow you to not just react quickly to market circumstances, but to actively probe markets and test product ideas.

So, as I mentioned, there are a few common problems companies run into. First, just the basic technical steps to create a fully automated pipeline. Then, getting the tests sorted to a level that gives enough confidence to deploy to production whenever they’ve run. Only when those technical matters have been sorted do we get to the more interesting issues of allowing the business to make use of the possibilities offered by the newfound agility. Those have their own challenges.

Let’s have a look at the ways these particular subjects give teams trouble, in the hope that, forewarned, some will be able to avoid them. I’ll go into more detail on how to avoid them in subsequent posts.

Get a pipeline

Now, if you’ve paid any attention to the literature, you know that at the core, CD is all about important things like process and a culture of quality. Which is all true, but that probably won’t help you very much. Most development organisations have spent years wrapping themselves in workarounds and buffers all painstakingly created to prevent detection of their real problems. So taking a relatively small, technical, step in setting up a delivery pipeline at least seems somewhat feasible and will by its nature start showing where some of the real problems lie.

A Delivery Pipeline

From what I’ve seen, just trying to set up that pipeline is trouble enough. That’s why I’ve put it as the first barrier to adoption of CD. It may seem easy, but there turn out to be many basic technical challenges. Most teams go through those same pains, and it’s not really surprising. There’s quite a bit of (often new) knowledge and skills involved. And teams usually have to deal with all kinds of legacy code and infrastructure, which doesn’t make it any easier.

Mostly, what companies find here is that they are missing skills. And there are a lot of skills involved! A real DevOps approach should include operations knowledge in the team, but even then, most of the skills needed to create a modern, fully automated infrastructure take most organisations a long time to develop.
It’s not that these things are beyond those teams, it’s just that they’ve not had to deal with them before. Sure, it is easy enough to package your application in a Docker container and run it locally, but people are discovering it is quite a different thing to build it out further than that.

Testing

Testing is the Achilles’ heel of many development teams. Most agile teams work hard to get and keep their code under test. Many fail. The advantage that Continuous Delivery has is that it sets explicit expectations on quality. There’s really no room to skimp on testing if every push you do should end up on production.
As was the case for Continuous Integration, testing is what makes a Delivery Pipeline useful. It’s great if you have fully automated deployment, but if you have no way to determine whether the code you’re building can be trusted, you’ll still not be in production any sooner.
There are different ways teams fail with testing. Insufficient unit testing. Too limited protocol and service testing. A reliance on slow and brittle end-to-end testing. Skipping manual / exploratory testing, which may no longer be a gateway before going into production but is still very much necessary.

Business

Organisations that manage to get past the first two hurdles have at their disposal a tool that can bring them unimagined business advantages. But even having come this far, existing silos, processes and political positioning prevent organisations from profiting from their newly found technical capabilities.

Symptoms of this can be found in the ignoring, or even complete lack, of market data in deciding on new products and functionality. In continuing a practice of long-term planning, without built-in checks to see if the intended goals are being achieved. In basing priorities on political influence instead of business goals. And even in a reluctance to release new features to users once they’re available behind a feature toggle in production.

These issues can be the most difficult to address and need to be picked up at the highest management levels. They are attacked with changes in goal setting, reward systems, and organisational structure.

Interlocking pieces

As with any process, these different elements cannot exist for long without the others to support them. Testing withers if it cannot be run quickly and frequently enough. A delivery pipeline has little value if you have no way to know if you can trust the code that it’s building. And a highly evolved technical team that is not clearly and directly involved with business goals and customers will easily find more fulfilling work elsewhere.

That’s why my advice is to start in this order, picking up the next challenge as soon as there’s clear progress on the previous. You start building technical skills and then use that base as a flywheel to get a change in the rest of the company going.

Don’t Refactor. Rebuild. Kinda.

I recently had the chance to speak at the wonderful Lean Agile Scotland conference. The conference had a very wide range of subjects being discussed at an amazingly high level: complexity theory, lean thinking, agile methods, and even technical practices!

I followed a great presentation by Steve Smith on how the popularity of feature branching strategies makes Continuous Integration difficult to impossible. I couldn’t have asked for a better lead-in for my own talk.

Which is about giving up and starting over. Kinda.

Learning environments

Why? Because, when you really get down to it, refactoring an old piece of junk, sorry, legacy code, is bloody difficult!

Sure, if you give me a few experienced XP guys, or ‘software craftsmen’, and let us at it, we’ll get it done. But I don’t usually have that luxury. And most organisations don’t.

When you have a team that is new to agile development practices like TDD, refactoring, and clean code, learning that stuff in the context of a big ball of mud is really hard.

You see, when people start to learn about something like TDD, they do some exercises, read a book, maybe even attend a training. They’ll see this kind of code:

Example code from Kent Beck’s book: “Test-Driven Development: By Example”

Then they get back to work, and are on their own again, and they’re confronted with something like this:

Code Sample from my post “Code Cleaning: A refactoring example in 50 easy steps”

And then, when they say that TDD doesn’t work, or that agile won’t work in their ‘real world’ situation, we say they didn’t try hard enough. In these circumstances it is very hard to succeed.

So how can we deal with situations like this? As I mentioned above, an influx of experienced developers that know how to get a legacy system under control is wonderful, but not very likely. Developers that haven’t done that sort of thing before really will need time to gain the necessary skills, and that needs to be done in a more controlled, or controllable, environment. Like a new codebase, started from scratch.

Easy now, I understand your reluctance! Throwing away everything you’ve built and starting over is pretty much the reverse of the advice we normally give.

Let me explain using an example.


Extending the Goal in Scrum

In his post “The Goal in Scrum”, Ron Jeffries makes the case for having a proper, higher-level-than-stories Sprint Goal. As he says:

This is better, because it allows the wisdom and knowledge of the team to be fully exercised, and because it keeps focus on “what” is needed more than on just how it is to be done.

The point is well made, and true. Many Scrum teams would be much better off adopting this practice. If you haven’t read the article yet, please do so now. It’s short and to the point; I’ll wait right here.

I think there are further steps beyond the point that Ron describes, steps that a good Agile organisation should aspire to, and that help get closer to the XP idea of an on-site customer.

For an example, let’s take the same team that Ron is talking about, working on some web-shop-like domain. I’ll take a point in time a little further out than Ron did. They already learned his lesson, after all. And having done that, they have a nice web shop running, with a working checkout flow, and even a wish-list.

The shop has a reasonable number of visitors, and sells enough to keep everyone employed. But though new functionality is built regularly, growth in terms of revenue is very uneven and not clearly linked to the efforts of the development team. This worries the CEO. He even considers whether changes in the team (bigger/smaller?) are necessary. The PO advises a more considered approach. He goes to the team and tells them about the issue:

“It seems our work sometimes helps us make money, but other times has no effect at all!”

The team has a nice, long, retro discussion about this. They remind the PO that they have sometimes raised questions about the practical use of some of the things they were building. He reminds them that those same things sometimes turned out to work well. And sometimes not. They realise they are missing an important feedback cycle.

Step one: Sprint Goal as a Business Test

The team is a very competent XP team, and knows that the best way to develop is to pull your assumptions forward. Test first. And change direction if the results tell you it’s not working. They agree with the PO to take a similar approach to the Sprint Goal: Describe the Goal as a test. Not a Unit Test. Not an Acceptance Test. Maybe a Business Test? One of the members talks about hypotheses but is voted down because the international team knows they’ll fail pronouncing that.

So for the next sprint, the PO and the rest of the team discuss what the Goal should be. The PO tells them that it seems many people put items in their shopping cart, and even go to checkout, but then stop and never go to payment. They agree that the goal should be to find out how to improve the conversion in that part of their sales funnel.
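
If you take “Goal as a test” literally, you can even write it down as one. A toy sketch of what such a Business Test could look like, with made-up numbers and hypothetical function names:

    # A 'business test' for the sprint goal: checkout-to-payment conversion
    # should improve measurably. The numbers would come from analytics.
    def conversion(payments: int, checkouts: int) -> float:
        return payments / checkouts

    def test_sprint_goal_checkout_conversion_improves():
        baseline = conversion(payments=620, checkouts=2050)  # before the sprint
        current = conversion(payments=700, checkouts=2080)   # after the changes
        assert current - baseline > 0.01, "goal not met: less than 1 point gained"

    test_sprint_goal_checkout_conversion_improves()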

Step two: Information Radiator for Business Goal

The first story they agree on is to create a dashboard for the team to see this particular funnel. Easily done with their existing analytics software, but the team hasn’t been looking at that until now.

Step three: Generate ideas that could influence Business Goal

Overly simplified dashboard

Then they think of all the reasons why they think someone would stop at that point. Could it be that the total amount frightens them? Should that be in the short view of the shopping cart on the main page? Is it the account creation that stops people going forward? Or the selection of the payment method? One of the team thinks the absence of PayPal as an option could be the problem. They decide they don’t know. And decide to find out.

Step four: Verify ideas

The other stories they create are small changes. And as part of those stories they encode decisions. Decisions that will result in more stories. Or will result in quickly deleting the just-built functionality.

One example is the amount: they make a change in the shopping cart view on the main page that shows the approximate amount. The amount will be calculated client-side, not taking into account tax and such, which would require much more work. And they build it so that about 20% of their users get this new version while the rest get the old one. And compare the results. They agree up-front that only if this improves conversion by more than 1% will they build a more capable version of the feature in the next sprint.
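
A minimal sketch of how such a percentage rollout could work, assuming a stable user id; the function name and experiment name are hypothetical:

    import hashlib

    def in_experiment(user_id: str, experiment: str, percentage: int) -> bool:
        # Hash user id + experiment name into a stable bucket (0-99),
        # so a user always sees the same variant across visits.
        digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
        return int(digest, 16) % 100 < percentage

    # Roughly 20% of users get the new approximate-total cart view.
    for user in ("alice", "bob", "carol", "dave", "eve"):
        variant = "approx-total" if in_experiment(user, "cart-total-preview", 20) else "classic"
        print(user, "->", variant)

Hashing instead of random assignment keeps the experience stable for returning visitors, and makes the experiment reproducible.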

The team member that likes PayPal gets a go too: let’s just put a ‘Pay with PayPal’ button on there, and see how often it’s pressed. Again shown to only a small subset of users. And again, only if it results in an increase of 1% or more will they build the PayPal integration.

Step five: Build feature

Based on the results of their experiments, which were very easy and quick to build, they create further stories for the backlog. Depending on how much time they had to spend, some of those stories could even be added to the current sprint. They’re proven to support the Goal. But if that doesn’t happen, it could also be fine to plan them in the next sprint, or later. At least the business value of those stories is very well defined.

Report

The PO is excited, but also a little worried. They’ll be building partial solutions. He is used to reporting on completed features to management. He works with the Scrum Master and one of the developers on the type of data they’ll be basing decisions on, and on creating a good report on it. Then he goes to discuss those reports with management. Management likes the figures, but would like to add a few forecasts on how these figures influence the revenue figures. That is quite easy to do, using the conversion figures along with the average order size, and pretty soon they have a report that everyone is happy with.
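
A back-of-the-envelope version of such a forecast, with entirely made-up numbers:

    # Hypothetical forecast: what is a conversion improvement worth per month?
    visitors_per_month = 200_000
    checkout_starts = 0.10 * visitors_per_month  # 10% of visitors reach checkout
    baseline_conversion = 0.30                   # 30% of those complete payment
    average_order = 45.00                        # average order size in euros

    def monthly_revenue(conversion: float) -> float:
        return checkout_starts * conversion * average_order

    uplift = monthly_revenue(0.31) - monthly_revenue(baseline_conversion)
    print(f"+1 point of conversion is worth about {uplift:.0f} euros per month")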

The PO now reports on which parts of the sales funnel they’ve worked on, what ideas they tested, which worked and which didn’t, and how they are influencing revenue. Because they employ small experiments, they don’t spend much on the ideas that don’t work. And the report makes very clear that the increases in revenue are significantly more than the stable costs of the team, even if the difference isn’t constant.

TL;DR

Defining your Sprint Goal in measurable business terms (such as Pirate Metrics for a web shop) gives more transparency and closer integration between development teams and their stakeholders.

Agile 2015 Talk: Don’t Refactor. Rebuild. Kinda.

Monday, August 3, I had the opportunity to give a talk at the Agile Alliance’s Agile 2015 conference in Washington, D.C. My first conference in the US, and it was absolutely fantastic to be able to meet so many people I’d only interacted with on mailing lists and Twitter. It was also a huge conference, with about 17 concurrent tracks and 2200 participants. I’m sure I’ve missed meeting as many people as I did manage to find in those masses.

Even with that many tracks, though, there were still plenty of people that showed up for my talk on Monday afternoon. So many that we actually had to turn people away. This is great: I’ve never been a fire hazard before. I was a bit worried beforehand. With my talk dealing with issues of refactoring, rebuilding and legacy code, it was a little unnerving to be scheduled opposite Michael Feathers…

My talk is about how we have so much difficulty teaching well-known and proven techniques from XP, such as TDD and ATDD, and some of the evolved ones, like Continuous Delivery. And about how the reason for that could be that these things are so much more difficult to do when working with legacy systems. Especially if you’re still learning the techniques! At the very least, it’s much more scary.

I then discuss, grounded in the example of a project at VNU/Pergroep Online Services, how using an architectural approach such as the Strangler Pattern, combined with process rules from XP and Continuous Delivery, can allow even a team new to them to surprisingly quickly adopt these techniques and grow in proficiency along with their new code.

Rebuilding. Kinda.

Slides

The slides of my talk are available on SlideShare, and are included below.

 

I’ll devote a separate post in a few weeks to giving the full story I discuss here. In the meantime…

If you missed the talk, perhaps because you happened to be on a different continent, I’ll be reprising it next Wednesday at the ASAS Nights event, in Arnhem. I’d love to see you there!

I'm speaking at ASAS Nights, 2 September

Outside in, whatever’s at the core

I haven’t written anything on here for quite a while. I haven’t been sitting still, though. I’ve gone independent (yes, I’m for hire!) and been working with a few clients, generally having a lot of fun.

I was also lucky enough to be able to function as Chet’s assistant (he doesn’t need one, which was part of the luck :-) while he was giving the CSD course at Qualogy, recently. Always a joy to observe, and some valuable reminders of some basics of TDD!

One of those basics is the switch between design and implementation that you regularly make when test-driving your code. When you write the first test for some functionality, you are writing a test against a non-existing piece of code. You might create an instance of an as-yet non-existing class (Arranging the context of the test), call a non-existent method on that class (Acting on that context), and then call another non-existing method to verify results (Asserting). Then, to get the test to compile (but still fail), you create those missing elements. All that time, you’re not worrying about implementation, you’re only worrying about design.
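
A minimal sketch of what that first test might look like in Python, where Invoice is a hypothetical class that does not exist yet:

    import unittest

    class InvoiceTest(unittest.TestCase):
        def test_total_includes_line_items(self):
            # Arrange: instantiate the as-yet non-existing class.
            invoice = Invoice(customer="ACME")
            # Act: call a method that doesn't exist yet either.
            invoice.add_line(description="widget", amount=100)
            # Assert: verify the result through another non-existing method.
            self.assertEqual(invoice.total(), 100)

    if __name__ == "__main__":
        unittest.main()  # fails with NameError until Invoice is created

Everything in this test is a design decision; none of it is implementation.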

Later, when you’re adding a second test, you’ll be using those same elements, but changing the implementation of the class you’ve created. Only when a test needs some new concepts will the design again evolve, but those tests will trigger an empty or trivial implementation for any new elements.

So: separation of design and implementation is a good thing. And not just when writing micro-tests to drive low-level design for new, fresh classes. What if you’re dealing with a large, legacy, untested code base? You can use a similar approach to discover your (future…) design.


DevOps and Continuous Delivery

If you want to go fast and have high quality, communication has to be instant, and you need to automate everything. Structure the organisation to make this possible, learn to use the tools to do the automation.

There’s a lot going on about DevOps and Continuous Delivery. Great buzzwords, and actually great concepts, but not altogether new. For many organisations, though, they’re an introduction to agile concepts, and sometimes that means some of the background people have when arriving at these things in the natural way, through Agile process improvement, is missing. So what are we talking about?

DevOps: The combination of software developers and infrastructure engineers in the same team with shared responsibility for the delivered software

Continuous Delivery: The practice of being able to deliver software to (production) environments in a completely automated way. With VM technology this includes the roll-out of the environments.

Both of these are simply logical extensions of Agile and Lean software development practices. DevOps is one particular instance of the Agile multi-functional team. Continuous Delivery is the result of Agile’s practice of automating any repeating process, and in particular enabled by automated tests and continuous integration. And both of those underlying practices are the result of optimizing your process to take any delays out of it, a common Lean practice.

In Practice

DevOps is an organisational construct. The responsibility for deployment is integrated in the multi-functional agile team in the same way that requirement analysis, testing and coding were already part of that. This means an extension to the necessary skills in the teams. System Administrator skills, but also a fairly new set of skills for controlling the infrastructure as if it were code with versioning, testing, and continuous integration.

Continuous Delivery is a term for the whole of the process that a DevOps team performs. A Continuous Delivery (CD) process consists of developing software, automating testing, automating deployment, automating infrastructure deployment, and linking those elements so that a pipeline is created that automatically moves developed software through the normal DTAP stages.
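
A toy sketch of such a pipeline as a sequence of automated stages, promoting the artefact through the DTAP environments. All names below are hypothetical stand-ins for real build servers, test runners and deployment scripts:

    # Toy stand-ins for real stage implementations; each returns True on success.
    def build_and_unit_test(artefact: str) -> bool:
        return True

    def deploy_and_test(environment: str, artefact: str) -> bool:
        return True

    def run_pipeline(artefact: str) -> bool:
        # Promote the artefact through the stages; stop at the first failure.
        stages = [
            ("build", lambda: build_and_unit_test(artefact)),
            ("test", lambda: deploy_and_test("test", artefact)),
            ("acceptance", lambda: deploy_and_test("acceptance", artefact)),
            ("production", lambda: deploy_and_test("production", artefact)),
        ]
        for name, run_stage in stages:
            if not run_stage():
                print(f"pipeline stopped at stage: {name}")
                return False
            print(f"stage passed: {name}")
        return True

    run_pipeline("webshop-1.42.0")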

So both of these concepts have practices and tools attached, which we’ll briefly discuss.

Practices and Tools

DevOps

Let’s start with DevOps. There are many standard practices aimed at integrating skills and improving communication in a team. Agile development teams have been doing this for a while now, using:

  • Co-located team
  • Whole team (all necessary skills are available in the team)
  • Pairing
  • Working in short iterations
  • Shared (code, but also product) ownership
  • (Acceptance) Test Driven Development

DevOps teams need to do the same, including the operations skill set into the team.

One question that often comes up is: “Does the entire team need to suddenly have this skill?”. The answer to that is, of course, “No”. But in the same way that Agile teams have made testing a whole-team effort, so operations becomes a whole-team effort. The people in the team with deep skills in this area will work together with some of the other team members in the execution of tasks. Those others will learn something about this work, and become able to handle at least the simpler items independently. The ops person can learn from the developers how to better structure his scripts, enabling re-use. Or, from the testers, how to better test and monitor the product.

An important thing to notice is that these tools we use to work well together as a team are mutually reinforcing: each strengthens the others’ effectiveness. That means it’s much harder to learn to be effective as a team if you only adopt one or two of them.

Continuous Delivery

Continuous Delivery is all about decreasing the feedback cycle of software development. And feedback comes from different places. Mostly testing and user feedback. Testing happens at different levels (unit, service, integration, acceptance, …) and on different environments (dev, test, acceptance, production). The main focus for CD is to get the feedback for each of those to come as fast as possible.

To do that, we need to have our tests run at every code-change, on every environment, as reliably and quickly as possible. And to do that, we need to be able to completely control deployment of and to those environments, automatically, and for the full software stack.

And to be able to do that, there are a number of tools available. Some have been around for a long time, while others are relatively new. Especially the tools that are able to control full (virtualised) environments are still relatively fresh. Some of the testing tooling is not exactly new, but seems still fairly unknown in the industry.

What do we use that for?

You’re already familiar with Continuous Integration, so you know about checking in code to version control, about unit tests, about branching strategies (basically: try not to), about CI servers.

If you have a well-constructed CI solution, it will include building the code, running unit tests, creating a deployment package, and deploying to a test environment. The deployment package will be usable on different environments, with configuration provided separately. You might use tools such as the Cargo plugin for deployment to test (and further?), and keep a versioned history of all your deployment artefacts in a repository.
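
The “one package, configuration provided separately” idea can be sketched as follows; the artefact name, keys and URLs are made up for illustration:

    # One immutable artefact, promoted unchanged; only configuration varies.
    ARTEFACT = "webshop-1.42.0.tar.gz"

    CONFIG = {
        "test": {"db_url": "postgres://test-db/shop", "payments": "sandbox"},
        "acceptance": {"db_url": "postgres://acc-db/shop", "payments": "sandbox"},
        "production": {"db_url": "postgres://prod-db/shop", "payments": "live"},
    }

    def deploy(environment: str) -> None:
        # A real deployment would unpack the artefact and write the config.
        config = CONFIG[environment]
        print(f"deploying {ARTEFACT} to {environment} with {config}")

    deploy("test")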

So what is added to that when we talk about Continuous Delivery? First of all, there’s the process of automated promotion of code to subsequent environments: the deployment pipeline.


This involves deciding which tests to run at what stage (based on dependency on environment, and runtime) to optimise a short feedback loop with as detailed a detection of errors as possible. It also requires decisions on which parts of the pipeline to run fully automatically, and where to still assume human intervention is necessary.

Another thing that we are newly interested in for the DevOps/CD situation is infrastructure as code. This has been enabled by the emergence of virtualisation, and has become manageable with tools such as Puppet and Chef. These tools turn the definition of an environment into code, including hardware specs, OS, installed software, networking, and deployment of our own artefacts. That means that a test environment can be a completely controlled system, whether it is run on a developer’s laptop or on a hosted server environment. And that kind of control removes many common error situations from the software delivery equation.
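
Puppet and Chef each have their own DSL, but the underlying idea can be sketched in plain Python: the environment is versioned, testable data that a tool applies to a machine. Everything below is a hypothetical toy, not either tool’s actual syntax:

    # The environment as a data structure, checked into version control.
    TEST_ENV = {
        "base_image": "ubuntu-22.04",
        "packages": ["openjdk-17", "nginx"],
        "services": {"nginx": "running"},
        "artefacts": [{"name": "webshop", "version": "1.42.0", "port": 8080}],
    }

    def apply(environment: dict) -> None:
        # Pretend-apply; a real tool would converge the host to this state.
        print(f"provisioning {environment['base_image']}")
        for package in environment["packages"]:
            print(f"  install {package}")
        for service, state in environment["services"].items():
            print(f"  ensure {service} is {state}")
        for artefact in environment["artefacts"]:
            print(f"  deploy {artefact['name']} {artefact['version']} on port {artefact['port']}")

    apply(TEST_ENV)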

Scaling Agile with Set-Based Design

I wrote a while back about set-based design, and just recently about a way to frame scaling Agile as a mostly technical consideration. In this post I want to continue with those themes, combining them in a model for scaled agile for production and research.

Scale

In the previous post, we found that we can view scale as a function of the possibilities for functional decomposition, facilitated by a strong focus on communication through code (customer tests, developer tests, simple design, etc.).

This will result in a situation where we have different teams working on different feature-areas of a product. In many cases there will be multiple teams working within one feature area, which can again be facilitated through application of well-known design principles and shared code ownership.

None of this is very new, and it can be put squarely in the corner of the Feature Team way of working. It’s distinguished mainly by a strong focus on communication at the technical level, and, using all the tools we have available for that, this can scale quite well.


Innovation

The whole thing starts getting interesting when we combine this sort of set-up with the ideas from set-based thinking to allow multiple teams to provide separate implementations of a given feature that we’d like to have. One could be working on a minimum viable version of the feature, ensuring we have a version that we can get in production as quickly as possible. Another team could be working on another version, that provides many more advantages but also has more risk due to unknown technologies, necessary outside contact, etc.


This parallel view on distributing risk and innovation has many advantages over a more serial approach. It allows for an optimal use of a large development organization, with high priority items not just picked up first, but with multiple paths being worked on simultaneously to limit risk and optimize value delivered.

Again, though, this is only possible if the technical design of the system allows it. To effectively work like this we need loosely coupled systems, and agreed-upon APIs. We need feature toggles. We need easy, automated deployment to test the different options separately.
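
A feature toggle can be as simple as a guarded code path driven by configuration; a minimal sketch with hypothetical toggle names, where each branch stands in for one team’s implementation:

    # Toggles come from configuration, so unfinished or experimental code
    # paths can be merged and deployed, but kept dark until wanted.
    TOGGLES = {
        "recommendations-minimal": True,        # team A's minimum viable version
        "recommendations-experimental": False,  # team B's riskier variant
    }

    def is_enabled(toggle: str) -> bool:
        return TOGGLES.get(toggle, False)

    def recommendations(user: str) -> list:
        if is_enabled("recommendations-experimental"):
            return ["experimental result"]  # stand-in for team B's implementation
        return ["bestseller"]               # stand-in for team A's implementation

    print(recommendations("alice"))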

Pushing Innovation Down

But even with all this, we still have an obvious bottleneck in communication between the business and the development teams. We are also limiting the potential contributors to innovation through the top-down structure of a product owner filling a product backlog.

Even most agile projects have a fairly linear view of features and priorities. Working from a story map is a good first step in getting away from that. But to really start reaping the benefits of your organisation’s capacity for innovation, one has to take a step back and let go of some control.

The way to do that is by making very clear what the goals for the organisation are, and, for larger organisations, what the goals for the product/project are. Make those goals measurable, and find a way to measure frequently. Then we can get to the situation below, where teams define their own features, work on them, and verify themselves whether those features indeed support the stated goals. (See also ‘Actionable Metrics at Organisational Scale’ and ‘On Effect Mapping and Pirate Metrics’.)

This requires, on top of all the technical supporting practices already mentioned, that the knowledge of the business and the contact with the user/customer is embedded within the team. For larger audiences, validation of the hypothesis (that this particular, minimum viable, feature indeed serves the stated goals) will need to be A/B tested. That requires a yet more advanced infrastructural setup.

All this ties together into the type of network organisations that we’ve discussed before. And this requires a lot of technical and business discipline. No one ever said it was going to be easy.

Scaling Agile?

There’s a lot of discussion in the Agile community on the matter of scaling agile. Should we all adopt Dean Leffingwell’s Scaled Agile Framework? Do the Spotify tribe/squad thing? Or just roll our own? Or is Ron Jeffries’ intuition right, and do the terms scaling and agile simply not mix?

Ron’s stance seems to be that many of Agile’s principles simply don’t apply at scale. Or apply in the same way, so why act differently at scale? That might be true, but might also be a little too abstract to be of much use to most people running into questions when they start working with more than one team on a codebase.

Time and relative dimension in space

When Ron and Chet came around to our office last week, Chet mentioned that he was playing around with the analogy of coordination in time (as opposed to cross-team) when thinking about scaling. This immediately brought things into a new perspective for me, and I thought I’d share that here.

If we have a single team that will be working on a product/project for five years, how are they going to ensure that the team working on it now communicates what is important to the team that is working on it three, four or five years from now?

Now that is a question we can easily understand. We know what it takes to write software that is maintainable, changeable, self-documenting. We know how to write requirements that become executable, living documentation. We know how to write tests that run through continuous integration. We even know how to write deployment manifests that control the whole production environment to give us continuous deployment.

So why would this be any different when instead of one team working five years on the same product, we have five teams working for one year?

This break in this post is intentionally left blank to allow you to think that over.

Simple Design


Scrum really is bigger on the inside!

This way of looking at the problem simplifies the matter considerably, doesn’t it? I have found repeatedly that there are more technical problems in scaling (and agile adoption in general) than organisational ones. Of course, very often the technical problems are caused by the organisational ones, but putting them central to the question of scaling might actually help re-frame the discussions at a management level in a very positive way.

But getting back to the question: what would be the difference?

Let’s imagine a well constructed Agile project. We have an inception where the purpose of the product is clearly communicated by the customer/PO. We sketch a rough idea of architecture and features together. We make sure we understand enough of the most important features to split off a minimum viable version of it, perhaps using a story map. We start the first sprint with a walking skeleton of the product. We build up the product by starting with the minimal versions of a couple of features. We continue working on the different features later, extending them to more luxurious versions based on customer preference.

As long as the product is still fairly well contained, this would be exactly the same when we are with a few teams. We’d have come to a general agreement on design early on, and would talk when a larger change comes up. Continuous integration will take care of much of the lower level coordination, with our customer tests and unit testing providing context.

One area does become more explicit: dependencies. Where the single team would automatically handle dependencies in time by influencing prioritization, the multiple teams would need to have a commonly agreed (and preferably commonly built) interface in existence before they could be working on some features in parallel. This isn’t really different from the single-team version above, where the walking skeleton/minimal viable feature version would also happen before further work. But it would be felt as something needing some special attention, and cooperation between teams.
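
That commonly agreed interface can literally be checked in before the parallel work starts. A sketch under assumed names, with a stub implementation that keeps the consuming team moving:

    from abc import ABC, abstractmethod

    # The commonly agreed (and commonly built) interface, in place before
    # the teams start working on their features in parallel.
    class PaymentProvider(ABC):
        @abstractmethod
        def charge(self, order_id: str, amount_cents: int) -> bool:
            """Charge the order; returns True when payment succeeded."""

    # One team builds the checkout against the interface...
    def checkout(order_id: str, amount_cents: int, provider: PaymentProvider) -> str:
        return "paid" if provider.charge(order_id, amount_cents) else "failed"

    # ...while another team works on real implementations; a stub fills the gap.
    class StubProvider(PaymentProvider):
        def charge(self, order_id: str, amount_cents: int) -> bool:
            return True

    print(checkout("order-1", 2500, StubProvider()))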

If we put these technical considerations central, that resolves a number of issues in scaling. It could also allow for much better risk/profit trade-offs, by integrating this approach with a set-based approach to projects. But I’ll leave that for a future post.

Spikes, they’re sharp

One of the concepts that came from XP is the Spike. Especially in teams new to agile, there can be confusion on what a Spike is, and how to deal with them.

The best definition of a Spike I’ve found is this one:

“Spike” is an Extreme Programming term meaning “experiment”. We use the word because we think of a spike as a quick, almost brute-force experiment aimed at learning just one thing. Think of driving a big nail through a board.
— Extreme Programming Adventures in C# – Ron Jeffries

Let’s break this down.

Experiment

A Spike is an “… experiment aimed at learning just one thing”. That means that a Spike always starts with a question to be answered. A hypothesis. At the end of the Spike there is an answer to that question. The hypothesis has been proved, or disproved. And that proof takes the form of a piece of software. A document does not prove anything.

Quick

A Spike is quick. A Spike is usually time-boxed to as short a period of time as we think is feasible to answer our question. That period of time should normally not exceed a day or so.

Brute-force

A Spike will not generate any production code. A spike can be a design session in code, just enough to prove that the design will work. It can be a technology investigation, just enough to see if that library will do the trick. All code from the Spike will be thrown away. It’s about learning, not production.

Rare

I know, that wasn’t in Ron’s definition. Just an additional remark. Spikes are rare. They occur only very infrequently. Once every couple of sprints sounds about right to me. That might decline with the age of the project, as Spikes are particularly useful to remove uncertainty on technical design early in a project.


In practice

When an occasion comes up to introduce a Spike into a sprint, you do the following:

  1. Reconsider: Do you really need to investigate, or are you just scared to commit to an actual, value delivering, user story because you don’t know the system very well?
  2. Reconsider again: Do you really need to investigate, or are you just scared to work together with the rest of the team without someone handing you a detailed specification?
  3. Define the question: ‘grooming’ a Spike means that you clearly and unambiguously define the hypothesis that the spike needs to answer. This is like the Spike’s Acceptance Criteria. That means clearly defined, and preferably having a boolean (yes / no) answer. Agree what further actions will result from a ‘yes’. And from a ‘no’. (See the sketch after this list.)
  4. Define the time-box: To answer this question, what size investment are we prepared to make? Keep it short. A day is long. Sometimes long is needed. Sometimes.
  5. Prioritize: The Spike is on your backlog. You prioritize your backlog.
  6. Execute: Code to find your answer. Stop as soon as you have it. Stop if the time-box expires.
  7. Deal with the result: That’s the action agreed upon earlier. A timed-out Spike also is an answer to that question (‘too expensive to find out’), and also has an agreed upon action.
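
To make that concrete, here is a minimal sketch of what a spike’s throwaway code might look like: the question is invented, the answer is boolean, and the file gets deleted once the question is answered.

    # Spike: does Python's json module round-trip our order records,
    # preserving key order? Hypothesis answered yes/no, then delete this.
    import json

    order = {"id": "order-1", "customer": "ACME", "lines": [{"sku": "w-1", "qty": 3}]}
    round_tripped = json.loads(json.dumps(order))

    answer = round_tripped == order and list(round_tripped) == list(order)
    print("hypothesis confirmed" if answer else "hypothesis rejected")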

Have you done a Spike recently? How did that turn out? Did you get your answer? Or a time-out?

Turning it up to 11

It’s odd how I’ve been unable to be very consistent in my subject-matter for this blog. I tend to hop around, going from very technical subjects to very organisational ones. Some might see this as lacking focus. Maybe that’s true. I’ve never been able to separate execution from organisation and vision very well. To me they seem intrinsically linked. It’s comforting to me that even such luminaries as Kent Beck also seem to see things in this light.

If I look at my bifurcated (tri-? n-?) interests, I see a striking resemblance in the states of technological, managerial and commercial maturity in the world. In all of these areas, the state of affairs is abysmal. In all three areas, we seem to have recognised that this is the case. In all three, though, most people performing those roles are so used to the current state that only rarely do they see that a different approach could bring improvement. Could turn their work ‘up to 11’. There are some differences, though.

Technology

On the technology side, we’ve pretty much identified what works, and what doesn’t. Basically, XP got things right. Others before that also hit the right spot, but we know a mature team sticking to XP practices will not mess things up beyond salvation. If we compare that approach to what one finds in the run-of-the-mill waterfall situation, the differences are so great that there is truly no comparison. There are other questions still at least partially open, but most of those are concerned with scale, organisation, and finding out what should be built. And thus belong in the other categories. The main challenge is one of education. And, granted, a bit of proselytising.

Commercial

More commercial questions are less clear-cut, at least for me. In my work I’ve very rarely seen commercial, product development and marketing decisions taken with anything resembling a structured approach or any kind of rigour. A business case, if one is available at all, is often only superficial, and almost never comes with any defined metrics and decision moments. The Lean Start-up movement is the only place I’ve seen that is trying to improve that. Taking this approach out of the start-up and into all the product development and marketing departments in the world is going to take a while, but it will happen. If only because companies capable of doing that will completely out-perform the ones that don’t.

I don’t think the case here is as clear-cut as on the technology side, but we have a start. The principles of the Lean Start-up are based on the same ideas as Agile development: know what you want the result to be (validated learning) and iterate using short feedback loops. What to do, exactly, in those feedback loops is known for some types of learning, in some situations, but we’re still working on expanding our knowledge and skills in this area.

Management

As the solutions for the commercial and technical sides of things are rooted in experimentation and short feedback cycles, one might assume that the same would be true for the management side of things. And it’s true that those techniques have value in management in many situations. Many of the ideas on management are based on feedback cycles: Lean/Deming’s PDCA, for instance, but also Cynefin’s way of dealing with systems in the complex domain. But we do seem to have many different ideas about how management should be done, how organisations should be structured and what gives people the best environment to work in.

One place where some of these ideas have come together is the Stoos Network. It’s interesting because of the different backgrounds of the people involved: Agile, Beyond Budgeting, Radical Management, Industry Leaders. Their initial get-together this year resulted in a shared vision, with again an emphasis on learning.

“Organizations can become learning networks of individuals creating value, and the role of leaders should include the stewardship of the living rather than the management of the machine.” — Stoos Communique

This clearly expresses some of the shared values of the Stoos people, but still leaves quite a lot to the imagination. The people and ideas involved are interesting enough that I’ve volunteered to help organise one of the follow-up meetings, the ‘Stoos Stampede’, which takes place in Amsterdam on 6 and 7 July.

Next to Stoos, as I said before, there are many ideas on how to change management. Lean has had an impact, but though the Toyota Way certainly does talk about people and how to support them in an organisation, this is not the prime focus of most Lean implementations. CALM has started talking about combining Complexity, Agile and Lean ideas, but so far has also not posted any results. We’re still a bit lost at sea here.

So what would we need from a new management philosophy?

  • We’d need to know how to structure an organisation. Stoos clearly think the current semi-hierarchical default is not workable for the future, or at the very least severely suboptimal. But what do ‘learning networks’ look like? And how do we grow them?
  • We’d need to know how to provide the organisation with a purpose. A Mission, a Vision, a Goal. Whatever you want to call it. Most organisations do have some sort of mission statement, but it is usually so far removed from the everyday practice of everyone working within the organisation that it might as well be absent.
  • We’d need to know how to connect that purpose to the rest of the organisation. How do we link the work of everyone in the organisation to its stated purpose? If the mission is specific this should be possible. But if we connect the work too tightly, it could be stifling.
  • We’d need to know how to connect the organisation with its customers, its suppliers, its partners. This would be different out of necessity, as the structure of the organisation itself is different. It would also be different out of philosophy, as those relations take on a different meaning when the goals of the organisation beyond the monetary rise in importance.
  • We’d need to know how to align such organisations with the demands of the outside marketplace and governance. If the organisation is more oriented towards longer term viability and purposeful behaviour, this might have a good long term effect on profitability, but will certainly in the short term have a different financial behaviour. And budgeting and bookkeeping are areas that need very specific attention with an eye on the external rules these subjects need to comply with.

But apart from what new management would do to the idea of an organisation, there are also questions related more to the question of how to get there from here. Why would current managers want to change their organisations? Why would they want to change so drastically? There are plenty of reasons, but would they be convincing to the current CxO? What would they need to learn to be able to execute on such a vision? Will everyone enjoy working in these kinds of more empowering organisations, or will some people prefer something more hierarchical?

All of these things I want to know. Some of them we’ll discuss during the Stoos Stampede (propose a subject to discuss!), but personally I think we’re still at the very earliest stages of this particular change. In the meantime, we do have a few good examples, and some patterns that seem to work, and I’m going to try and get a few more organisations turned up to 11.