Scaling: Local vs Full Vertical Scaling

It’s funny, isn’t it? Everybody is still talking about ‘scaling agile’. A whole industry has been created on the premise that large companies need process structures to help them manage pushing very large projects through huge sets of development teams.

Luckily, the DevOps movement (and the continuous delivery movement, I’m not sure they’re really separate) happened early into that process, so in most of these large scale processes there’s at the very least lip service to the idea that quality needs to be high, and delivery needs to be automated, even if they don’t aim for continuous.

Unfortunately, most of the work on the other side of the workflow was not included. Even though a number of initiatives have started in the last five years to bring product, marketing and UX people more into the fold and really include the customer in the process, this is still a very rare occurrence.

This isn’t all that surprising, I suppose. The longer-term planning and significant coordination overhead is familiar and comforting. And the large organisations that start using LeSS, SAFe, etc., will actually be better off than they were before. They’ll deliver more predictably, they’ll get into production quicker and they will feel more in control. Not bad, actually.

But even though their products will be in the hands of customers somewhat faster, the feedback loops from their customers will still be too slow. And the people that should be the link to the customers, the people in Marketing, Sales and Product Management, are way too far from the action. Too far to get fast feedback from the customer directly, but also too far from development to start using all the possibilities of technology to get more information on that customer and their preferences.

We scale our development organisations but ignore the capacity of their customers to make use of it

The solution is, predictably, to bring those roles into the fold and create a combined team where marketers, product managers, software developers, UX specialists, testers and operations people work together to deliver on business goals. We have already seen the game-changing effects that occur when software developers, testers and operations people truly combine their knowledge and efforts. This next step will have more impact, simply because the knowledge being combined in the team is much broader, and more directly linked to the customer.

There are new issues to resolve when we do this. Coordination on a product level needs to be done in a different way. And coordination on the level of the different areas of expertise also needs to be done in a different way. Much more thought needs to be put into matters of vision and strategy, and how those have to be communicated to the teams that are formed. Into how this translates into rewarding people. But we’ve solved those kinds of issues before, and we can tackle them again.

Stop trying to scale agile. Scale your organisation.

The three failures of Continuous Delivery

Everyone seems to want to get on the Continuous Delivery train. Rightfully so, I think. For most, though, it’s not an easy ride. From my work with clients and conversations with other coaches, there are a few common barriers to adoption.

In the end, the goal should be to be able to react faster to the market. And, to be honest, to finally be in actual control. But in business terms, it’s about cycle times. That’s what allows you to not just react quickly to market circumstances, but to actively probe markets and test product ideas.

So, as I mentioned, there are a few common problems companies run into. First, just the basic technical steps of creating a fully automated pipeline. Then, getting the tests sorted to a level that gives enough confidence to deploy to production whenever they’ve run. Only when those technical matters have been sorted do we get to the more interesting issues of allowing the business to make use of the possibilities offered by this newfound agility. Those have their own challenges.

Let’s have a look at the ways these particular subjects give teams trouble, in the hope that, forewarned, some will be able to avoid them. I’ll go into more detail on how to avoid them in subsequent posts.

Get a pipeline

Now, if you’ve paid any attention to the literature, you know that at the core, CD is all about important things like process and a culture of quality. Which is all true, but that probably won’t help you very much. Most development organisations have spent years wrapping themselves in workarounds and buffers all painstakingly created to prevent detection of their real problems. So taking a relatively small, technical, step in setting up a delivery pipeline at least seems somewhat feasible and will by its nature start showing where some of the real problems lie.

A Delivery Pipeline

From what I’ve seen, just trying to set up that pipeline is trouble enough. That’s why I’ve put it as the first barrier to adoption of CD. It may seem easy, but there turn out to be many basic technical challenges. Most teams go through those same pains, and it’s not really surprising. There’s quite a bit of (often new) knowledge and skills involved. And teams usually have to deal with all kinds of legacy code and infrastructure, which doesn’t make it any easier.

Mostly, what companies find here is that they are missing skills. And there are a lot of skills involved! A real DevOps approach should include operations knowledge in a team, but even then, most of the skills needed to create a modern, fully automated infrastructure take most organisations a long time to develop.
It’s not that these things are beyond those teams; it’s just that they’ve not had to deal with them before. Sure, it is easy enough to package your application in a Docker container and run it locally, but people are discovering it is quite a different thing to build it out further than that.


Trust your tests

Testing is the Achilles heel of many development teams. Most agile teams work hard to get and keep their code under test. Many fail. The advantage Continuous Delivery has is that it sets explicit expectations on quality. There’s really no room to skimp on testing if every push you do should end up in production.
As was the case for Continuous Integration, testing is what makes a Delivery Pipeline useful. It’s great if you have fully automated deployment, but if you have no way to determine whether the code you’re building can be trusted, you’ll still not be in production any sooner.
There are different ways teams fail with testing. Insufficient unit testing. Too limited protocol and service testing. A reliance on slow and brittle end-to-end testing. Skipping manual and exploratory testing, which may no longer be a gate before going into production but is still very much necessary.
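To make that concrete: the base of the pyramid should be fast, isolated unit tests, hundreds of which run in seconds. A minimal sketch, assuming JUnit 5; the PriceCalculator is made up and inlined only to keep the example self-contained:

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.api.Test;

    // A fast, isolated unit test: no network, no database, milliseconds to run.
    // Tests like these are what a pipeline can afford to run on every push.
    class PriceCalculatorTest {

        @Test
        void appliesPercentageDiscount() {
            PriceCalculator calculator = new PriceCalculator();
            assertEquals(90_00, calculator.discountedPriceInCents(100_00, 10));
        }

        // Inlined here only to keep the sketch self-contained.
        static class PriceCalculator {
            long discountedPriceInCents(long priceInCents, int discountPercentage) {
                return priceInCents * (100 - discountPercentage) / 100;
            }
        }
    }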


Let the business take advantage

Organisations that manage to get past the first two hurdles have at their disposal a tool that can bring them unimagined business advantages. But even having come this far, existing silos, processes and political positioning prevent organisations from profiting from their newly found technical capabilities.

Symptoms of this can be found in the ignoring, or even complete lack, of market data when deciding on new products and functionality. In continuing a practice of long-term planning, without built-in checks to see if the intended goals are being achieved. In basing priorities on political influence instead of business goals. And even in a reluctance to release new features to users once they’re available behind a feature toggle in production.

These issues can be the most difficult to address and need to be picked up at the highest management levels. They are attacked with changes in goal setting, reward systems, and organisational structure.

Interlocking pieces

As with any process, these different elements cannot exist for long without the others to support them. Testing withers if it cannot be run quickly and frequently enough. A delivery pipeline has little value if you have no way to know if you can trust the code that it’s building. And a highly evolved technical team that is not clearly and directly involved with business goals and customers will easily find more fulfilling work elsewhere.

That’s why my advice is to start in this order, picking up the next challenge as soon as there’s clear progress on the previous. You start building technical skills and then use that base as a flywheel to get a change in the rest of the company going.

Top Gear: A New Refactoring Kata

For the last five or six years, I’ve been using coding exercises during job interviews. After talking a little with a candidate I open my laptop, call up an editor, and we sit together to do some coding.

My favourite exercise for this is a refactoring kata that I came up with. I’ve always found how people deal with bad code they encounter more interesting than any small amount of code they can write in such a short period.

The form of the kata is very much inspired by the ‘Gilded Rose’ kata, but it’s intentionally smaller, so that it’s possible to get to a point where tests can be written and the code refactored in about an hour to an hour and a half.

The code is supposed to be the code of an automatic transmission. Someone has built it, but it was probably (hopefully!) never released. You are asked to make a few improvements so that the gear box can be made more energy efficient in the future. This is the description:

The code that we need to work in looks like this:

I’ve made Java, PHP and Ruby versions available in my GitHub repository:

If you add a language, let me know!

Don’t Refactor. Rebuild. Kinda.

I recently had the chance to speak at the wonderful Lean Agile Scotland conference. The conference had a very wide range of subjects being discussed on an amazingly high level: complexity theory, lean thinking, agile methods, and even technical practices!

I followed a great presentation by Steve Smith on how the popularity of feature branching strategies makes Continuous Integration difficult to impossible. I couldn’t have asked for a better lead-in for my own talk.

Which is about giving up and starting over. Kinda.

Learning environments

Why? Because, when you really get down to it, refactoring an old piece of junk, sorry, legacy code, is bloody difficult!

Sure, if you give me a few experienced XP guys, or ‘software craftsmen’, and let us at it, we’ll get it done. But I don’t usually have that luxury. And most organisations don’t.

When you have a team that is new to agile development practices, like TDD, refactoring and clean code, then learning that stuff in the context of a big ball of mud is really hard.

You see, when people start to learn about something like TDD, they do some exercises, read a book, maybe even attend a training. They’ll see this kind of code:

Example code from Kent Beck’s book: “Test-Driven Development: By Example”

Then they get back to work, and are on their own again, and they’re confronted with something like this:

Code Sample from my post “Code Cleaning: A refactoring example in 50 easy steps”

And then, when they say that TDD doesn’t work, or that agile won’t work in their ‘real world’ situation, we say they didn’t try hard enough. In these circumstances it is very hard to succeed.

So how can we deal with situations like this? As I mentioned above, an influx of experienced developers that know how to get a legacy system under control is wonderful, but not very likely. Developers that haven’t done that sort of thing before really will need time to gain the necessary skills, and that needs to be done in a more controlled, or controllable, environment. Like a new codebase, started from scratch.

Easy now, I understand your reluctance! Throwing away everything you’ve built and starting over is pretty much the reverse of the advice we normally give.

Let me explain using an example.


Extending the Goal in Scrum

In his post “The Goal in Scrum”, Ron Jeffries makes the case for having a proper, higher-level-than-stories Sprint Goal. As he says:

This is better, because it allows the wisdom and knowledge of the team to be fully exercised, and because it keeps focus on “what” is needed more than on just how it is to be done.

The point is well made, and true. Many Scrum teams would be much better off when adopting this practice. If you haven’t read the article yet, please do so now. It’s short and to the point, I’ll wait right here.

I think there are further steps, beyond the point that Ron describes, that a good Agile organisation should aspire to. Steps that help get closer to the XP idea of an on-site customer.

For an example, let’s take the same team that Ron is talking about, working on some web-shop-like domain. I’ll take a point in time a little further out than Ron did. They already learned his lesson, after all. And having done that, they have a nice web shop running, with a working checkout flow, and even a wish-list.

The shop has a reasonable number of visitors, and sells enough to keep everyone employed. But though new functionality is built regularly, growth in terms of revenue is very uneven and not clearly linked to the efforts of the development team. This worries the CEO. He even considers whether changes in the team (bigger/smaller?) are necessary. The PO advises a more considered approach. He goes to the team and tells them about the issue:

“It seems our work sometimes helps us make money, but other times has no effect at all!”

The team has a nice, long retro discussion about this. They remind the PO that they have sometimes raised questions about the practical use of some of the things they were building. He reminds them that those same things sometimes turned out to work well. And sometimes not. They realise they are missing an important feedback cycle.

Step one: Sprint Goal as a Business Test

The team is a very competent XP team, and knows that the best way to develop is to pull your assumptions forward. Test first. And change direction if the results tell you it’s not working. They agree with the PO to take a similar approach to the Sprint Goal: Describe the Goal as a test. Not a Unit Test. Not an Acceptance Test. Maybe a Business Test? One of the members talks about hypotheses but is voted down because the international team knows they’ll fail pronouncing that.
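To give an idea of what such a Business Test could look like, here’s a sketch, assuming JUnit 5. The AnalyticsClient and its checkoutConversionRate method are hypothetical stand-ins for whatever analytics API a team actually has, and the one-percentage-point threshold is just an example:

    import static org.junit.jupiter.api.Assertions.assertTrue;

    import java.time.LocalDate;
    import org.junit.jupiter.api.Test;

    // The Sprint Goal expressed as an executable check against production metrics.
    class SprintGoalTest {

        private final AnalyticsClient analytics = new AnalyticsClient();

        @Test
        void checkoutConversionImprovesByAtLeastOnePercentagePoint() {
            LocalDate today = LocalDate.now();
            double baseline = analytics.checkoutConversionRate(today.minusWeeks(4), today.minusWeeks(2));
            double current = analytics.checkoutConversionRate(today.minusWeeks(2), today);

            assertTrue(current - baseline >= 0.01,
                    "Checkout conversion did not improve: " + baseline + " -> " + current);
        }

        // Hypothetical stub; a real version would query the team's analytics product.
        static class AnalyticsClient {
            double checkoutConversionRate(LocalDate from, LocalDate to) {
                return 0.0; // placeholder
            }
        }
    }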

So for the next sprint, the PO and the rest of the team discuss what the Goal should be. The PO tells them that it seems many people put items in their shopping cart, and even go to checkout, but then stop and never go to payment. They agree that the goal should be to find out how to improve the conversion in that part of their sales funnel.

Step two: Information Radiator for Business Goal

The first story they agree on is to create a dashboard for the team to see this particular funnel. Easily done with their existing analytics software, but the team hasn’t been looking at that until now.
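The numbers behind such a dashboard are nothing more than step-to-step conversion rates. A sketch with invented event counts; a real version would pull these from the analytics software:

    import java.util.LinkedHashMap;
    import java.util.Map;

    // Prints each funnel step with its conversion relative to the previous step.
    class CheckoutFunnel {

        public static void main(String[] args) {
            Map<String, Long> stepCounts = new LinkedHashMap<>();
            stepCounts.put("viewed cart", 10_000L);    // all numbers invented
            stepCounts.put("started checkout", 4_200L);
            stepCounts.put("entered payment", 1_100L); // the drop the PO noticed
            stepCounts.put("completed order", 950L);

            long previous = -1;
            for (Map.Entry<String, Long> step : stepCounts.entrySet()) {
                if (previous > 0) {
                    double conversion = 100.0 * step.getValue() / previous;
                    System.out.printf("%-18s %6d (%.1f%% of previous step)%n",
                            step.getKey(), step.getValue(), conversion);
                } else {
                    System.out.printf("%-18s %6d%n", step.getKey(), step.getValue());
                }
                previous = step.getValue();
            }
        }
    }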

Step three: Generate ideas that could influence Business Goal

Overly simplified dashboard

Then they think of all the reasons why they think someone would stop at that point. Could it be that the total amount frightens them? Should that be in the short view of the shopping cart on the main page? Is it the account creation that stops people going forward? Or the selection of the payment method? One of the team thinks the absence of PayPal as an option could be the problem. They decide they don’t know. And decide to find out.

Step four: Verify ideas

The other stories they create are small changes. And as part of those stories they encode decisions. Decisions that will result in more stories. Or will result in quickly deleting the just built functionality.

One example is the amount: they make a change in the shopping cart view on the main page that shows the approximate total. The amount will be calculated client-side, not taking into account tax and such, which would require much more work. And they build it so that about 20% of their users get this new version, while the rest get the old one. And compare the results. They agree up front that only if this has more than a 1% effect on conversion will they build a more capable version of the feature in the next sprint.
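One way such an 80/20 split might be implemented is deterministic bucketing on the user id, so a returning visitor always sees the same variant. A sketch; the 20% figure comes from the story above, everything else is an assumption:

    // Deterministically assigns a user to the experiment or the control group.
    class ExperimentBucketing {

        static boolean inCartAmountExperiment(String userId) {
            // floorMod keeps the bucket in 0..99 even for negative hash codes.
            int bucket = Math.floorMod(userId.hashCode(), 100);
            return bucket < 20; // 20% of users get the new cart view
        }

        public static void main(String[] args) {
            System.out.println(inCartAmountExperiment("user-42")
                    ? "show cart with approximate total"
                    : "show old cart view");
        }
    }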

The team member who likes PayPal gets a go too: let’s just put a ‘Pay with PayPal’ button on there, and see how often it’s pressed. Again shown to only a small subset of users. And again, only if it results in an increase of 1% or more will they build the PayPal integration.

Step five: Build feature

Based on the results of their experiments, which were very easy and quick to build, they create further stories for the backlog. Depending on how much time they have left, some of those stories could even be added to the current sprint. They’ve been proven to support the Goal. But if that doesn’t happen, it’s also fine to plan them in the next sprint, or later. At least the business value of those stories is very well defined.


The PO is excited, but also a little worried. They’ll be building partial solutions. He is used to reporting on completed features to management. He works with the Scrum Master and one of the developers to define the kind of data they’ll be basing decisions on, and to create a good report on it. Then he goes to discuss those reports with management. Management likes the figures, but would like to add a few forecasts on how they influence revenue. That is quite easy to do, using the conversion figures along with average order size, and pretty soon they have a report that everyone is happy with.
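The forecast arithmetic itself is straightforward: multiply the measured change in conversion by traffic and average order size. A sketch, with every number invented for illustration:

    // Turns a conversion change measured in an experiment into a revenue forecast.
    class RevenueForecast {

        public static void main(String[] args) {
            long monthlyCheckoutStarts = 4_200;
            double oldConversion = 0.22;    // measured before the change
            double newConversion = 0.24;    // measured in the experiment group
            double averageOrderSize = 85.0; // in euros

            double extraOrders = monthlyCheckoutStarts * (newConversion - oldConversion);
            double extraRevenue = extraOrders * averageOrderSize;

            System.out.printf("Forecast: %.0f extra orders, about %.0f euro extra revenue per month%n",
                    extraOrders, extraRevenue);
        }
    }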

The PO now reports on which parts of the sales funnel they’ve worked on, what ideas they tested, which worked and which didn’t, and how they are influencing revenue. Because they employ small experiments, they don’t spend much on the ideas that don’t work. And the report makes very clear that the increases in revenue that occur are significantly more than the stable costs of the team, even if the difference isn’t constant.


Defining your Sprint Goal in measurable business terms (such as Pirate Metrics for a web shop) gives more transparency and closer integration between development teams and their stakeholders.

Agile 2015 Talk: Don’t Refactor. Rebuild. Kinda.

Monday, August 3, I had the opportunity to give a talk at the Agile Alliance’s Agile 2015 conference in Washington, D.C. My first conference in the US, and it was absolutely fantastic to be able to meet so many people I’d only interacted with on mailing lists and twitter. It was also a huge conference, with about 17 concurrent tracks and 2200 participants. I’m sure I’ve missed meeting as many people as I did manage to find in those masses.

Even with that many tracks, though, there were still plenty of people who showed up for my talk on Monday afternoon. So many that we actually had to turn people away. This is great: I’ve never been a fire hazard before. I was a bit worried beforehand. With my talk dealing with issues of refactoring, rebuilding and legacy code, it was a little unnerving to be programmed against Michael Feathers…

My talk is about how we have so much difficulty teaching well-known and proven techniques from XP, such as TDD and ATDD, and some of the evolved ones like Continuous Delivery. And that the reason for that could be that these things are so much more difficult to do when working with legacy systems. Especially if you’re still learning the techniques! At the very least, it’s much more scary.

I then discuss, grounded in the example of a project at VNU/Pergroep Online Services, how using an architectural approach such as the Strangler Pattern, combined with process rules from XP and Continuous Delivery, can allow even a team new to these techniques to adopt them surprisingly quickly and grow in proficiency along with their new code.

Rebuilding. Kinda.


The slides of my talk are available on SlideShare, included below.


I’ll devote a separate post in a few weeks to giving the full story I discuss here. In the meantime…

If you missed the talk, perhaps because you happened to be on a different continent, I’ll be reprising it next Wednesday at the ASAS Nights event, in Arnhem. I’d love to see you there!


From Here to Continuous Delivery

Situation Normal

There’s a clear pattern for software development. A pattern of lost opportunity.

In most, if not all, places where I’m called in, the base question deals with the inability to deliver. Management sees that the plans they have are simply not going to be realised.

Business opportunities are lost waiting. Waiting for the next available spot in the product roadmap. Waiting for the development team to finish ‘stabilizing’ the system. Waiting for a lengthy ‘refactoring’ phase to complete. Waiting for new servers to be delivered (in only six weeks!). Waiting for a PMO organisation to complete project initiation and relative prioritisation. Waiting for development to complete coding. Waiting for a testing phase to complete. Waiting for management to analyse long lists of known issues and risks so they can decide whether a release is possible.

Anyway, there’s waiting involved.

As with any interesting problem, this one doesn’t have a single identifiable cause. The marketing department will blame the development team for being too slow. The development team will blame the marketing team for not knowing what they want. The development manager will blame his team for being slow and writing buggy code and all his stakeholders for not being realistic. Upper management will blame the marketing and development managers for being slow and not delivering.

They are, all of them, right.

And you can’t point a finger at one root cause. Yes, development messed up and wrote crappy, unmaintainable code. Yes, the business focused too much on short term gains and put too much pressure on development to deliver early. Yes, management should have focused on mission and strategy so the rest of the company could have managed scope. Yes, all were too hasty hiring new people when the pressure was on, and too much incompetence entered the company.

I’m willing to bet you’ve heard most of those complaints, made a few of them, and been the subject of others.

How do we get out of this vicious circle?

Step one: Fix execution

Stop. You’re trying to do too many things at once. The first thing that needs to be done is to get your technical house in order. As long as you can’t deliver a working, tested, system at the drop of a hat, you’ll always be too slow.

So now, immediately, start changing your technical practices to support better quality. Deploy automatically, test automatically, test everything, and test in all manner of ways. Change your architecture to support quick change and better practices. And do all that while still delivering value.

That sounds difficult. It is. But it’s possible. I’ve done it. Others have.

You will slow down a little, initially. But you can use architectural changes, such as a Strangler Pattern or Branch by Abstraction, to quickly start over without throwing all your existing systems away.
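Branch by Abstraction, for instance, boils down to putting an interface in front of the old code, building the replacement behind that same interface, and switching over when it’s ready. A minimal sketch; all names are made up:

    // The abstraction both implementations live behind.
    interface InvoiceRenderer {
        String render(long orderId);
    }

    // Stands in for the existing, legacy code path.
    class LegacyInvoiceRenderer implements InvoiceRenderer {
        public String render(long orderId) {
            return "legacy invoice for order " + orderId;
        }
    }

    // The rebuilt implementation, grown alongside the old one.
    class NewInvoiceRenderer implements InvoiceRenderer {
        public String render(long orderId) {
            return "new invoice for order " + orderId;
        }
    }

    class InvoiceService {
        private final InvoiceRenderer renderer;

        // Both implementations can be in production at once; the switch-over
        // becomes a one-line decision instead of a big-bang replacement.
        InvoiceService(boolean useNewRenderer) {
            this.renderer = useNewRenderer ? new NewInvoiceRenderer() : new LegacyInvoiceRenderer();
        }

        String invoiceFor(long orderId) {
            return renderer.render(orderId);
        }
    }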

And focus on quality. This is hard. Management needs to be extremely explicit in this. Technical teams are used to a focus on progress and speed, and will automatically revert to those and subvert the quality of your system in any case of perceived pressure.

Focus on quality

Add all the elements that give you more control over your systems. That means fully automated deployment, including the automated testing you need to be confident enough to have every push of a developer go to production. Then make sure that actually happens.

Introduce feature toggles, so that your decision to supply a new feature to end-users becomes exactly that: a decision. And not related to your release cycle.
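At its simplest, a feature toggle is just a lookup that separates deploying code from exposing it to users. A sketch; a real implementation would read toggle state from configuration or a toggle service instead of a hard-coded set:

    import java.util.Set;

    class FeatureToggles {
        private static final Set<String> ENABLED = Set.of("wish-list");

        static boolean isEnabled(String feature) {
            return ENABLED.contains(feature);
        }
    }

    class CheckoutPage {
        String render() {
            StringBuilder page = new StringBuilder("checkout");
            // The button's code is deployed, but stays invisible until the
            // toggle is flipped, independently of any release.
            if (FeatureToggles.isEnabled("paypal-button")) {
                page.append(" + PayPal button");
            }
            return page.toString();
        }
    }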

Measure everything.

Measure the results of new functionality on your business. Whether it’s the use of features in an internal system, or all your Pirate Metrics for your product website.

Step two: Fix alignment

Now that you’re able to deliver, it’s time to start making use of the opportunities that gives you.

You already know, now, how to measure the effects new features have on the use of your product. For some, that is already a direct link to the money being made by the product. For the funnel on a product website or web-shop, we can calculate the revenue increase (or decrease) from a change. Other types of applications need a little more work and imagination, but we can certainly get to some measure of value.

If you can’t measure the effects of your work, you can be sure you are not doing the right things.

But that is all still a bottom-up approach. Effective for short term goals, but potentially dangerous if the metrics we use aren’t aligned with longer term business goals.


If you have your mission and vision defined for your company, and there’s a strategy that you expect will bring you there, you should now spend some time hammering out a small set of actionable metrics that you can use to prioritize opportunities on a day-to-day basis. The post linked to above shows a way to determine those, based on Gojko Adzic’s excellent Impact Mapping.

You pick a very few metrics. In fact, you should aim for that One Metric That Matters. This is your compass, steering the whole of the company. Be careful that you don’t have other, hidden metrics that undermine it, for instance in a target/bonus system.


This OMTM should be permanently visible to everyone. It should be continuously and automatically measured and updated. And it should be directly coupled to the various day-to-day activities of everyone in your company.

On the level of product development, all your priorities should be determined by the impact on your OMTM.

For most companies, this level of focus and clarity of purpose is far off. It requires clear vision and leadership. And will transform your organisation.

When are you getting started?

XP2015 Workshop: Continuous Delivery using Docker and Jenkins Job Builder


On 25 May, I had the opportunity to give a workshop at the XP 2015 conference in Helsinki on using Jenkins Job Builder to set up a delivery pipeline to build and deploy Docker images. The full source for the workshop can be found on my GitHub account. This post takes you through the full workshop.

The workshop slides can be found on slideshare:


Outside in, whatever’s at the core

I haven’t written anything on here for quite a while. I haven’t been sitting still, though. I’ve gone independent (yes, I’m for hire!) and been working with a few clients, generally having a lot of fun.

I was also lucky enough to be able to function as Chet’s assistant (he doesn’t need one, which was part of the luck :-) while he was giving the CSD course at Qualogy recently. Always a joy to observe, and some valuable reminders of some basics of TDD!

One of those basics is the switch between design and implementation that you regularly make when test-driving your code. When you write the first test for some functionality, you are writing a test against a non-existent piece of code. You might create an instance of an as-yet non-existent class (Arranging the context of the test), call a non-existent method on that class (Acting on that context), and then call another non-existent method to verify results (Asserting). Then, to get the test to compile (but still fail), you create those missing elements. All that time, you’re not worrying about implementation, you’re only worrying about design.
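In code, that first test could look like this (a sketch, assuming JUnit 5). At the moment the test is written, Wallet and both its methods don’t exist yet; the test is pure design:

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.api.Test;

    class WalletTest {

        @Test
        void addingMoneyIncreasesBalance() {
            Wallet wallet = new Wallet();                // Arrange: a class that didn't exist yet
            wallet.add(5_00);                            // Act: a method designed on the spot
            assertEquals(5_00, wallet.balanceInCents()); // Assert: the result we want
        }
    }

    // Created afterwards, just enough to compile; the first version returned 0
    // so the test could fail, and these bodies were then filled in to pass it.
    class Wallet {
        private long cents;

        void add(long amountInCents) {
            cents += amountInCents;
        }

        long balanceInCents() {
            return cents;
        }
    }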

Later, when you’re adding a second test, you’ll be using those same elements, but changing the implementation of the class you’ve created. Only when a test needs some new concepts will the design again evolve, but those tests will trigger an empty or trivial implementation for any new elements.

So separation of design and implementation is a good thing. And not just when writing micro-tests to drive low-level design for new, fresh classes. What if you’re dealing with a large, legacy, untested code base? You can use a similar approach to discover your (future…) design.
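A sketch of that idea: the test is written against the small, clean design you want to end up with, and the first implementation of that facade can simply delegate to the legacy system. Here a HashMap stands in for the legacy code; all names are illustrative:

    import static org.junit.jupiter.api.Assertions.assertTrue;

    import java.util.HashMap;
    import java.util.Map;
    import org.junit.jupiter.api.Test;

    // The test describes the future design, not the current mess.
    class OrderArchiveTest {

        @Test
        void archivedOrdersCanBeFoundById() {
            OrderArchive archive = new OrderArchive();
            archive.store(42L, "two rubber ducks");
            assertTrue(archive.findById(42L).contains("rubber ducks"));
        }
    }

    // The discovered facade; its first implementation wraps the existing system
    // (simulated here) until that can be replaced piece by piece.
    class OrderArchive {
        private final Map<Long, String> legacyStore = new HashMap<>();

        void store(long id, String description) {
            legacyStore.put(id, description);
        }

        String findById(long id) {
            return legacyStore.getOrDefault(id, "");
        }
    }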


Everybody needs somebody

On occasion, I like to listen to podcasts. Some of the most interesting can be those that are from outside of the software industry. This week I was listening to Robb Wolf’s podcast, where he hosted guest David Werner. Robb talks mostly about diet, metabolism and exercise, and this episode was focused on that last one. Both Robb and David are coaches. In the sports sense of the word: they own gyms, and teach people how to exercise both for general health and to improve performance in some sports endeavor.

Listening to people who are experts in their area is always a joy. Because learning by osmosis is fun. Because listening to people talk at a higher level of experience than you have helps you find out what is really important in an area (well, sometimes…). A joy. And, remarkably, it’s also a joy to find how people in completely different lines of work have found ways of working and thinking that so resemble things in my own area of work.

So it was nice to hear David Werner talking extensively about improving in small steps. About the danger (in physical training) of taking too big a step, and having related smaller goals that won’t over-strain your current capacity. And about how often people don’t do this, and try to do pull-ups while they’re not even able to do a proper push-up, damaging their shoulders in the process. The fact that I’m still recovering from my own shoulder injury due to over-straining has only marginal influence on that.

drop down and give me twenty! (well, if you can. Otherwise 3?)

David went on to describe that based on that experience, he was building his new website in the same manner. He even mentioned that there was some Japanese word that is sometimes used for that. Kai-something?

Another piece of cross-industry wisdom is their discussion on how everybody, no matter how experienced, needs a coach. Robb joining David’s training helped him find areas where he could improve his fitness that he hadn’t found himself. I guess that the more of an expert you are in an area, the more expert your coach would need to be, but having an outside view of what you’re doing is the very best way to get better at what you do.

Everybody needs a coach

As a coach, or consultant, or whatever you want to call it, it’s sometimes hard to get this kind of feedback. That’s why initiatives such as Yves’ Pair Coaching, or one of the Agile Coach Camps, are very valuable. And why we like to go to all those conferences. But you can find opportunities in your everyday work as well, just by explicitly looking for them.