Outside in, whatever’s at the core

I haven’t written anything on here for quite a while. I haven’t been sitting still, though. I’ve gone independent (yes, I’m for hire!) and been working with a few clients, generally having a lot of fun.

I was also lucky enough to be able to function as Chet’s assistant (he doesn’t need one, which was part of the luck:-) while he was giving the CSD course at Qualogy recently. Always a joy to observe, and some valuable reminders of some basics of TDD!

One of those basics is the switch between design and implementation that you regularly make when test-driving your code. When you write the first test for some functionality, you are writing a test against a non-existent piece of code. You might create an instance of an as-yet non-existent class (Arranging the context of the test), call a non-existent method on that class (Acting on that context), and then call another non-existent method to verify results (Asserting). Then, to get the test to compile (but still fail), you create those missing elements. All that time, you’re not worrying about implementation, you’re only worrying about design.
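To make that concrete, here’s a minimal sketch of such a first test, in Java with JUnit, for a hypothetical Invoice class (all names invented for illustration). Nothing it references exists yet; writing it is a pure design activity:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

public class InvoiceTest {

    @Test
    public void totalOfASingleLineInvoice() {
        Invoice invoice = new Invoice();           // Arrange: this class doesn't exist yet
        invoice.addLine("coffee", 250);            // Act: neither does this method
        assertEquals(250, invoice.totalInCents()); // Assert: nor does this query
    }
}
```

Creating the empty Invoice class with stub methods then makes the test compile and fail, ready for implementation.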

Later, when you’re adding a second test, you’ll be using those same elements, but changing the implementation of the class you’ve created. Only when a test needs some new concepts will the design again evolve, but those tests will trigger an empty or trivial implementation for any new elements.
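Continuing the same hypothetical example: a second test reuses the existing design elements, and only forces the implementation to become real:

```java
@Test
public void totalSumsMultipleLines() {
    Invoice invoice = new Invoice();           // same design elements as before
    invoice.addLine("coffee", 250);
    invoice.addLine("tea", 175);
    assertEquals(425, invoice.totalInCents()); // a hard-coded 'return 250;' no longer passes
}
```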

So separation of design and implementation, a good thing. And not just when writing micro-tests to drive low-level design for new, fresh classes. What if you’re dealing with a large, legacy, untested code base? You can use a similar approach to discover your (future…) design.

As described in Michael Feathers’ great book ‘Working Effectively with Legacy Code’, when you have such a legacy code base, a good first step is to surround it with some end-to-end tests. Those tests are needed to be able to change the code without fear of breaking the existing functionality. But those tests can also be used as a way to discover a target design for the system. A design that is usually not at all clear when you start attacking such a beast.


So how would we do this in practice? Let’s say we want to use an ATDD/BDD style of definition to describe the expected behavior. That way, we can write the tests at a high level, and verify them together with end-users of the system, so we know we’re testing the right functionality. A tool such as FitNesse or Cucumber can be used to store our test cases, and the corresponding fixture/step-definition/glue code can be created to implement the test.

At this point, what usually happens is that those step definitions are implemented using outside interfaces to the system. Often GUI interfaces, using tools such as Selenium or Robot Framework. And though new glue code could be written to run the same tests against a new or refactored implementation, this is a missed opportunity.

If we implement the glue code against the API of the system that would be natural for expressing the functionality in our test scenarios, we are discovering the design of the system in the same way we do at a more granular level using our unit tests.
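As a sketch of the difference, here is what such glue code could look like in Java with cucumber-jvm. PolicyService and Policy are invented names: they are not the existing legacy interfaces, but exactly the target API we would like the system to have. The step definitions drive the design of that API, the same way the unit test above drove the design of Invoice:

```java
import static org.junit.jupiter.api.Assertions.assertTrue;

import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;

public class PolicyRenewalSteps {

    // Not Selenium page objects: this is the system API we wish we had.
    private final PolicyService policies = new PolicyService();
    private Policy policy;

    @Given("a policy that expires within {int} days")
    public void aPolicyThatExpiresWithin(int days) {
        policy = policies.createPolicyExpiringIn(days);
    }

    @When("the nightly renewal run executes")
    public void theNightlyRenewalRunExecutes() {
        policies.runRenewals();
    }

    @Then("the policy is renewed")
    public void thePolicyIsRenewed() {
        assertTrue(policies.isRenewed(policy));
    }
}
```

Initially, PolicyService can be implemented on top of whatever the legacy system offers (even driving the GUI behind the scenes); later, the same glue code can call the refactored system directly.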

Creating that design in this way still allows us to use whatever technology is appropriate for our legacy system to implement the system API. But it also provides us with a target design for the system, which might be called directly from the glue code at a later stage. It will also guide discovery of all the places in the existing system where functionality is not contained in the right place. And it allows a controlled and incremental refactoring of the system into a more maintainable state.

Everybody needs somebody

On occasion, I like to listen to podcasts. Some of the most interesting can be those that are from outside of the software industry. This week I was listening to Robb Wolf’s podcast, where he hosted guest David Werner. Robb talks mostly about diet, metabolism and exercise, and this episode was focused on that last one. Both Robb and David are coaches. In the sports sense of the word: they own gyms, and teach people how to exercise both for general health and to improve performance in some sports endeavor.

Listening to people who are experts in their area is always a joy. Because learning by osmosis is fun. Because listening to people talk at a higher level of experience than your own helps you find out what is really important in an area (well, sometimes…). A joy. And, remarkably, it’s also a joy to find how people in completely different lines of work have found ways of working and thinking that so resemble things in my own area of work.

So it was nice to hear David Werner talking extensively about improving in small steps. About the danger (in physical training) of taking too big a step, and having related smaller goals that won’t over-strain your current capacity. And about how often people don’t do this, and try to do pull-ups while they’re not even able to do a proper push-up, damaging their shoulders in the process. The fact that I’m still recovering from my own shoulder injury due to over-straining has only marginal influence on that.

drop down and give me twenty! (well, if you can. Otherwise 3?)

David went on to describe that based on that experience, he was building his new website in the same manner. He even mentioned that there was some Japanese word that is sometimes used for that. Kai-something?

Another piece of cross-industry wisdom is their discussion on how everybody, no matter how experienced, needs a coach. Robb joining David’s training helped him find areas where he could improve his fitness that he hadn’t found himself. I guess that the more of an expert you are in an area, the more expert your coach would need to be, but having an outside view of what you’re doing is the very best way to get better at what you do.

Everybody needs a coach

As a coach, or consultant, or whatever you want to call it, it’s sometimes hard to get this kind of feedback. That’s why initiatives such as Yves’ Pair Coaching, or one of the Agile Coach Camps, are very valuable. And why we like to go to all those conferences. But you can find opportunities in your everyday work as well, just by explicitly looking for them.

Agile On The Beach Talk

Ciarán and I had a wonderful time at the Agile on the Beach conference last week. We did the first full version of our talk: “The ‘Just Do It’ approach to change management”. I did an earlier version of the talk at the DARE conference in Antwerp earlier this year, but this longer version has gone through quite a few changes in the meantime.


The conference was set up very well, and it was great to talk to so many people working on Agile in the UK.

The slides for the talk are up on SlideShare.

We got some really nice responses. The next chance to catch us is at the Lean and Kanban Netherlands conference (“Modern Management Methods: Making Better Decisions”) in Maarssen on 7-8 October. We’ll have a new iteration of the talk, of course. Always on the move:-)
UPDATE: The video of the talk was just released, and can be found on the conference website. Our talk can also be viewed directly on YouTube.
Next year, Agile on the Beach will be on 4-5 September, and you can register your interest.

DevOps and Continuous Delivery

If you want to go fast and have high quality, communication has to be instant, and you need to automate everything. Structure the organisation to make this possible, and learn to use the tools that do the automation.

There’s a lot going on about DevOps and Continuous Delivery. Great buzzwords, and actually great concepts, but not altogether new. For many organisations, though, they’re an introduction to agile concepts, and sometimes that means some of the background people have when arriving at these things in the natural way, through Agile process improvement, is missing. So what are we talking about?

DevOps: The combination of software developers and infrastructure engineers in the same team with shared responsibility for the delivered software

Continuous Delivery: The practice of being able to deliver software to (production) environments in a completely automated way. With VM technology this includes the roll-out of the environments.

Both of these are simply logical extensions of Agile and Lean software development practices. DevOps is one particular instance of the Agile multi-functional team. Continuous Delivery is the result of Agile’s practice of automating any repeating process, and in particular enabled by automated tests and continuous integration. And both of those underlying practices are the result of optimizing your process to take any delays out of it, a common Lean practice.

In Practice

DevOps is an organisational construct. The responsibility for deployment is integrated into the multi-functional agile team in the same way that requirements analysis, testing and coding already were. This means an extension of the necessary skills in the teams. System administrator skills, but also a fairly new set of skills for controlling the infrastructure as if it were code, with versioning, testing, and continuous integration.

Continuous Delivery is a term for the whole of the process that a DevOps team performs. A Continuous Delivery (CD) process consists of developing software, automating testing, automating deployment, automating infrastructure deployment, and linking those elements so that a pipeline is created that automatically moves developed software through the normal DTAP (development, test, acceptance, production) stages.

So both of these concepts have practices and tools attached, which we’ll discuss briefly.

Practices and Tools

DevOps

Let’s start with DevOps. There are many standard practices aimed at integrating skills and improving communication in a team. Agile development teams have been doing this for a while now, using:

  • Co-located team
  • Whole team (all necessary skills are available in the team)
  • Pairing
  • Working in short iterations
  • Shared (code, but also product) ownership
  • (Acceptance) Test Driven Development

DevOps teams need to do the same, bringing the operations skill set into the team.

One question that often comes up is: “Does the entire team need to suddenly have this skill?”. The answer to that is, of course, “No”. But in the same way that Agile teams have made testing a whole-team effort, so operations becomes a whole-team effort. The people in the team with deep skills in this area will work together with some of the other team members in the execution of tasks. Those others will learn something about this work, and become able to handle at least the simpler items independently. The ops person can learn from developers how to better structure his scripts, enabling re-use. Or from testers how to test and monitor the product better.

An important thing to notice is that these practices we use to work well together as a team are mutually reinforcing: they strengthen each other’s effectiveness. That means that it’s much harder to learn to be effective as a team if you only adopt one or two of them.

Continuous Delivery

Continuous Delivery is all about shortening the feedback cycle of software development. And feedback comes from different places. Mostly testing and user feedback. Testing happens at different levels (unit, service, integration, acceptance, …) and on different environments (dev, test, acceptance, production). The main focus of CD is to get the feedback from each of those to come as fast as possible.

To do that, we need to have our tests run at every code change, on every environment, as reliably and quickly as possible. And to do that, we need to be able to completely control deployment of and to those environments, automatically, and for the full software stack.

And to be able to do that, there are a number of tools available. Some have been around for a long time, while others are relatively new. Especially the tools that are able to control full (virtualised) environments are still relatively fresh. Some of the testing tooling is not exactly new, but seems still fairly unknown in the industry.

What do we use that for?

You’re already familiar with Continuous Integration, so you know about checking in code to version control, about unit tests, about branching strategies (basically: try not to), about CI servers.

If you have a well-constructed CI solution, it will include building the code, running unit tests, creating a deployment package, and deploying to a test environment. The deployment package will be usable on different environments, with configuration provided separately. You might use tools such as the Cargo plugin for deployment to test (and further?), and keep a versioned history of all your deployment artefacts in a repository.

So what is added to that when we talk about Continuous Delivery? First of all, there’s the process of automated promotion of code to subsequent environments: the deployment pipeline.


This involves deciding which tests to run at which stage (based on their dependency on an environment, and their runtime) to optimize for a short feedback loop with as detailed a detection of errors as possible. It also requires deciding which parts of the pipeline run fully automatically, and where human intervention is still assumed to be necessary.
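One common way to make that split is to tag tests with the stage they belong to. A minimal sketch using JUnit 5 tags (the tag names and test class are invented for illustration); the CI server is then configured to run only the matching tags at each pipeline stage:

```java
import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;

public class CustomerImportTest {

    @Test
    @Tag("commit")      // fast, no environment needed: run on every check-in
    public void parsesASingleCustomerRecord() {
        // ... in-memory, unit-level check ...
    }

    @Test
    @Tag("acceptance")  // needs a deployed environment: run in a later stage
    public void importsACustomerFileEndToEnd() {
        // ... exercise a deployed test environment ...
    }
}
```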

Another thing that we are newly interested in for the DevOps/CD situation is infrastructure as code. This has been enabled by the emergence of virtualisation, and has become manageable with tools such as Puppet and Chef. These tools turn the definition of an environment into code, including hardware specs, OS, installed software, networking, and deployment of our own artefacts. That means that a test environment can be a completely controlled system, whether it is run on a developer’s laptop or on a hosted server environment. And that kind of control removes many common error situations from the software delivery equation.


Scaling Agile with Set-Based Design

I wrote a while back about set-based design, and just recently about a way to frame scaling Agile as a mostly technical consideration. In this post I want to continue with those themes, combining them in a model for scaled agile for production and research.

Scale

In the previous post, we found that we can view scale as a function of the possibilities for functional decomposition, facilitated by a strong focus on communication through code (customer tests, developer tests, simple design, etc.).

This will result in a situation where we have different teams working on different feature-areas of a product. In many cases there will be multiple teams working within one feature area, which can again be facilitated through application of well known design principles, and shared code ownership.

None of this is very new, and can be put squarely in the corner of the Feature Team way of working. It’s distinguished mainly by a strong focus on communication at the technical level, and, using all the tools we have available for that, this can scale quite well.


Innovation

The whole thing starts getting interesting when we combine this sort of set-up with the ideas from set-based thinking to allow multiple teams to provide separate implementations of a given feature that we’d like to have. One could be working on a minimum viable version of the feature, ensuring we have a version that we can get in production as quickly as possible. Another team could be working on another version, that provides many more advantages but also has more risk due to unknown technologies, necessary outside contact, etc.


This parallel view on distributing risk and innovation has many advantages over a more serial approach. It allows for an optimal use of a large development organization, with high priority items not just picked up first, but with multiple paths being worked on simultaneously to limit risk and optimize value delivered.

Again, though, this is only possible if the technical design of the system allows it. To work effectively like this, we need loosely coupled systems and agreed-upon APIs. We need feature toggles (see the sketch below). We need easy, automated deployment to test the different options separately.
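A feature toggle can be as simple as a map from feature names to on/off flags, consulted at the point where the implementations diverge. A deliberately minimal sketch (all names invented for illustration):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Both implementations of a feature stay in the code base;
// configuration decides which one a given deployment runs.
public class FeatureToggles {

    private final Map<String, Boolean> flags = new ConcurrentHashMap<>();

    public void set(String feature, boolean on) {
        flags.put(feature, on);
    }

    public boolean isOn(String feature) {
        return flags.getOrDefault(feature, false); // unknown features default to off
    }

    public static void main(String[] args) {
        FeatureToggles toggles = new FeatureToggles();
        toggles.set("risky-new-pricing", false); // the minimum viable version stays live

        // Both teams' work ships; the toggle picks the path at runtime.
        String engine = toggles.isOn("risky-new-pricing") ? "experimental" : "minimal";
        System.out.println("Using " + engine + " pricing engine");
    }
}
```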

Pushing Innovation Down

But even with all this, we still have an obvious bottleneck in communication between the business and the development teams. We are also limiting the potential contributors to innovation through the top-down structure of a product owner filling a product backlog.

Even most agile projects have a fairly linear view of features and priorities. Working from a story map is a good first step in getting away from that. But to really start reaping the benefits of your organisation’s capacity for innovation, one has to take a step back and let go of some control.

The way to do that is by making very clear what the goals for the organisation are, and for larger organisations what the goals for the product/project are. Make those goals measurable, and find a way to measure frequently. Then we can get to the situation below, where teams define their own features, work on them, and verify themselves whether those features indeed support the stated goals. (see also ‘Actionable Metrics at Organisational Scale’, and ‘On Effect Mapping and Pirate Metrics’)

This requires, on top of all the technical supporting practices already mentioned, that the knowledge of the business and the contact with the user/customer is embedded within the team. For larger audiences, validation of the hypothesis (that this particular, minimum viable, feature indeed serves the stated goals) will need to be A/B tested. That requires a yet more advanced infrastructural setup.

All this ties together into the type of network organisations that we’ve discussed before. And this requires a lot of technical and business discipline. No one ever said it was going to be easy.


The ‘Just Do It’ Approach To Change Management

Last Friday I gave a talk at the Dare 2013 conference in Antwerp. The talk was about the experiences I and my colleague Ciarán ÓNeíll have had in a recent project, in which we found that sometimes a very directive, Just Do It approach will actually be the best way to get people in an agile mindset.

This was surprising to us, to say the least, and so we’ve tried to find some theory supporting our experiences. And though theory is not the focus of this story, it helps if we set the scene by referencing two bits of theory that we think fit our experience.

Just Do It

A long time ago, in a country far away, there was this psychologist called William James, who wrote:

“If you want a quality, act as if you already have it.” – William James (1842-1910)

We often say that if you want to change your behaviour, you need to change your mind, be disciplined, etc. But this principle tells us that it works the other way around as well: if you change your behaviour this can change your thinking. Or mindset, perhaps?

For more about the ‘As If’ Principle, see the book by Richard Wiseman

Another piece of theory that is related is complexity thinking as embodied by the Cynefin framework. Cynefin talks about taking different actions when managing situations that are in different domains: simple, complicated, complex or chaos.

Cynefin Framework

The project

And in chaos, our story begins.

This particular project was a development project for a large insurance company. The project had already been active for over half a year when we joined. It was a bad case of waterfall, with unclear requirements, lots of silos, lots of finger-pointing and no progress.

The customer got tired of this, and got in a high-powered project manager who was given a far-reaching mandate to get the project going (i.e. no guarantees, just get *something* done). This guy decided that he’d heard good things about this ‘Agile’ thing, and that it might be appropriate here as a risk-management tool. Which was where we came in.

And this wasn’t the usual agile transition, with its mix of proponents and reluctants, where you coach and teach, but also have to sell the process to a large extent.

Here, everyone was external (to the customer), no-one wanted Agile or had much experience with it, but the customer was demanding it! And taking full responsibility for delivery, switching the project to a time-and-materials basis for the external parties.

A whole new ballgame.

Initial actions

We started out by getting everyone involved to one location. Up to then, people from four different vendors had been in different locations, in different countries even. Roughly 60 people in all, we now all worked from the office in Amsterdam. Most of these people had never met or even spoken!

We started with implementing a fairly standard Scrum process.

Step one was requiring multi-functional teams, mixing the vendors. This was tolerated. Mostly, I think, because people thought they could ignore it. Then we explained the other requirements. One week sprints, small stories (<2 / 3 days), grooming, planning, demo, retro. These things were all, in turn, declared completely impossible and certainly in our circumstances unworkable. But the customer demanded it, so they tried. And at the end of the first week, we had our first (weak) demo.

So, we started with basic Scrum. The difference was in the way this was sold to the teams. Or wasn’t.

That is not to say that we didn’t explain the reasons behind the way of working, or had discussions about its merit. It’s just that in the end, there was no option of not doing it.

And… It worked!

The big surprise to us was how well this worked. People adjusted quickly, got to work, and started delivering working software almost immediately. Every new practice we introduced, starting with testing within the sprint, met with some resistance, and within 4 to 6 weeks was considered normal.

After a while we noticed that our retrospectives changed from simply complaining about the process to open discussion about impediments and valuable input for improvements generated by our teams.

And that’s what we do all this for, right? The continuous improvement mindset? Scrum, after all, is supposed to surface the real problems.

Well. It sure did.

Automated testing

One of those problems was one which you will be familiar with. If you’ve been delivering software weekly for a while, testing manually won’t keep up. And so we got more and more quality issues.

We had been expecting this, and we had our answer ready. And since we’d had great success so far in our top-down approach, we didn’t hesitate much, and we started asking for automated testing.

Adoption

Resistance here was very high. Much more so than for other changes. Impossible! But we’d heard all those arguments before, and why would this situation be any different? We set down the rules: every story is tested, tests are automated, all this happens within the sprint.


And sure enough, after a couple of sprints, we started seeing automated tests in the sprint, and after a hit, velocity recovered to almost the level we had had before.

See. It’s Simple! Just F-ing Do It!

Limitations

Then after another 3-4 sprints, it all fell apart.

Tests were failing frequently, were only built against the UI, and had lots of technical shortcomings. And tests were built within the team, but still in isolation: a ‘test automation’ person built them, and even they were decidedly unconvinced they were doing the right thing.

In the end, it took us another 6 months to dig our way out of this hole. This took much coaching, getting extra expertise in, pairing, teaching. Only then did we arrive at the stop-the-line mindset about our tests that we needed.

Even with all of that going on, though, we were actually delivering working software.

And we were doing that much quicker than expected. After the initial delays in the project, the customer hadn’t expected to start using the system until… well, about now, I think. But instead we had a (very) minimal but viable product in time for calculating the 2012 year-end figures. And while we were at it, since we could roll out new environments at a whim (well… almost:-) due to our efforts in the area of Continuous Delivery, we could also do a re-calculation of the 2011 figures.

These new calculations enabled the company to free up a lot of money, so business-wise there’s no doubt this was the right thing to do.

But it also meant that, suddenly, we were in production, and we weren’t really prepared to deliver support for that. Well, we really weren’t prepared!

Kanban

And that brings us to one of the most invasive changes we did during the project. After about 5 months, we moved away from Scrum and switched to Kanban.

Just Do It

At that time I was the scrum master of one of the teams, the one doing all the operations work. And our changes in priority were coming very fast, with many requests for support of production. In our retros, the team stated that they felt nothing was getting done (our velocity was 0), while at the same time feeling stressed (overtime was happening). Not a good combination. This went on for a few sprints, and then we declared Kanban.

That’s not the way one usually introduces Kanban. Which is carefully, evolutionary, keeping everyone involved, not changing the process but just visualising it. You guys know how that’s supposed to be done, right?

This was more along the lines: “Hey, if you can’t keep priorities stable for a week, we can’t plan. So we won’t.”

Of course, we did a little more than that. We carefully looked at the type of issues we had, and the people available to work on them. We based some initial WIP limits on that, as well as a number of classes of service. And we put in some very basic explicit policies. No interruptions, except in case of expedite items. If we start something, we finish it. No breaking of WIP limits. And no days longer than 8 hours.

Adoption

That brought a lot of rest to the team. And it immediately showed in better productivity. It also made the work being done much more transparent for the PO.

It worked well enough that another team that was also experiencing issues with the planning horizon opted to ‘go Kanban’ as well. Later the rest of the teams followed, including the PO team.

Limitations

That is not to say there was no resistance to this change. The Product Owners in particular felt uncomfortable with it for quite some time. The teams also raised issues. All that generated many of those nice incremental, evolutionary changes. And still does. The mindset of changing your process to improve things has really taken root.

The most remarkable thing, though, about all that initial resistance was the direction. It was all about moving back to the familiar safety of… Scrum!

Wrap-up

I’d like to tell you more but this post is getting long enough already. I don’t have time to talk about our adventures with going from many POs to one, introducing Specification by Example, moving to feature teams, or our kanban ready board.

I do feel I need to leave you with some comforting words, though. Because parts of this story go against the normal grain of Agile values.

Directive leadership, instead of Servant Leadership? Top-Down change, instead of bottom-up support? Certainly more of a dose of Theory X than I can normally stomach!

And to see all of that work, and work quite well, is a little disconcerting. Yes, Cynefin says that decisive action is appropriate in some domains, but not quite in the same way.

And overcoming the familiar ‘That won’t work in our situation’ resistance by making people try it is certainly satisfying, but we’ve also seen that fail quite disastrously where deep skills are required. That needs guidance: Still no silver bullets.

Enlightened Despotism is perhaps a dangerous tool. But what if it is the tool that instills the habits of Agile thinking? The tool that forcibly shakes people out of their old habits? That makes the despot obsolete?

Practice can lead to mindset. The trick is in where to guide closely, and when to let go.

Conway’s Organizational Structure Heuristic

organizations which design systems … are constrained to produce designs which are copies of the communication structures of these organizations. — Melvin Conway

We often run into examples of Conway’s Law in organizations where siloed departments prompt architectural choices that are not supportive of good software design. The multi-functional nature of Agile teams is one way to prevent this from happening. But why not turn that around? If we know that organizational structure influences our software design, shouldn’t we let the rules of good software design inform our organizational structure?

Good software design leads to High Cohesion and Loose Coupling. What happens when we apply those principles to organizational design, and in particular to software teams? High Cohesion is about grouping together the parts that work on the same functionality, or need frequent communication. Multi-functional teams do just that, of course, so there we have an easy way of achieving effective organizational design.

Loose Coupling can be a little more involved to map. One way to look at that is that when communication between different teams is necessary, it should be along well-defined rules. That could be rules described in programming APIs when talking about development. Or rules as in well defined pull situations in a Lean process. Or simply the definition of specific tasks for which an organization has central staff available, such as pay-roll, HR, etc.

In general, though, the principles make it very simple: make sure all relevant decisions in day-to-day work can be made in the team context, with a need to find information or authorization in the rest of the organization only in exceptional situations.


Scaling Agile?

There’s a lot of discussion in the Agile community on the matter of scaling agile. Should we all adopt Dean Leffingwell’s Scaled Agile Framework? Do the Spotify tribe/squad thing? Or just roll our own? Or is Ron Jeffries’ intuition right, and do the terms scaling and agile simply not mix?

Ron’s stance seems to be that many of Agile’s principles simply don’t apply at scale. Or apply in the same way, so why act differently at scale? That might be true, but might also be a little too abstract to be of much use to most people running into questions when they start working with more than one team on a codebase.

Time and relative dimension in space

When Ron and Chet came around to our office last week, Chet mentioned that he was playing around with the analogy of coordination in time (as opposed to cross-team) when thinking about scaling. This immediately brought things into a new perspective for me, and I thought I’d share that here.

If we have a single team that will be working on a product/project for five years, how are they going to ensure that the team working on it now communicates what is important to the team that is working on it three, four or five years from now?

Now that is a question we can easily understand. We know what it takes to write software that is maintainable, changeable, self-documenting. We know how to write requirements that become executable, living documentation. We know how to write tests that run through continuous integration. We even know how to write deployment manifests that control the whole production environment to give us continuous deployment.

So why would this be any different when instead of one team working five years on the same product, we have five teams working for one year?

This break in this post is intentionally left blank to allow you to think that over.

Simple Design

Scrum really is bigger on the inside!

This way of looking at the problem simplifies the matter considerably, doesn’t it? I have found repeatedly that there are more technical problems in scaling (and agile adoption in general) than organizational ones. Of course, very often the technical problems are caused by the organizational ones, but putting them central to the question of scaling might actually help re-frame the discussions on a management level in a very positive way.

But getting back to the question: what would be the difference?

Let’s imagine a well constructed Agile project. We have an inception where the purpose of the product is clearly communicated by the customer/PO. We sketch a rough idea of architecture and features together. We make sure we understand enough of the most important features to split off a minimum viable version of it, perhaps using a story map. We start the first sprint with a walking skeleton of the product. We build up the product by starting with the minimal versions of a couple of features. We continue working on the different features later, extending them to more luxurious versions based on customer preference.

As long as the product is still fairly well contained, this would be exactly the same when we are with a few teams. We’d have come to a general agreement on design early on, and would talk when a larger change comes up. Continuous integration will take care of much of the lower level coordination, with our customer tests and unit testing providing context.

One area does become more explicit: dependencies. Where the single team would automatically handle dependencies in time by influencing prioritization, the multiple teams would need to have a commonly agreed (and preferably commonly built) interface in existence before they could be working on some features in parallel. This isn’t really different from the single-team version above, where the walking skeleton/minimal viable feature version would also happen before further work. But it would be felt as something needing some special attention, and cooperation between teams.
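A sketch of what such a commonly agreed interface might look like (all names invented for illustration): the teams negotiate and check in the interface first, one team builds the real implementation, and the other starts on its dependent feature immediately, against a stub:

```java
import java.math.BigDecimal;

// Agreed between the teams before parallel work starts.
public interface ExchangeRateProvider {
    BigDecimal rateFor(String fromCurrency, String toCurrency);
}

// While team A builds the real provider, team B develops and
// tests its dependent feature against a trivial stub.
class FixedRateStub implements ExchangeRateProvider {
    @Override
    public BigDecimal rateFor(String fromCurrency, String toCurrency) {
        return new BigDecimal("1.10"); // good enough to integrate against
    }
}
```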

If we put these technical considerations central, that resolves a number of issues in scaling. It could also allow for much better risk/profit trade-offs by integrating this approach with a set-based approach to projects. But I’ll leave that for a future post.

Show me the money!

Change processes are difficult to do. Most of them fail to have the intended results. The reasons for that can be many, of course. There is one, though, that is of particular interest to me today. 

“It is difficult to get a man to understand something, when his salary depends on his not understanding it.”
― Upton Sinclair

There are many change processes, re-organisations, agile adoptions, etc. that don’t aim for changes in the reward systems. This is only natural: changing what people earn is a very sensitive subject! It almost guarantees a good amount of resistance.

But what if you’re in the middle of an agile transition, and line and/or project managers are being rewarded with bonuses for completed projects? Or for reducing ‘idle’ time? And what if your change requires longer-term customer relationships, but your sales team is rewarded for new business?

Sometimes it’s enough to simply follow the money.


From the Lean Startup movement, we learn that it makes sense to choose your business metrics wisely. The same is true for the metrics you base your reward system on. But can we use those same Lean Startup principles to alleviate the risk of paying people to resist the change you need?

In a change process, employees are one of your stakeholders. Your customers. Your customers have needs and expectations that you will have to satisfy to allow the acceptance of your change process to grow. So how can you turn this around, and use the rewards program to generate support for your change?

I see two possible situations. One is that the rewards program is mostly in alignment with company goals. This happens mostly when there is some kind of profit-sharing system happening, with the distribution key fairly well fixed and independent of individual contributions. In this case, as long as the metrics for the change process are linked to the main company goals, it’s easy to also relate them to the rewards program. There can still be a challenge connecting those measures to day-to-day activities, but that is shared with our second scenario.

And that second situation is more difficult. If bonuses are awarded based on lower-level metrics, then even when the overall health of the company improves with your change process, it can still be detrimental to individual rewards. In those situations it is absolutely crucial to adapt the rewards system in lockstep with the change program.

Stop paying people to resist the change you need

An example:

In a software development environment, say you have a bonus system based on project completion and you go into an agile transformation. As part of the transformation, it becomes less important to deliver projects as a whole. According to the existing definitions, fewer projects are ‘completed’ even if more new features reach your end-users. You will have a situation where your project or line managers are incentivized to push for work that now has lower priority for the company.

So make changes. If at all possible, relate the reward system directly to company results. But don’t wait for your year-end or quarterly figures. Figure out how much each user or purchase or site-visit contributes to the overall revenue. Find out how much they cost. And use those kinds of figures to calculate a rough indication of bonus/profit-sharing figures (x% of revenue goes to the profit-sharing pool?). Those figures can be tracked day-by-day. Or week-by-week. And they can be used to change behaviour, and align interests.
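As a hypothetical worked example (all figures invented), the point is that such an indicator can be computed every day instead of once a quarter:

```java
public class DailyBonusIndicator {

    public static void main(String[] args) {
        long purchasesToday = 1_200;        // measured from the live system
        double revenuePerPurchase = 35.00;  // assumed average order value
        double costPerPurchase = 28.00;     // assumed cost per purchase
        double poolShare = 0.05;            // the 'x%' that goes to the pool

        double contribution = purchasesToday * (revenuePerPurchase - costPerPurchase);
        double poolToday = poolShare * contribution;

        // 1200 * (35 - 28) = 8400; 5% of that is 420 for today's pool.
        System.out.printf("Contribution: %.2f, profit-sharing pool: %.2f%n",
                contribution, poolToday);
    }
}
```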

Track rewards related metrics on a day-to-day basis, so they are an incentive to change behaviour

Of course, in most change processes there will be a transitional period, with some large projects still running. You can, for a short while, have these two types of incentives alongside each other. Probably. You don’t let your employees take a financial hit because you are in the middle of a change. That could require a little investment, such as a short-term guarantee that a certain minimum is kept. But make sure to change as quickly as possible to the fast-feedback figures.

Spikes, they’re sharp

One of the concepts that came from XP is the Spike. Especially in teams new to agile, there can be confusion on what a Spike is, and how to deal with them.

The best definition of a Spike I’ve found is this one:

“Spike” is an Extreme Programming term meaning “experiment”. We use the word because we think of a spike as a quick, almost brute-force experiment aimed at learning just one thing. Think of driving a big nail through a board.
— Extreme Programming Adventures in C# – Ron Jeffries

Let’s break this down.

Experiment

A Spike is an “… experiment aimed at learning just one thing”. That means that a Spike always starts with a question to be answered. A hypothesis. At the end of the Spike there is an answer to that question. The hypothesis has been proved, or disproved. And that proof takes the form of a piece of software. A document does not prove anything.

Quick

A Spike is quick. A Spike is usually time-boxed, to as short a period of time as we think is feasible to answer our question. That period of time should normally not exceed a day or so.

Brute-force

A Spike will not generate any production code. A spike can be a design session in code, just enough to prove that the design will work. It can be a technology investigation, just enough to see if that library will do the trick. All code from the Spike will be thrown away. It’s about learning, not production.

Rare

I know, that wasn’t in Ron’s definition. Just an additional remark. Spikes are rare. They occur only very infrequently. Once every couple of sprints sounds about right to me. That might decline with the age of the project, as Spikes are particularly useful to remove uncertainty on technical design early in a project.


In practice

When an occasion comes up to introduce a Spike into a sprint, you do the following:

  1. Reconsider: Do you really need to investigate, or are you just scared to commit to an actual, value delivering, user story because you don’t know the system very well?
  2. Reconsider again: Do you really need to investigate, or are you just scared to work together with the rest of the team without someone handing you a detailed specification?
  3. Define the question: ‘grooming’ a Spike means that you clearly and unambiguously define the hypothesis that you need to have answered by the spike. This is like the Spike’s Acceptance Criteria. That means clearly defined, and preferably having a boolean (yes / no) answer. Agree what further actions will result from a ‘yes’. And from a ‘no’.
  4. Define the time-box: To answer this question, what size investment are we prepared to do? Keep it short. A day is long. Sometimes long is needed. Sometimes.
  5. Prioritize: The Spike is on your backlog. You prioritize your backlog.
  6. Execute: Code to find your answer. Stop as soon as you have it. Stop if the time-box expires.
  7. Deal with the result: That’s the action agreed upon earlier. A timed-out Spike also is an answer to that question (‘too expensive to find out’), and also has an agreed upon action.

Have you done a Spike recently? How did that turn out? Did you get your answer? Or a time-out?
