On Discipline, Feedback and Management

Change is hard. If we know that about 80% of organisational change programs fail, then it’s easy to appreciate just how hard. Why is that? And, more importantly, what can we do to make it easier?

Recently, I saw a tweet from Alan Shalloway. He wrote that, back in the day, people were saying: Waterfall (though I don’t think they called it that, back then) is fine, people are just not doing it right. All we need to do is apply enough discipline, and a little common sense, and it will work perfectly! I think Alan was commenting on a refrain often heard in the Agile community about failing Scrum adoptions: You’re just not doing it right! You need to have more discipline!

This is, actually, a valid comment. In fact, both views are valid. On the one hand, just saying that people are not doing it right is not very helpful. Saying you need to be more disciplined is certainly not helpful. (Just look at the success rate of weight loss programs, or abstinence programs against teen pregnancies.) Change is hard, because it requires discipline. Any process requires discipline. The best way to ensure a process is successfully adopted is to make sure the process supports discipline. This is, again, hard.

This post talks about discipline. About how it can be supported by your process. About how it’s often not supported for managers in Agile organisations, and, of course, how to ensure that management does get that kind of support in their work.

In Support Of Discipline

Take a look at Scrum, throw in XP for good measure, and let’s have a look at what kind of discipline we need to have, and how the process does (or does not!) support that discipline.

Agile Feedback Loops

The eXtreme Programming practices often generate a lot of resistance. Programmers were, and are, very hesitant to try them out, and some require quite a bit of practice to do well. Still, most of these practices have gained quite wide acceptance. Not having unit testing in place is certainly frowned upon nowadays. It may not be done in every team, but at least they usually feel some embarrassment about that. Lack of Continuous Integration is now an indication that you’re not taking things seriously.

Working structurally using TDD, and Pair Programming, have seen slower adoption.

Extreme Programming Practices from XProgramming.com

If we look at some practices from Scrum, we can see a similar distribution of things that are popularly adopted, and some things that are… less accepted. The Daily Stand-Up, for instance, can usually be seen in use, even in teams just getting started with Scrum. Often, so is the Scrum Board. The Planning Meeting comes next, but the Demo/Review, and certainly the Retrospective, are much less popular.

Why?

All of these practices require discipline, but some require more discipline than others. What makes something require less discipline?

  • It’s easy!
  • Quick Feedback: It obviously and quickly shows it’s worth the effort
  • Shared Responsibility: The responsibility of being disciplined is shared by a larger group

Let’s see how this works for some of the practices we mentioned above. The Daily Stand-Up, for instance, scores high on all three items. It’s pretty easy to do, you do it together with the whole team, and the increased communication is usually immediately obvious and useful. Same for the Scrum Board.

The Planning Meeting also scores on all three, but lower on the first two. It’s not all that easy, as the meetings are often quite long, especially at the start. And though it’s obvious during the planning meeting that there is a lot of value in the increased communication and clarity of purpose, the full effect only becomes apparent during the course of the sprint.

The Demo is also not all that easy, and the full effect of it can take multiple sprints to become apparent. Though quick feedback on the current sprint will help the end-result, and the additional communication with stakeholders will benefit the team in the longer run, these are mostly advantages over the longer term. To exacerbate this effect, responsibility for the demo is often pushed to one member of the team (often the scrum master), which can make the rest of the team give it less attention than is optimal.

Retrospectives are, of course, the epitome of feedback. Or they should be. Often, though, this is one of the less successful practices in teams new to Scrum. The reasons for that are surprisingly unsurprising: there is often no follow-up on items brought forward in the retrospective (feedback on the feedback!), solving issues is not taken up as a responsibility for the whole team, and quite a few issues found are actually hard to fix!

The benefits of Continuous Integration are usually quite quickly visible to a development team. Often, they’re visible even before the CI is in use, as many teams will be suffering from the many ‘integration issues’ that come out late in the process. It’s not all that hard to set up, and though one person can set it up, the responsibility for ‘not breaking the build’ is certainly shared by the whole team.

Unit testing can be very hard to get started with, but again the advantages *if* you use it are immediately apparent, and provide value for the whole team. Refactoring? Well, anyone who has refactored a bit of ugly code can attest how nice it feels to clean things up. In fact, in the case of refactoring the problem is often ensuring that teams don’t drop everything just to go and refactor everything… Still, additional feedback mechanisms, like code statistics from Sonar or similar tools, can help in the adoption of code cleaning.
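The speed of that feedback loop is easy to demonstrate. Here’s a minimal sketch using Python’s standard unittest module (the `slugify` function and its expected behaviour are invented for illustration): save the file, run it, and within a second you know whether the code still does what you intended.

```python
import unittest

def slugify(title):
    """Turn a post title into a URL-friendly slug."""
    return "-".join(title.lower().split())

class SlugifyTest(unittest.TestCase):
    def test_lowercases_and_joins_words(self):
        self.assertEqual(slugify("On Discipline and Feedback"),
                         "on-discipline-and-feedback")

# Run the test and report: feedback within a second of saving the file.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(SlugifyTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

That near-instant red/green signal is exactly the kind of quick feedback that makes a disciplined practice easier to stick to.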

Pairing shows quick feedback, and is shared with at least one other person, but it’s often very hard, and has to deal with something else: management misconceptions. We’ll talk about that a little later. TDD‘s advantages are more subtle than those of simply testing at all, so the benefit is less obvious and quick. It’s also a lot harder, and has no group support. These practices do reinforce each other, though: testing, TDD, and refactoring are all easier to do when pairing.

So we can see at least some correlation between adoption and the way a practice supports discipline. This makes a lot of sense: the easier something is to do, the more obvious its benefits, and the more its burdens are shared by a group, the better it supports its users, and the more it is adopted.

Feedback within the team

One large part of this is: feedback rules. This is not a surprise for most Agile practitioners, as it’s one of the bases of Agile processes. But it is good to always keep in mind: if you want to achieve change, focus on supporting it with some form of feedback. One form of feedback that is used in e.g. Scrum and Kanban is the visualisation of the work. Use of a task board, or a Kanban board, especially a real, physical one, has a remarkable effect on the way people do their work. It’s still surprising to me how far the effects of this simple device go in changing behaviour in a team.

Seeing how feedback and shared responsibility help in adoption of practices within the team, we could look at the various team practices and find ways to increase adoption by increasing the level of feedback, or sharing the responsibility.

The Daily Stand-Up can be improved by emphasising shared responsibility: rotating the chore of updating the burn-down, ensuring that there’s not a reporting-to-scrum-master feel by passing around a token.

The Planning Meeting could be made easier by using something different from Planning Poker when many stories need to be estimated, or this estimation could be done in separate ‘Grooming’ sessions. Feedback could come earlier by ensuring Acceptance Criteria are defined during the planning meeting (or before), so we get feedback on every story as soon as it gets picked up. Or we can stop putting hourly estimates on tasks, to make the meeting go by quicker.

Retrospectives could be improved by creating a Big Visual Improvement backlog, and sticking it to the wall in the team room. And by taking the top item(s) from that backlog into the next sprint as work for the whole team to do. If it’s a backlog, we might as well start splitting those improvements up into smaller steps, to see if we can get results sooner.

All familiar advice for anyone who’s been working Agile! But what about feedback on the Agile development process as it is experienced by management?

Agile management practices

Probably the most frequently cited reason for failure of Agile initiatives is lack of support from management. This means that an Agile process requires changes in the behaviour of management. Since we’ve just seen that such changes in behaviour require discipline, we should have a look at how management is supported in that changed behaviour by feedback and sharing of responsibility.

First, though, it might be good to inventory what the recommended practices for Agile managers are. As we’ve seen, Scrum and XP provide enough guidance for the work within the team. What behaviour outside the team should they encourage? A lot has already been written about this subject. This article by Lyssa Adkins and Michael Spayd, for instance, gives a high-level overview of responsibilities for a manager in an Agile environment. And Jurgen Appelo has written a great book about the subject. For the purposes of this article, I’ll just pick three practices that are fairly concrete, and that I find particularly useful.

Focus on quality: As was also determined to be the subject requiring the most attention at the 2011 reunion of the writers of the Agile Manifesto, technical excellence is a requirement for any team that wants to be Agile (or just keep delivering software…) for the long term. Any manager dealing with an Agile team should keep stressing the importance of quality, in all its forms, above speed. If you go for quality first, speed will follow. And the inverse is also true: if you don’t keep quality high, slowing down until nothing can get done is assured.

Appreciate mistakes: Agile doesn’t work without feedback, but feedback is pretty useless without experimentation. If you want to improve, you need to try new things all the time. A culture that makes a big issue of mistakes, and focuses on blame will smother an Agile approach very quickly.

Fix impediments: The best way to make a team feel their own (and their work’s) importance is to take anything they consider to be getting in the way of that very seriously. Making the prioritised list of impediments that you as a manager are working on visible, and showing (and regularly reporting on) their progress, is a great way of doing that.

Note that I’ve not talked about stakeholder management, portfolio management, delivering projects to planning, or negotiating with customers. These are important, but are more in the area of the Product Owner. The same principles should apply there, though.

Let’s see how these things rate on ease, speed of feedback, and shared responsibility.

Focus on Quality. Though stressing the importance of quality to a team is not all that difficult, I’ve noticed that it can be quite difficult to get the team to accept that message. Sticking to it, and acting accordingly in the presence of outside pressures, can be hard to do. Feedback will not be quick. Though there can be noticeable improvements within the team, from the outside a manager will have to wait for feedback from other parties (integration testing, customer support, customers directly, etc.). If the Agile transition is a broad organisational initiative, then the manager can find support and shared responsibility among his peers. If that’s not the case, the pressure from those peers could be in quite a different direction…

Appreciate mistakes. Again, this practice is one in which gratification is delayed. Some experiments will succeed, some will fail. The feedback from the successful ones will have to be enough to sustain the effort. The same remarks as above can be given with regards to the peer support. Support from within a team, though less helpful than from peers, can be a positive influence.

Fix impediments. The type of impediments that end up on a manager’s desk are not usually the easy-to-fix ones. Still, the gratification of removing problems makes this a practice well supported by quick feedback. And there is usually both gratitude from the team and respect for action from peers.

We can see that these management practices are much less supported by group responsibility and quick feedback loops. This is one of the reasons why management is often where Agile transitions run into problems. Not because of a lack of will (we all lack sufficient willpower:-), but because any change requires discipline, and discipline needs to be supported by the process.

Management feedback

If we see that some important practices for managers are not sufficiently supported by our process, then the obvious question is going to be: how do we create that support?

We’ve identified two crucial aspects of supporting the discipline of change: early feedback, and shared responsibility. Neither of those is a natural fit in the world of management. Managers usually do not work as part of a closely knit team. They may be part of a management team, but the frequency and intensity of communication there is mostly too low to have a big impact. Managers are also expected to think of the longer term. And they should! But this does make any change more difficult to sustain, since the feedback on whether the change is successful is too far off.

By the way, if that feedback is that slow, the risk is also that a change that is not successful is kept for too long. That might be even worse…

When we talk about scaling Agile, we very quickly start talking about things such as the (much debated) Scrum of Scrums, ‘higher level’ or ‘integration’ stand-up meetings, Communities of Practice, etc. Those are good ideas, but are mostly seen as instruments to scale Scrum to larger projects, and align multiple teams to project goals (and each other). These practices help keep transparency over larger groups, but also through hierarchical lines. They help alignment between teams, by keeping other teams up to date on progress and issues, and help arrange communication on specialist areas of interest.

What I’m discussing in this post seems to indicate that those kinds of practices are not just important in the scenario of a large project scaled over multiple teams, but are just as important for teams working separately within their surrounding context.

So what can we recommend as ideas to help managers get enough feedback and support to sustain an Agile change initiative?

Work in a team. Managers need the support and encouragement (and social pressure…) that comes from working in a team context as much as anyone. This can partly be achieved by working more closely together within a management team. I’ve had some success with a whole C-level management team working as a Scrum team, for instance.

Be close to the work. At the same time, managers should be more directly following the work of the development teams. Note that I said ‘following’. We do not want to endanger the team’s self-organization. I think a manager should be welcome at any team stand-up, and should also be able to speak at one. But I do recognise that there are many situations where this has led to problems, such as undue pressure being put on a team. A Scrum of Scrums, or multi-level stand-up approach, can be very effective here. Even if there’s only one team, having someone from the team talk to management one level up daily can be very effective.

Visualise management tasks. Managers can not only show support for the transition in this way, they can immediately profit from it themselves. Being very visible with an impediment backlog is a big help, both to get the impediments fixed, and showing that they’re not forgotten. Starting to use a task board (or Kanban, for the more advanced situations) for the management team work is highly effective. And if A3 sheets are being put on the wall to do analysis of issues and improvement experiments, you’re living the Agile dream…

Visualise management metrics. Metrics are always a tricky subject. They can be very useful, even necessary, but can also tempt you to manage them directly (Goodhart’s Law). Still, some metrics are important to managers, and those should be made important to the teams they manage. Visualisation is a good instrument to help with that. For some ideas, read Esther Derby’s post on useful metrics for Agile teams. Another aspect of management, employee satisfaction, could be continuously visualised with Henrik Kniberg’s Happiness Matrix, or Jim Benson’s refinement of that.

Scrum Gathering London 2011 – day 3

The third day of the London Scrum Gathering (day 1, day 2) was reserved for an Open Space session led by Rachel Davies and a closing keynote by James Grenning.

We started with the Open Space. I’d done Open Spaces before, but this one was definitely on a much larger scale than what I’d seen before. After the introduction, everyone who had a subject for a session wrote their subject down, and got in line to announce the session (and have it scheduled). With so many attendees, you can imagine that there would be many sessions. Indeed, there were so many that extra spaces had to be added to handle them all. The subject for the Open Space was to be the Scrum Alliance tag-line: “Changing the world of work”.

I initially intended to go to a session about Agile and Embedded, as the organiser mentioned that if there weren’t enough people to talk about the embedded angle, he was OK with widening the subject to ‘difficult technical circumstances’. I haven’t done much real embedded work, but was interested in the broader subject. It turned out, though, that there were plenty of parties interested in the real deal, so the talk quickly started drifting to FPGAs and other esoterica, and I used the law of two feet to find a different talk.

My second choice for that first period was a session about getting the involvement of higher management. This session, proposed by Joe Justice and Peter Stevens (people with overlapping subjects were merging their sessions), turned out to be very interesting and useful. The group shared experiences of successfully (and less successfully) engaging higher management (CxOs, VPs, etc.) in an Agile change process. Peter has since posted a nice summary of this session on his blog.

My own session was about applying Agile and Lean-Startup ideas to the context of setting up a consultancy and/or training business. If we’re really talking about ‘changing the world of work’, then we should start with our own work. My intention was to discuss how things like transparency, early feedback, and working iteratively and incrementally could be applied to an Agile Coach’s work. My colleague and I have been trying to approach our work in this fashion, and are starting to get the hang of this whole ‘fail early’ thing. We’ve also been changing our approach based on feedback from customers and, more importantly, from not-customers. During the session we talked a little about this, but we also quickly branched off into some related subjects. Some explanation of Lean-Startup ideas was needed, as not everyone had heard of it. We didn’t get far in any discussion on using the customer/product development ideas from that side of things, though.

There was some discussion on contracts, and how those can fit with an agile approach. Most coaches are working on a time-and-materials basis, it seems. We have a ‘Money for nothing and your changes for free’ type contract (see http://jeffsutherland.com/Agile2008MoneyforNothing.pdf, sheets 29-38) going for consultancy at the moment, but it’s less of a fit than with a software development project. Time and materials is safest for the coach, of course, but also doesn’t reward them for doing their work better than the competition. How should we do this? Jeff’s ‘money back guarantee’ if you don’t double your velocity is a nice marketing gimmick, but a big risk for us lesser gods: Is velocity a good measure of productivity and results? How do we measure it? How do we determine whether advice was followed?

Using freebies or discounts to let customers test new training material was more generally in use. This has really helped us quickly improve our workshop materials, not to mention hone our training skills…

One later session was Nigel Baker’s. He did a session on the second day called ‘Scrumbrella’, on how to scale Scrum, and was doing a second one of those this third afternoon. I hadn’t made it to the earlier one, but had heard enthusiastic stories about it, so I decided to go and see what that was about. Nigel didn’t disappoint, and had a dynamic and entertaining story to tell. He made the talk come alive by the way he drew the organisational structures on sheets of paper, and moved those around on the floor during his talk, often getting the audience to provide new drawings.

There is no way I can do Nigel’s presentation style justice here. There were a number of people filming him in action on their cellphones, but I haven’t seen any of those movies surface yet. For now, you’ll have to make do with his blog-post on the subject, and some slides (which he obviously didn’t use during this session). I can, however, show you what the final umbrella looked like:

All my other pictures, including some more from the ScrumBrella session, and from the Open Space closing, can be found on flickr.

Closing Keynote by James Grenning on ‘Changing the world of work through Technical Excellence’

(slides are here)

The final event of the conference was the closing keynote by James Grenning. His talk dealt with ‘Technical Excellence’, and as such was very near my heart.

He started off with a little story, for which the slides are unfortunately not in the slide deck linked above, about how people sometimes come up to him in a bar (a conference bar, I assume, or pick-up lines have really changed in the past few years) and tell him: “Yeah, that agile thing, we tried that, it didn’t work”.

He would then ask them some innocent questions (paraphrased; I don’t have those slides, nor a perfect memory):

So you were doing TDD?

No.

Ah, but you were doing ATDD?

No.

But surely you were doing unit testing?

Not really.

Pair programming?

No.

Continuous Integration?

Sometimes.

Refactoring?

No!

At least deliver working software at the end of the sprint?

No…

If you don’t look around and realise that to do Agile, you’ll actually have to improve your quality, you’re going to fail. And if you insist on ignoring the experience of so many experienced people, then maybe you deserve to fail.

After this great intro, we were treated to a backstage account of the way the Agile Manifesto meeting at Snowbird went, and then to what subjects came up at this year’s reunion meeting. James showed the top two things coming out of the reunion meeting:

We believe the agile community must:

  1. Demand Technical Excellence
  2. Promote individual [change] and lead organizational change

The rest of his talk was a lesson in doing those things. He first went into more detail on Test Driven Development, and how it’s the basis for improving quality. To do this, he first explained why Debug-Later-Programming (DLP) will, in practice, always be slower than Test-First/TDD.

The Physics of Debug-Later-Programming

DLP vs TDD

Mr. Grenning went on to describe the difference between System Level Tests and Unit Tests, saying that System Level Tests suffer from having to test the combinatorial total of all contained elements, while Unit Tests can be tailored by the programmer to directly test the use of the code as it is intended, and only that use. This means that, even though System Level Tests can be useful, they can never be sufficient.
Of course, chances are small that you’ll write sufficient and complete Unit Tests if you don’t do Test Driven Development, as Test-After always becomes Release-First. Depending on manual testing alone is a recipe for unmaintainability.
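The combinatorial point can be made concrete with a toy example (a Python sketch; the functions and numbers are invented for illustration). Two independent components with three behaviours each need only 3 + 3 directed unit checks, while tests through the combined system face 3 × 3 paths:

```python
# Two independent components, each with three distinct behaviours.
def discount(customer_type):
    return {"regular": 0.0, "silver": 0.05, "gold": 0.10}[customer_type]

def shipping(weight_class):
    return {"light": 2.0, "medium": 5.0, "heavy": 9.0}[weight_class]

def order_total(price, customer_type, weight_class):
    return price * (1 - discount(customer_type)) + shipping(weight_class)

# Unit level: 3 + 3 = 6 directed checks cover every behaviour once.
assert discount("gold") == 0.10
assert shipping("heavy") == 9.0

# System level: exercising every path through order_total needs 3 * 3 = 9
# cases, and a combined result can mask which component was actually wrong.
assert order_total(100.0, "gold", "heavy") == 100.0 * (1 - 0.10) + 9.0
```

Add a third component with three behaviours and the system-level count triples to 27, while the unit-level count only grows by 3. That’s the gap he was pointing at.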

The Untested Code Gap

The keynote went on to talk about individual and organisational change, and what Developers, Scrum Masters and Managers can do to improve things. Developers should learn, and create tested and maintainable code. Scrum Masters should encourage people to be Problem Solvers, not Dogma Followers. He illustrated this with the example of Planning Poker. Since he invented Planning Poker, his saying that you shouldn’t always use it is a strong message. For instance, if you want to estimate a large number of stories, other systems can work much better. Managers got the advice to Grow Great Teams, Avoid De-Motivators, and Stop Motivating Your Team!

DeMotivators

It was very nice to be there for this talk: it validated my own stance on Technical Excellence, and taught me new ways of talking to people to help them see the advantages of improving their technical practices. Oh, and it gave some support for my own discipline in strictly sticking to TDD…

Article on Oracle and Agile posted (Dutch)

My article on using Agile on projects using Oracle technology was published in this month’s Optimize magazine, and the full text (in Dutch!) is now available on the Qualogy website.

The article is mainly meant as an introduction to agile principles and practices for Oracle developers who are new to them. It focuses on explaining short feedback loops, and the importance of technical practices such as Continuous Integration and (unit) testing for success with iterative and incremental development.

Scrum Gathering London 2011 – day 2

First day report here.

Steve Denning on Radical Management
(slides are here)

The second day of the London Scrum Gathering started with a keynote by Steve Denning. Where yesterday’s workshop had been about his storytelling work, today he was full-on about Radical Management. He quickly sketched the main points of his book (which everyone was given in their goody bag, so I now have two…), and its relation to, and implications for, all the Agile aficionados in the audience.

The message was not all that positive, in a way. Steve described how many companies were, mostly slowly, but often surely, managing themselves to death. For an audience full of Scrum coaches, the way in which this is happening did not really come as a big surprise: distance between the people doing the work and the customer, hierarchical and divisive management, and simply not enough focus on Steve’s golden rule: delight your customer!

One of the main implications of this keynote was a worrying one: Steve told some stories (of course:-) about companies that had tried to change, but couldn’t. Some individual Ford manufacturing plants, for instance, tried to work in different ways, got splendid results, but were ordered to stop being silly and conform to the ‘Ford Way’ of doing things. He stated that for those companies, the only solution was to wait for a generation of managers to leave the company, or for the company to be left behind, to die by natural (market) selection. The Borders way, so to speak, though Steve was pitting Wal-mart as the walking dead against Amazon’s vibrant business future.

So what happens when you tell a big audience of change agents that in many cases change will not succeed? You get a nice set of questions after your talk, for a start. One of the questions (the one asked by me:-) was: “taking the difficulty of changing existing managers as a given, what can be done to prevent new generations of managers from adopting the same thinking that the current generation has? Can we work with educational institutions to try and change the way new batches of MBAs think?” The answer was that he is indeed working on that, both with the Harvard Business Review and directly with some educational institutes. A glimmer of hope to end the keynote with.

Tobias Mayer on Dogma Free Scrum
(slides are here)

Next stop was a talk by Tobias Mayer. Tobias has caused some controversy in the last year by renouncing his Scrum Alliance certifications. Because of that, I was a little surprised, but certainly happy, to see that he would be talking at this Gathering. The subject was ‘Dogma Free Scrum’. Tobias started by showing some pictures from different sources describing Scrum. This was to illustrate that even with something small and fairly well defined, people will have very different interpretations of what this thing is. He then talked about his own ideas of what Scrum means. His model is certainly simple, and based on the PDCA loop:

But this picture is so generic that it doesn’t really describe what he thinks Scrum is. To complete it, he uses the following list of five principles:

Having given us his view, Tobias then asked us to split up into groups and draw our own picture of Scrum on an A1 (flip-over) sheet, keeping those principles in mind. Once we’d done that, we passed our pictures over to the next group, where we continued working on our neighbours’ pictures. This gave a nice insight into how we all see the same subject differently, and communicate it differently.

The next part of this talk took the idea of communication and interpretation further, talking about creating a Collaboration Space. The idea is that many, if not most, issues in a company can be traced to not making clear enough requests. This can be alleviated by focusing on making a Collaboration Space, in which the person making the request can collaborate with the people to whom the request is made, so that common understanding can be reached. The Requester focuses on making the ‘Why’ of the request as clear as possible, while the Responder focuses on the ‘How’. Together they determine the ‘What’.

Joseph Pelrine on Cynefin – Making Sense of Agile

Joseph Pelrine started his presentation by characterising the experience of listening to a talk by Dave Snowden, the originator of Cynefin, as getting hosed down by the full blast of a fireman’s hose. He promised us that his talk was intended to bring that down to a more familiar British level of water pressure.

Having just heard Snowden talk at the Lean and Kanban Benelux conference in Antwerp, I can certainly attest that some further explanation is very useful… Joseph elaborated on the nature of complexity, and talked about how these concepts are usually understood at only a very shallow level within the Agile community (actually quoting a tweet of mine to prove the point:-). He explained that the different areas in the Cynefin model should be handled differently.

He also mentioned that finding out what style to use is not as simple as saying: “Oh, software development is a complex system, so we have to handle anything within software development as complex.” Though the whole of software development is complex, there are plenty of things within the software development process that are not, and are within the complicated or simple areas. He had some figures on how different activities are distributed between the different areas.

The slides for this talk haven’t appeared on-line yet, but he does have an excellent paper on this subject out: On Understanding Software Agility – A Social Complexity Point Of View

If you are interested in this subject, make sure to also read Snowden’s HBR article “A Leader’s Framework for Decision Making”, and browse around on Cognitive Edge’s website for more background.

Mike Bassett (and Roman Pichler) on Lessons Learned from becoming Agile at Electronic Arts

The last talk of the day was an experience report from Electronic Arts. Mike Bassett described how they adopted agile for the development of a physics engine used in many EA games. The talk was mostly given by Mike, with Roman offering intermittent comments along the way.

Mike started with an overview of what they were building (the physics engine), and how this fit within the EA company. They had been a separate company before being taken over by EA. Most EA games had been using other physics engines, and they’d been incrementally improving their product to make it attractive for use in various games. Before their Agile adoption, they’d been driven by many different clients at the same time, and had many problems delivering to all of those on time, and with quality. He also described that, given the nature of the work, their people were highly intelligent, independent, and not naturally inclined to spend much time on social niceties.

Then he described how they changed things around to regular releases in close communication with one customer, one game, where other customers/games could use the finished product as well, and take advantage of the new features developed for this priming game. So, for instance, the new FIFA 2011 soccer game would have specific requests for functionality, and its team would be the one providing feedback to the physics engine team. But the ice hockey game also in development would use the same version as the soccer game, and benefit from that new functionality.

Mike also presented some metrics. Unfortunately, I haven’t found his slides anywhere on-line yet, so I can’t give a good overview of those metrics. As I recall, many of the expected improvements were there, but there were also a number of surprise outliers. Hopefully, the slides will be released soon so we can look at those numbers in more detail.

There’s one more post coming up about #sglon, talking about the third and final day. That day was an all-day Open Space event, followed by a fantastic closing keynote by James Grenning.

5 ways to make sure your sprint velocity is a useless number

Velocity always seemed a nice and straightforward concept to me. You measure how much you get done in a certain period of time, and use that to project how much you’ll probably get done in the same amount of time in the future. Simple to measure, enables empirical planning, simple to use in projections and planning. Measuring influences the work, though.

The concept of velocity is almost always used, even within companies that are still new to an Agile way of working. But simple though it seems, there are many ways velocity can lose its usefulness. I happen to think velocity is one of the better metrics, but if you’re not measuring it correctly, or misinterpreting the resulting numbers, it can become a hurdle to good planning.

Let’s have a look at some of the ways velocity doesn’t work, and how to avoid them.

Not a Number

First of all, velocity is not just a number. It’s always a range, or an average with error margins. Why is this important? Because if you do your planning based on a single number, without taking into account the normal variation in productivity that is always there, you can be sure your planning is not giving you a realistic idea of what will be done when.

In other words: realise that your planning is an estimation of when you think a certain set of work can be done. An estimation should always include uncertainty. That uncertainty is, at least partially, made explicit by taking the variance of your velocity into account.

Velocity charted with a confidence level around it

The simplest way to get pessimistic and optimistic values for velocity is to take the average of the three lowest and the three highest of the last ten sprints. Another way is to use a mathematical confidence level calculation. I don’t actually think there’s much difference between the two. Charting velocity in this way can get you graphs such as the one shown above.
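That ‘three lowest / three highest’ calculation can be sketched in a few lines of code (the velocity history here is invented, purely for illustration):

```python
def velocity_range(velocities):
    """Pessimistic and optimistic velocity from the last ten sprints:
    the averages of the three lowest and the three highest values."""
    recent = sorted(velocities[-10:])
    pessimistic = sum(recent[:3]) / 3
    optimistic = sum(recent[-3:]) / 3
    return pessimistic, optimistic

# ten sprints of measured velocity (made-up numbers)
history = [18, 22, 19, 25, 17, 21, 23, 20, 16, 24]
low, high = velocity_range(history)
print(low, high)  # 17.0 24.0
```

The point is not the arithmetic, but that planning always works with the pair of numbers, never with a single average.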

Then, of course, you have to actually use this in your release planning.

Release forecast using variation in velocity

Not an average

I know I just talked about velocity not being a single number, but this is different. Another way in which the averaging of points finished in a sprint can cause problems is when it doesn’t actually mean ‘points finished in a sprint’. Quite often, I’ve met teams that have a lot of trouble finishing stories within their sprints. The causes can be many, with overly large stories chief among them. Sometimes these teams have correctly realised that if they’ve only finished part of a story, they don’t get partial ‘credit’ for it in the sprint’s velocity. But then they do take credit for the full number of story points in the subsequent sprint, once they’ve actually finished the story.

Average?

So here we can see what happens then. The average is around 20. Should this team plan 20 story points’ worth of work into their next sprint? Probably not a good idea, right? If the variation in velocity is very high, there is usually an underlying problem.

What one could do in this instance is re-estimate any unfinished stories, so that only the work actually done in the later sprint counts towards that sprint. Yes, you’ll ‘lose’ some of the points you originally estimated, as they won’t be counted anywhere as work done. But you’ll immediately get a more realistic figure for your velocity, and an immediate reason to make those stories smaller, as they simply won’t fit in a sprint if the velocity is realistic.

For release planning, you’ll not be depending on a weird fluctuation of velocity any more, but on a more dependable figure with less variation.

Variable Sprint Length

If you keep changing the length of your sprints, velocity will not be very useful. But, I hear you say, we can just calculate the expected velocity for a 2-week sprint by taking two-thirds of the velocity of a 3-week sprint! That would be nice, but unfortunately it doesn’t work like that. The regular rhythm of sprints creates certain expectations within the team. The team learns how much it can take on in such a period. Also, the strict time-box of an agreed sprint length is very useful in bringing existing limitations into view.

Bring problems to the surface

The famous ‘lowering the water level brings the rocks to the surface’ picture of lean waste elimination is a useful way to view this.

Estimating In Time

If someone asks me how long I’m going to take to do a particular piece of work, I’ll normally answer saying it will take a certain amount of time. This is quite natural, and answers fairly directly the question posed. When someone asks me when I can have such a particular piece of work ready, again I could answer by giving a specific date and time.

If someone asks me how much work I can do in a work week, though, I might be tempted to answer: “40 hours”. And I would probably be right! If I were then, at the end of that week, to look back and check how much time I actually worked, it would probably not be too far off those 40 hours. But I wouldn’t learn much from that observation.

By using the concept of ‘Story Points’, an abstract measure for estimation, we can still estimate the effort for a certain piece of work. And if we then give other pieces of work an estimate in Story Points, relative to the story we already estimated, we have created a new measurement system! So for instance if ‘allowing a user to log in’ is 3 Story Points, then ‘sending a user a password reminder’ could be 5, if it’s about (but not quite) twice as big.

Of course, in the end you will want to relate those abstract Story Points back to time, since you will often want to determine when you can release a piece of software. But you don’t estimate that, you measure it: it turns out that in one sprint, we can do about 12 Story Points, give or take a few. If that’s the case, we will be able to release functionality X by, at the latest, date Y (see the release planning graph earlier).
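That projection is a small calculation: divide the remaining backlog by the pessimistic and optimistic velocities to get a best and worst case in sprints. A minimal sketch (the figures are invented):

```python
import math

def sprints_needed(remaining_points, pessimistic, optimistic):
    """Best- and worst-case number of sprints left, rounding up,
    since a partially used sprint is still a whole sprint."""
    worst = math.ceil(remaining_points / pessimistic)
    best = math.ceil(remaining_points / optimistic)
    return best, worst

# 120 points left, velocity somewhere between 17 and 24 per sprint
best, worst = sprints_needed(120, pessimistic=17.0, optimistic=24.0)
print(best, worst)  # 5 8
```

Multiplying those sprint counts by the sprint length, from today, gives the earliest and latest release dates to communicate.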

Some people do the same type of trick by using ‘ideal days’ to estimate, and determining the ‘focus factor’: the percentage they actually managed to get done. Mathematically this works fine, but it’s very hard for people to let go of their feeling of ‘when it will be done’ and estimate in ‘real’ ideal days.
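The focus-factor arithmetic itself is trivial; a minimal sketch, with invented numbers:

```python
def focus_factor(ideal_days_done, available_person_days):
    """Fraction of available time that translated into finished work."""
    return ideal_days_done / available_person_days

# 5 people x 10 working days in a sprint, 30 'ideal days' of work finished
ff = focus_factor(30, 5 * 10)
print(ff)  # 0.6

# capacity for the next sprint, assuming the same availability
print(5 * 10 * ff)  # 30.0
```

The maths works out; the hard part, as said, is getting honest ‘real’ ideal-day estimates in the first place.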

Including bug-fix time in your velocity

I’ve noticed that this one can be a bit controversial, but it’s an important factor in the usefulness of your velocity figure.

As a team, you will encounter work that is not part of creating new functionality from your Product Owner’s wishlist. Often, this work presents itself as fixing defects found in your software. Most of the time, those defects exist because new functionality was added in an earlier sprint.

Now it can be that such a defect is discovered, and needs to be fixed right away, because it truly interferes with a customer’s use of your system. Those types of defects are usually not estimated, and certainly not taken into account when calculating your velocity for that sprint.

Other defects are less critical, and will/should be planned (prioritised by your Product Owner) to be taken into a sprint. Those types of defects sometimes are estimated, but still should not be taken into account when calculating your velocity!

Why not? Well, if you see the goal of your team as delivering new software for the Product Owner, then a defect is simply a sign that some delivered work was not completely done, usually in the form of insufficient testing. Fixing such a defect is of course very important. But it is slowing you down from the primary goal of delivering new functionality! Adding the points for fixing the defect to your velocity would make it seem that you are not going any slower (maybe even faster!). It would give a false impression of the speed at which you’re getting the Product Owner’s work done, and might skew release planning because of that.

Also, it would mean that the improvements in quality you’ve been working so hard on will not be visible in your velocity. Now, is that right?
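The bookkeeping this implies is simple: when summing up a sprint, count only the points attached to new functionality. A sketch, with invented item kinds and numbers:

```python
def feature_velocity(sprint_items):
    """Sum only the points for new functionality; points spent fixing
    defects are rework, not new value delivered, so they are excluded."""
    return sum(points for points, kind in sprint_items if kind == "feature")

# a sprint containing three stories and one planned defect fix
sprint = [(5, "feature"), (3, "feature"), (2, "defect"), (8, "feature")]
print(feature_velocity(sprint))  # 16, not 18
```

Tracking the excluded defect points separately is still useful: watching that number shrink is exactly where your quality improvements become visible.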

Code Cleaning: A Refactoring Example In 50 Easy Steps

One of the things I find myself doing at work is looking at other people’s code. This is not unusual, of course, as every programmer does that all the time. Even if the ‘other people’ is himself, last week. As all you programmers know, rather often other people’s code is not very pretty. Partly, this can be explained because, as every programmer knows, no one is quite as good at programming as himself… But very often, way too often, the code really is not all that good.

This can be caused by many things. Sometimes the programmers are not very experienced. Sometimes the pressure to release new features is such that programmers feel pressured into cutting quality. Sometimes the programmers found the code in that state, and simply didn’t know where to start to improve things. Some programmers may not even have read Clean Code, Refactoring, or the Pragmatic Programmer! And maybe no one ever told them they should.

Recently I was asked to look at a Java codebase, to see whether it would be possible for our company to take it into a support contract, or what would be needed to get it to that state. This codebase had a number of problems: a lack of tests, lots of code duplication, and a very uneven distribution of complexity (lots of ‘struct’ classes, with the logic that should be in them spread out, and duplicated, over the rest). There was plenty wrong, and Sonar quickly showed most of it.

Sonar status

When discussing the issues with this particular code base, I noticed that the developers already knew quite a few of the things that were wrong. They did not have a clear idea of how to go from there towards a good state, though. To illustrate how one might approach this, I spent a day making an example out of one of the high complexity classes (cyclomatic complexity of 98).

Larger examples of refactoring are fairly rare out there, so I figured I’d share this. Of course, package and class names (and some constants/variables) have been altered to protect the innocent.

I’d like to emphasize that none of this is very special. I’m not a wizard at doing this, by any standard. I don’t even code full time nowadays. That’s irrelevant: The point here is precisely that by taking a series of very simple and straightforward steps, you can improve your code tremendously. Anyone can do this! Everyone should…

I don’t usually shield off part of my posts under a ‘read-more’ link, but this post had become HUGE, and I don’t want to harm any unsuspecting RSS readers out there. Please, do read the whole thing. And: Let me (and my reader and colleagues) know how this can be done better!

Continue reading

Scrum for a management team

Scrum (or something that looks like it) can be used for things besides software development projects. Just look at the interest for ‘Scrum Beyond Software’ last fall. But there are some things that need to be taken into account when doing so. So what are those, when applying Scrum for a distributed management team? This is a report on our experiences doing just that.

The company

Our company, Qualogy, is based in The Netherlands. It is a consultancy specialised in Java and Oracle technologies that has been using Scrum and Agile, both internally and for customers, for a while now. It has a daughter company in Suriname, a former Dutch colony where Dutch is a national language, and where software development projects are still relatively rare. This daughter company is primarily meant to serve the local Suriname market, using local talent, at local prices. Next to that, we also do small-scale outsourcing, where the lack of a language barrier (and largely of a culture barrier) with The Netherlands is a major advantage.

We’ve been using Scrum in Suriname with the development teams there, with good success. A combination of local and remote coaching has worked well to get the teams familiar with the process, working on location at Suriname customers, and internally for both Dutch and Suriname customers. This was successful enough that when we found the local management team not working in close step with the central management team, we quickly had the idea of trying Scrum with the management team as well!

Situation in the management team

While visiting the Suriname office, a number of issues had been raised, both by the local management team and by the visiting central management team. Some of the problems encountered were:

  • lack of transparency, both within the local team and towards the central office
  • lack of progress in certain areas
  • problem resolution was difficult and often required interventions

Apart from the specific issues at the outset, there are also differences between your average development team adopting Scrum and a (distributed) management team. When researching this, I came across some discussions of those differences. Some are new versions of familiar issues when starting a new Scrum implementation; some were entirely new to me.

  • The management team has very diverse skills and areas of responsibility
  • The operational management has to deal with a lot of interruptions
  • Managers may be even more averse than developers to having their progress ‘checked’ (made visible)
  • Explicit priorities are much more likely to be interpreted as ‘micro-managing’

Creating a backlog

To get started with our Management Scrum, we began by formulating a backlog. The backlog was, at least at a higher level, quite clear. This was actually an interesting learning point for me: the translation from high-level business goals to specific actions is much more direct at the management level, and the number of stakeholders is much more limited (in this case: one). The Product Owner for the team was the manager responsible for the Suriname division.

The backlog immediately revealed one of the issues mentioned above: in a large number of cases, the backlog item/story was in effect already assigned to a particular team member (sales manager, operations manager, …). Team members had specific skills (or networks) that enabled them to pick up a story. We took this as a given, understanding that it would be an issue to overcome as far as teamwork is concerned, and allowed the pre-assignment of stories.

Then, since the Product Owner would only occasionally be present in Suriname, the product backlog had to be available on-line. For this, we used Pivotal Tracker. I always try to avoid using electronic planning boards, but for this situation it was appropriate. Pivotal Tracker is a very nice tool, with purposefully limited customisation options.

With the initial backlog ready, we moved on to estimation. I had explained the concept of Story Points before, but the team wasn’t quite comfortable using those. Additionally, the problem of ‘unsharable stories’ due to the different areas of work mentioned above meant that it would probably be hard to arrive at a good velocity figure in Story Points. This resulted in us adopting ‘ideal days’ and a focus factor as a way of managing reality-based planning. The team estimated the backlog items, and split some up into more manageable chunks in their first planning meeting.

And then the first sprint could start. Almost. There was still some discussion on the length of the sprint. The PO initially wanted sprints of four weeks, or one month. After sizing the backlog, it became clear that there was an advantage to keeping the items on the backlog small, and feedback from some of the development teams in the company convinced him that a shorter sprint length would be beneficial. So we arrived at two weeks.

First Sprint

The work started well, with items getting picked up by all team members. We started with a weekly ‘stand-up’ meeting (held through Skype), because a daily schedule was considered too intensive, and also feared to be too invasive: there was some fear of micro-management within the team.

The first couple of meetings surfaced a few issues. Though some stories had been finished, quite a few were stuck in ‘in-progress’. A sure sign something was up! We discussed those stories, and the reasons for the delays. One reason was simply that other things had to be done first. The team was assured that they could always add items to the backlog themselves, as long as the PO was notified so he could prioritise. And it was normal that the day-to-day business would take time, and we’d have to take that into account in determining our velocity.

Another reason for slow progress was that some of the stories were not clearly enough defined. For the PO, the large size the team attributed to these stories had been a surprise during backlog sizing. He hadn’t wanted to push for a lower estimate, though, since that should be the team’s choice. This issue was solved by the PO pairing up with the team member to formulate an outline together (of the document the story was about), which automatically led to a good way to split up the user story, after which the team could continue autonomously.

As you can see from the above, we encountered a number of (fairly familiar) issues while working on the first sprint. So by the time we held our first retrospective meeting, we had already made some improvements.

The major point that came out of the retro was a wish for more mutual transparency: the members of the team wanted more information on what the others were doing, and how far items had progressed. To accommodate this, we resolved to start with a daily stand-up, to make sure items were actually moved on the Pivotal Tracker board as soon as they were finished, and to split up work into smaller increments.

With the second sprint in progress, and some good results already, we are quite happy with the way Scrum is turning out for the team. We are seeing mostly good results, with more progress being made on strategic projects, and more visibility (and appreciation) for local issues. The team was particularly struck by the fact that, with the new levels of transparency and communication, even given their different areas of expertise, it has become easy and normal for them to pick up parts of each other’s work regularly.

Kanban for a Recruitment Agency

At Qualogy, where I work, we have a team that acts as a recruitment agency for freelancers towards our customers, so that we can help our customers fill open positions even when we don’t have a suitable internal candidate. This team’s main goal is to find suitable candidates as quickly as possible, offer them to the customer, and help both to come to a mutually acceptable agreement.

The team of three that takes care of this work was looking for ways to improve the service to their customers. That meant being quicker to offer possible candidates to customers, but also to provide faster feedback towards (rejected) candidates.

As a way to achieve this, we decided to try a basic Kanban system, covering the process from receiving a customer’s request to either an accepted candidate, or all candidates rejected and notified.

We started out by charting the existing process and identifying the parts that were in our control, and those that were not. There were obvious places where we would simply be waiting for a response from a customer (‘We do/don’t want to talk to this candidate’, and ‘We do/don’t want to take on this candidate’). This made the initial sketch of the process look as follows:

Process sketch

We played around a bit to fit this partially conditional flow onto a Kanban board. Initially we had the idea of a linear board with each state in our process mapped to a column, where columns could be skipped if they were not applicable. This left us a bit unsatisfied, as part of the agreed process was not really explicit. By looking at the whole of the process as consisting of two parts, first finding candidates, and second offering them to customers, we got closer to a working solution. Then, by making the difference in flow for accepted (at least for an interview) and rejected candidates explicit on the board, with the token from the first part of the process (finding candidates) cut in half between candidates offered and those rejected, we ended up with the following structure:

Initial Kanban

As part of the use of Kanban, we also wanted to keep track of how long it takes to go through the stages of the process. As is recommended practice, we added the date of entering the relevant states to each token:

Cards with dates
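Those dates make a simple cycle-time calculation possible: the difference between entering and leaving a state tells you how long each stage really takes. A minimal sketch (the dates and stage names are invented):

```python
from datetime import date

def cycle_time(entered, left):
    """Days a card spent between entering and leaving a process stage."""
    return (left - entered).days

# dates noted on a hypothetical candidate card
intake = date(2011, 3, 1)
offered = date(2011, 3, 4)
decided = date(2011, 3, 15)

print(cycle_time(intake, offered))   # 3: days to find and offer a candidate
print(cycle_time(offered, decided))  # 11: days waiting on the customer
```

Splitting the total time this way shows immediately which part of the delay is within the team’s control, and which part is waiting on the customer.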

Making the process explicit, by setting up the board and clearly agreeing on what the work in the different stages should consist of, is the first basic step when starting with Kanban. It gives the team a lot of clarity, and makes it easier (indeed, possible) to start discussing process improvements from a common understanding.

One thing that anyone familiar with Kanban will find puzzling is that we didn’t specify any WIP limits on the board. We did discuss doing that, of course, but we decided to wait. One approach to setting the WIP limits would have been to start with the number of people in the team, and use that as a starting point. It is very normal in this type of work for one person to be handling multiple cases, though, and no one was sure how many. Another approach is to simply look at a normal week, and see the maximum number of items there are in any given column during that week. Though we started with this approach, demand was too low to be able to come up with any useful figures, so for now we are still WIP-less.

After some initial apprehension about the introduction of the board, the reactions to working with the Kanban were very positive. The perception was that there was more clarity as to what was in progress, and what needed to be picked up. The daily stand-up in front of the board took some getting used to, but also helped with the division of work amongst the team members. The clear state and frequent communication were seen as a very good way to be able to fill in for colleagues when they were away.

After one month of this process, we held a retrospective to see whether everything was going well, and what we could improve. The retrospective turned up some interesting points.

On the positive side, better communication, both within the team and towards customers and candidates, was seen as a very positive point. The visual process had also much improved keeping all the administrative tasks up to date, which had previously often piled up into large backlogs.

On the improvements side, there was worry about tasks that do not fit into the standard process, as those are not on the board, and thus part of the work people do remains invisible. A further point was that after an interview at a customer, it would sometimes take a long time before the customer reacted, and thus a long time before we could provide any feedback to the candidate. This would create extra work, as candidates would start calling regularly to request status updates (a very good example of failure demand).

As a result of the retrospective, the team decided to annotate tokens in the ‘Interview’ column with dots, one added at each stand-up meeting. After a set number of dots (which equates to a set amount of time since the interview), contact would be made pro-actively with the customer, and after that with the candidate, to keep everyone up-to-date.
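The dot-counting agreement is really a tiny ageing rule; a sketch, with an invented threshold and card names:

```python
def cards_needing_follow_up(dot_counts, threshold=3):
    """One dot is added per stand-up; once a card reaches the agreed
    threshold, we pro-actively contact the customer, then the candidate."""
    return [card for card, dots in dot_counts.items() if dots >= threshold]

# dots currently on the cards in the 'Interview' column (made up)
on_board = {"candidate A": 1, "candidate B": 3, "candidate C": 5}
print(cards_needing_follow_up(on_board))  # ['candidate B', 'candidate C']
```

The value of the physical dots is that this rule runs itself at every stand-up, with no one having to remember individual interview dates.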

Another agreement from the retrospective was that any status change would always be immediately updated on the board. This would make taking over any tasks by another team member much easier. The team already thought this had improved much in the new process, and wanted to further improve in that regard.

Finally, it was agreed to put up a separate board for non-applicant work, so that that could also be tracked. This would be a generic ‘to-do, in progress, done’ style board for any other type of work. The reason there was suddenly a different type of work for the team was a need from another department in the company, due to illness there. The work was completely unrelated to the process discussed above, and the best we could come up with was using a separate board (or separate ‘lane’ on the current board, but that didn’t work out with the widescreen board we were using). This doesn’t help much in showing if specific items are delayed by the ‘extra’ work, but at least it does show what work is in progress at any time, indicating a ‘total’ delay.

The next step for this team will be finding out what useful WIP limits could be (Work-In-Progress limits: agreements on how many items can be in the same state on the Kanban board at once, which should expose any bottlenecks quickly). So far, demand has been low enough that there has been no real opportunity to find out where the bottlenecks are. It’s expected that as the market becomes more active, the first bottlenecks will reveal themselves.