5 ways to make sure your sprint velocity is a useless number

Velocity always seemed a nice and straightforward concept to me. You measure how much you get done in a certain period of time, and use that to project how much you’ll probably get done in the same amount of time in the future. Simple to measure, simple to use in projections, and it enables empirical planning. Measuring influences the work, though.

The concept of velocity is used almost everywhere, even within companies that are still new to an Agile way of working. But simple though it seems, there are many ways velocity can lose its usefulness. I happen to think velocity is one of the better metrics, but if you don’t measure it correctly, or misinterpret the resulting numbers, it can become a hurdle to good planning.

Let’s have a look at some of the ways velocity doesn’t work, and how to avoid them.

Not a Number

First of all, velocity is not just a number. It’s always a range, or an average with error margins. Why is this important? Because if you do your planning based on a single number, without taking into account the normal variation in productivity that is always there, you can be sure your planning is not giving you a realistic idea of what will be done when.

In other words: realise that your planning is an estimation of when you think a certain set of work can be done. An estimation should always include uncertainty. That uncertainty is, at least partially, made explicit by taking the variance of your velocity into account.

Velocity charted with a confidence level around it

The simplest way to get pessimistic and optimistic values for velocity is to take the average of the three lowest and of the three highest velocities from the last ten sprints. Another way is to use a proper confidence interval calculation. In practice I don’t think there’s much difference between the two. Charting velocity in this way can get you graphs such as the one shown above.
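
A minimal sketch of that low/high calculation (the sprint numbers are made up, and the class and method names are purely illustrative):

```java
import java.util.Arrays;

public class VelocityRange {

    // Pessimistic and optimistic velocity: the average of the three lowest and
    // of the three highest velocities out of the last ten sprints.
    static double[] velocityRange(int[] lastTenSprints) {
        int[] sorted = lastTenSprints.clone();
        Arrays.sort(sorted);
        double pessimistic = (sorted[0] + sorted[1] + sorted[2]) / 3.0;
        int n = sorted.length;
        double optimistic = (sorted[n - 1] + sorted[n - 2] + sorted[n - 3]) / 3.0;
        return new double[] { pessimistic, optimistic };
    }

    public static void main(String[] args) {
        int[] lastTen = { 18, 22, 15, 25, 20, 19, 23, 17, 21, 24 }; // made-up velocities
        double[] range = velocityRange(lastTen);
        System.out.printf("Plan with a velocity between %.1f and %.1f points per sprint%n",
                range[0], range[1]);
    }
}
```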

Then, of course, you have to actually use this in your release planning.

Release forecast using variation in velocity

Not an average

I know I just said that velocity shouldn’t be treated as a single number or plain average, but this is a different problem. The averaging of points finished per sprint also causes trouble when it doesn’t actually mean ‘points finished in that sprint’. Quite often, I’ve met teams that have a lot of trouble finishing stories within their sprints. The causes can be many, with overly large stories chief among them. Sometimes these teams have correctly realised that if they’ve only finished part of a story, they don’t get partial ‘credit’ for it in the sprint’s velocity. But then they do take credit for the full number of story points for the entire story in the subsequent sprint, once they’ve actually finished it.

Average?

So here we can see what happens. The average is around 20. Should this team plan 20 story points’ worth of work into their next sprint? Probably not a good idea, right? When the variation in velocity is this high, there is usually an underlying problem.

What one could do in this instance is re-estimate any unfinished stories, so that only the work actually done in the later sprint is counted in that sprint. For example, if a 13-point story is half done at the end of a sprint, the remaining work might be re-estimated at 5 points, and only those 5 points count in the sprint where the story is finished. Yes, you’ll ‘lose’ some points: work you estimated that never gets counted anywhere as work done. But you’ll immediately get a more realistic figure for your velocity, and an immediate reason to make those stories smaller, as they simply won’t fit in a sprint once the velocity is realistic.

For release planning, you’ll no longer be depending on odd fluctuations in velocity, but on a steadier figure with less variation.

Variable Sprint Length

If you keep changing the length of your sprints, velocity will not be very useful. But, I can hear you say, we can just calculate the expected velocity for a two-week sprint by taking two-thirds of the velocity of a three-week sprint! That would be nice, but unfortunately it doesn’t work like that. The regular rhythm of sprints creates certain expectations within the team, and the team learns how much it can take on in such a period. Also, the strict time-box of an agreed sprint length is very useful in bringing existing limitations into view.

Bring problems to the surface

The famous ‘lowering the waters brings the rocks to the surface’ picture of lean waste elimination is a useful way to view this.

Estimating In Time

If someone asks me how long I’m going to take to do a particular piece of work, I’ll normally answer with an amount of time. This is quite natural, and answers the question posed fairly directly. When someone asks me when I can have a particular piece of work ready, I could again answer by giving a specific date and time.

If someone asks me how much work I can do in a work week, though, I might be tempted to answer: “40 hours”. And I would probably be right! And if, at the end of that week, I looked back to see how much time I actually worked, it would probably not be too far off those 40 hours. But I wouldn’t learn much from that observation.

By using ‘Story Points’, an abstract measure for estimation, we can still estimate the effort for a certain piece of work. And if we then give other pieces of work an estimate in Story Points relative to the story we already estimated, we have created a new measurement system! So for instance, if ‘allowing a user to log in’ is 3 Story Points, then ‘sending a user a password reminder’ could be 5, if it’s about (but not quite) twice as big.

Of course, in the end you will want to relate those abstract Story Points back to time, since you will often want to determine when you can release a piece of software. But you don’t estimate that, you measure it: it turns out that in one sprint we can do about 12 Story Points, give or take a few. So if that’s the case, we will be able to release functionality X by date Y at the latest (see the release planning graph earlier).
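
A rough sketch of that ‘measure, then project’ step, with invented numbers and hypothetical names, just to show the arithmetic:

```java
public class ReleaseForecast {

    // How many sprints does the remaining backlog take at a given velocity?
    static int sprintsNeeded(int remainingPoints, double velocity) {
        return (int) Math.ceil(remainingPoints / velocity);
    }

    public static void main(String[] args) {
        int remainingPoints = 120;   // made-up backlog size for 'functionality X'
        double optimistic = 14.0;    // taken from the measured velocity range
        double pessimistic = 10.0;

        System.out.println("Earliest: " + sprintsNeeded(remainingPoints, optimistic) + " sprints");
        System.out.println("Latest:   " + sprintsNeeded(remainingPoints, pessimistic) + " sprints");
        // With two-week sprints, that range translates directly into the
        // earliest and latest release dates on the release planning graph.
    }
}
```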

Some people do the same type of trick by estimating in ‘ideal days’ and determining a ‘focus factor’: the percentage of the available time they actually manage to get done. Mathematically this works fine, but it’s very hard for people to let go of their feeling of ‘when it will be done’ and estimate in ‘real’ ideal days.
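
The focus-factor arithmetic itself is simple enough to sketch (all figures invented):

```java
public class FocusFactor {
    public static void main(String[] args) {
        // Last sprint: 3 people were available for 10 working days each...
        double availablePersonDays = 3 * 10;
        // ...but only 18 'ideal days' worth of estimated work actually got finished.
        double idealDaysFinished = 18;

        double focusFactor = idealDaysFinished / availablePersonDays; // 0.6

        // With the same capacity next sprint, plan roughly:
        double nextSprintPlan = focusFactor * availablePersonDays;    // about 18 ideal days
        System.out.printf("Focus factor %.2f -> plan about %.0f ideal days next sprint%n",
                focusFactor, nextSprintPlan);
    }
}
```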

Including bug-fix time in your velocity

I’ve noticed that this one can be a bit controversial, but it’s an important factor in the usefulness of your velocity figure.

As a team, you will encounter work that is not part of creating new functionality from your product owner’s wishlist. Often, this work presents itself in the form of fixing defects found in your software. Most of the time, those defects exist because of new functionality that was added in an earlier sprint.

Now it can happen that such a defect is discovered and needs to be fixed right away, because it truly interferes with a customer’s use of your system. Those types of defects are usually not estimated, and they certainly should not be taken into account when calculating your velocity for a given sprint.

Other defects are less critical, and will (or should) be planned, prioritised by your Product Owner, to be taken into a sprint. Those defects sometimes are estimated, but they still should not be taken into account when calculating your velocity!

Why not? Well, if you see the goal of your team as delivering new software for the Product Owner, then a defect is simply a sign that some work delivered earlier was not completely done, usually in the sense of not being sufficiently tested. Fixing such a defect is of course very important. But it is slowing you down on your primary goal of delivering new functionality! Adding the points for fixing the defect to your velocity would make it seem that you are not going any slower (maybe even faster!). It would give a false impression of the speed at which you’re getting the work the Product Owner wants done, and it might skew release planning because of that.

Also, it would mean that the improvements in quality, which you’ve been working so hard on, will not be visible in your velocity. Now, is that right?

Avoid not trying

While preparing an introductory workshop on Scrum, we wanted to end each presentation/retrospective section with some general tips on the area discussed: things for a team that is starting out with Scrum to try, and things better not to try.

I mean, Inspect and Adapt, yes, but it won’t hurt to avoid some common pitfalls.

Here are the things we came up with. Please let me know (below or on Twitter) which ones you don’t agree with, and what important ones we missed!

User Stories

Try: Making stories small enough to be DONE within three days
Smaller also means easier to estimate, and easier to test. One of the most common things I find is Really Big User Stories. That makes everything hard.
Avoid: Working on less important stories before finishing more important ones
(De-)Prioritise ruthlessly before taking things into a sprint. During the sprint, don’t work on lower priority issues before the higher priority ones are done.
Try: Splitting stories vertically
If every story has a user-facing component, (de-)prioritising parts of functionality becomes possible. The earlier the user/customer can see the functionality, the sooner you can get feedback.
Avoid: Splitting stories by component
Delays getting feedback. Encourages work not directly related to functionality.
Try: Making stories specific by defining acceptance criteria for each one
You’ll know better what to do, how to estimate, how to test. And when you’ll be done.
Avoid: Making stories too detailed too early
You’ll add detail to stories in the course of the project, but doing it too early can mean:

  • working on something that’s not going to be used (in a while),
  • doing work that will need re-doing (once the customer sees the initial work, he will change his mind),
  • skewing your estimates: too much detail can inflate estimates beyond any realistic values.

Planning

Try: Estimating your complete release backlog with the full team
The whole team will gain understanding of what is expected. You’ll get better estimates. You can use a release burndown!
Of course, there are things that can help with this such as, ahum, having a clear vision, but you need to start somewhere.
Avoid: Not updating your estimates as you learn more
Estimates are estimates based on current understanding. If understanding doesn’t evolve during work, something is wrong. So estimates should also evolve. As you refine and split user stories, re-estimate them to evolve your planning along with your requirements.
Try: Fixed sprint length (of two weeks)
Fixed, for predictability, letting the team find a rhythm, and ensuring problems (waste!) get raised. Two weeks, because one week is initially difficult for a team to do (but if you think you can, please try it!).
Avoid: Telling the team how much to take into sprint
You can’t expect a team to take responsibility for delivering if they don’t have control.
Try: Many (min. 6 – 10) small stories in a sprint
Failure to deliver the last story is much worse if it’s the only one. Or one of two. Smaller also means easier to estimate, and easier to test. It’s much easier to determine progress if you’re talking about ‘done’ stories, instead of percentages. (that was sarcasm, probably.)
Avoid: Stories that span multiple sprints
Just… don’t.
Try: Counting unplanned issues picked up in a sprint
If you get a lot of unplanned issues, you need to take that into account in your sprint planning. Count to get an idea of how much time you need to reserve for this!
Avoid: Picking up all unplanned issues raised during a sprint
The PO should de-prioritise anything that is not a crucial customer problem, and put it on the backlog to be planned in later sprints.
Try: Reserving a fixed amount of time (buffer) per sprint for unplanned issues
Measure how much time you’re spending on unplanned issues. Reserve that time for them (so your planned velocity goes down), and work on structural fixes so this time reservation can go down in the future (once you measure that you don’t need all of it). A small sizing sketch follows after this list.
Avoid: Extending the buffer for unplanned issues
Because the buffer is there for a reason: to make sure that the rest of your time can be spent on what you’ve taken into the sprint. One way to deal with the buffer (and avoid getting tangled in time percentage calculations) is to have a rotating role in the team that deals with issues as they come up. Call him Mr. Wolf, if you like, because it usually isn’t the most coveted role to play. That’s why you rotate…
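
To put a number on that buffer (the figures and the class name below are invented, purely to illustrate the idea):

```java
public class UnplannedBuffer {

    // Average the hours spent on unplanned issues over the last few sprints.
    static double bufferHours(double[] unplannedHoursPerSprint) {
        double sum = 0;
        for (double hours : unplannedHoursPerSprint) {
            sum += hours;
        }
        return sum / unplannedHoursPerSprint.length;
    }

    public static void main(String[] args) {
        double[] measured = { 14, 9, 18, 12 };   // made-up hours of unplanned work per sprint
        double buffer = bufferHours(measured);
        double sprintCapacity = 240;             // e.g. 3 people x 10 days x 8 hours
        System.out.printf("Reserve ~%.0f hours; plan stories against the remaining %.0f hours%n",
                buffer, sprintCapacity - buffer);
    }
}
```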

Scrum Master

Try: Highly visible display of sprint & release burndowns in the team area
Highly visible progress helps keep focus. The whole team can see (and can feel responsible for) progress. And mostly, this is a great way to discuss any upcoming new issues with whoever is raising them: “Yes, I can see that this is important to you. Let’s look at what we’re working on right now, and what we need to delay to get that in…”
Avoid: Only updating a computerised issue tracker when completing tasks or stories
A physical task board provides continuous visibility and feedback. Seeing people moving things on a physical task board during the day simply encourages getting things done. Putting a post-it on a wall just feels more real than putting a new issue into JIRA. There are so many ways in which the visible and physical are wired into our system that there really is no way to replace that with a computerised tool.
Try: Taking turns during stand-up by passing a token
Sometimes stand-ups can devolve into a rote, going-round-the-room status report. Break this by passing/throwing a token from one speaker to the next, in a self-chosen order. This keeps things lively, avoids anyone dominating the stand-up, and makes people pay attention (or drop the ball :-).
Avoid: Reporting to anyone but the Team during stand-up
At all times avoid the stand-up becoming a ‘reporting to a project manager’ thing!
Try: Having a retrospective at the end of every sprint
The whole idea of Scrum is to continuously improve. You can’t do that if you don’t discuss how things went.
Avoid: Not executing improvement experiments generated in the retrospectives
Don’t just agree you need to improve. Do Something Already! At the end of the retro, agree which points you’re picking up, and ensure they’re taken care of in the next sprint. Also, with each action, try to indicate what its expected result will be. Deciding whether your experiment was a success will be so much easier. Look into A3 problem solving when dealing with bigger issues. Or even with smaller ones.
Try: Highly visible display of top 3 impediments
And cross them off one by one as soon as they’re done…
Avoid: Stories that span multiple sprints
Yes. A bit obvious, perhaps, but this happens often enough that I thought it worth mentioning.
Try: Having an impediment backlog for the team and one for management
Yes, impediments that management should fix should be just as visible (maybe even more so!)
Avoid: Having a very long impediment backlog from which no items are ever picked up
Agree what to pick up, don’t pick up too much at once (start with one at a time!), and FINISH them.

Team

Try: Making tasks small (< ½ day)
Small tasks move across the board several times a day, which encourages getting things done. Smaller tasks are easier to understand, with less chance of differing interpretations, and much easier to hand over or to work on together.
Avoid: Not moving any tasks (on the planning board) during the day
Seeing people moving things around on a task board multiple times a day encourages getting things done. Lack of progress should be spotted as soon as possible, and help given.
Try: Agreeing on a definition of done
You should all agree on what ‘Done’ currently means. Once you can stick to that definition, you can start working on improving it.
Avoid: An aspirational definition of done
Did I emphasise ‘currently’ enough? You need to know where you are, and that should give you a starting point…
Try: Writing automated tests for any production issues
This helps in understanding and replicating the issue, and it ensures the issue will not come back. The tests also document understanding of the code and functionality that was missed earlier. (A small example follows after this list.)
Avoid: Programming errors found after the sprint has ended
A User Acceptance Test can find functionality the user didn’t expect (understanding). A UAT should never find expected functionality that does not work (quality).
Try: Always doing a root cause analysis for any unplanned work
Production problems are not normal! Find out why it happened, and see how you can change your process to avoid that type of problem in the future. Note: that means agreeing ‘Let’s not make this mistake in the future’ is not sufficient…
Avoid: Not doing a structural fix after root cause analysis
The change should be structural, in your process. For instance:
  • ‘It was a simple programming error’ should result in changing your Definition of Done to require higher code coverage for new code.
  • ‘There was a mistake during the deployment’ should result in ‘Let’s automate deployment’.
  • ‘We did two incompatible changes’ should result in ways to increase communication in the team, and better automated regression testing.
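
Coming back to the tip about writing automated tests for production issues: a minimal sketch of what such a regression test could look like, using JUnit and an entirely made-up OrderNumberParser (nothing here comes from a real codebase):

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

// The class under test would normally live in the production code base; it is
// inlined here only to keep the sketch self-contained.
class OrderNumberParser {
    // The fix that came out of the (made-up) production issue: customers send
    // references both as "42" and as "ORD-00042", and the old code choked on the prefix.
    static int parse(String reference) {
        return Integer.parseInt(reference.replaceAll("\\D", ""));
    }
}

public class OrderNumberParserRegressionTest {

    @Test
    public void acceptsPrefixedReferencesAsSeenInProduction() {
        // This test first reproduced the production report (it failed), and now
        // documents the expected behaviour and guards against the bug returning.
        assertEquals(42, OrderNumberParser.parse("ORD-00042"));
    }

    @Test
    public void stillAcceptsPlainNumericReferences() {
        assertEquals(7, OrderNumberParser.parse("7"));
    }
}
```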

Code Cleaning: How tests drive code improvements (part 1)

In my last post I discussed the refactoring of a particular piece of code. Incrementally changing the code had resulted in some clear improvements in its complexity, but the end result still left me with an unsatisfied feeling: I had not been test-driving my changes, and that was noticeable in the resulting code!

So, as promised, here we are to re-examine the code as we have it, and see what happens when we start testing it more thoroughly. In my feeble defence, I’d like to mention again why I delayed testing: I really didn’t have a good feel for the intended functionality, and because of that decided to test later, when I hoped I would have a better idea of what the code is supposed to do. That moment is now.

Again a fairly long post, so it’s hidden behind the ‘read more’ link, sorry!

Continue reading

Code Cleaning: A Refactoring Example In 50 Easy Steps

One of the things I find myself doing at work is looking at other people’s code. This is not unusual, of course, as every programmer does that all the time. Even if the ‘other people’ is himself, last week. As all you programmers know, rather often other people’s code is not very pretty. Partly, this can be explained because every programmer knows no one is quite as good at programming as himself… But very often, way too often, the code really is not all that good.

This can be caused by many things. Sometimes the programmers are not very experienced. Sometimes the pressure to release new features is such that programmers feel pressured into cutting quality. Sometimes the programmers found the code in that state, and simply didn’t know where to start to improve things. Some programmers may not even have read Clean Code, Refactoring, or The Pragmatic Programmer! And maybe no one ever told them they should.

Recently I was asked to look at a Java codebase, to see if it would be possible for our company to take it into a support contract, or what would be needed to get it to that state. This codebase had a number of problems: a lack of tests, lots of code duplication, and a very uneven distribution of complexity (lots of ‘struct’ classes, with the logic that should be in them spread out, and duplicated, over the rest). There was plenty wrong, and Sonar quickly showed most of it.

Sonar status

When discussing the issues with this particular code base, I noticed that the developers already knew quite a few of the things that were wrong. They did not have a clear idea of how to go from there towards a good state, though. To illustrate how one might approach this, I spent a day making an example out of one of the high-complexity classes (cyclomatic complexity of 98).

Larger examples of refactoring are fairly rare out there, so I figured I’d share this. Of course, package and class names (and some constants/variables) have been altered to protect the innocent.

I’d like to emphasise that none of this is very special. I’m not a wizard at doing this, by any standard. I don’t even code full time nowadays. That’s irrelevant: the point here is precisely that by taking a series of very simple and straightforward steps, you can improve your code tremendously. Anyone can do this! Everyone should…

I don’t usually shield off part of my posts under a ‘read more’ link, but this post had become HUGE, and I don’t want to harm any unsuspecting RSS readers out there. Please, do read the whole thing. And: let me (and my readers and colleagues) know how this can be done better!

Continue reading

Scrum for a management team

Scrum (or something that looks like it) can be used for things besides software development projects. Just look at the interest in ‘Scrum Beyond Software’ last fall. But there are some things that need to be taken into account when doing so. So what are those things, when applying Scrum to a distributed management team? This is a report on our experiences doing just that.

The company

Our company, Qualogy, is based in The Netherlands. It is a consultancy company specialised in Java and Oracle technologies that has been using Scrum and Agile both internally and for customers for a while now. It has a daughter company based in Suriname, a former Dutch colony where Dutch is a national language, and where software development projects are still a relatively rare event. This daughter company is primarily meant to serve the local Suriname market, using local talent, at local prices. Next to that we also do small-scale outsourcing, where the lack of a language barrier (and mostly of a culture barrier) with The Netherlands is a major advantage.

We’ve been using Scrum in Suriname with the development teams there, with good success. A combination of local and remote coaching has worked well to get the teams familiar with the process, both working on location at Suriname customers and internally for Dutch and Suriname customers. This was successful enough that when we found the local management team was not working in close step with the central management team, we quickly had the idea of trying Scrum with the management team as well!

Situation in the management team

During a visit to the Suriname office, a number of issues had been raised, both by the local management team and by the visiting central management team. Some of the problems encountered were:

  • lack of transparency, both within the local team and towards the central office
  • lack of progress in certain areas
  • problem resolution was difficult and often required interventions

Apart from these specific issues at the outset, there are also differences between your average development team adopting Scrum and a (distributed) management team. When researching this, I came across some relevant discussions. Some points are new versions of familiar issues when starting any Scrum implementation; some were entirely new to me.

  • The management team has very diverse skills and areas of responsibility
  • The operational management has to deal with a lot of interruptions
  • Managers may be even more averse than developers to having their progress ‘checked’ (made visible)
  • Explicit priorities are much more likely to be interpreted as ‘micro-managing’

Creating a backlog

To get started with our Management Scrum, we first formulated a backlog. The backlog was, at least at a higher level, quite clear. This was actually an interesting learning point for me: the translation from high-level business goals to specific actions is much more direct at the management level, and the number of stakeholders is much more limited (in this case: one). The Product Owner for the team was the manager responsible for the Suriname division.

The backlog immediately revealed one of the issues mentioned above: in a large number of cases the backlog item/story was in effect already assigned to a particular team member (sales manager, operations manager, …). Team members had specific skills (or networks) that enabled them to pick up a story. We took this as a given, understanding that it would be an issue to overcome as far as teamwork is concerned, and allowed the pre-assignment of stories.

Then, since the product owner would only occasionally be present in Suriname, the product backlog had to be available online. For this, we used Pivotal Tracker. I always try to avoid using electronic planning boards, but for this situation it was appropriate. Pivotal Tracker is a very nice tool, with purposefully limited customisation options.

With the initial backlog ready, we moved on to estimation. I had explained the concept of Story Points before, but the team wasn’t quite comfortable using those. Additionally, the problem of ‘unsharable stories’, due to the different areas of work mentioned above, meant that it would probably be hard to come to a good velocity figure in Story Points. This resulted in us adopting ‘ideal days’ and a focus factor as a way of managing reality-based planning. The team estimated the backlog items, and split some up into more manageable chunks in their first planning meeting.

And then the first Sprint could start. Almost. There was still some discussion on the length of the Sprint. The PO initially wanted sprints of four weeks, or one month. After sizing the backlog, it became clear that there was an advantage to keeping the items on the backlog small, and together with feedback from some of the development teams in the company, he was convinced that a shorter Sprint length would be beneficial. So we arrived at two weeks.

First Sprint

The work started well, with items getting picked up by all team members. We started with a weekly ‘stand-up’ meeting (held through Skype), because a daily schedule was considered too intensive, and also feared to be too invasive: there was some fear of micro-management within the team.

The first couple of meetings surfaced a few issues. Though some stories had been finished, quite a few were stuck in ‘in progress’. A sure sign something was up! We discussed those stories, and the reasons for the delays. One reason was simply that other things had to be done first. The team was assured that they could always add items to the backlog themselves, as long as the PO was notified so he could prioritise. And it was normal that the day-to-day business would take time; we’d have to take that into account in determining our velocity.

Another reason for slow progress was that some stories were not defined clearly enough. For the PO, the large size the team attributed to these stories had been a surprise during the backlog sizing. He hadn’t wanted to push for a lower estimate, though, since that should be the team’s choice. This was solved by the PO pairing up with the team member to formulate an outline (of the document this story was about) together, which automatically led to a good way to split up the user story, after which the team could continue autonomously.

As you can see from the above, we encountered a number of (fairly familiar) issues while working on the first sprint. So by the time we held our first retrospective meeting, we had already made some improvements.

The major point that came out of the retro was a wish for more mutual transparency: the members of the team wanted more information on what the others were doing, and on how far items had progressed. To accommodate this, we resolved to start with a daily stand-up, to make sure items were actually moved on the Pivotal Tracker board as soon as they were finished, and to split up work into smaller increments.

With the second sprint in progress, and some good results already, we are quite happy with the way Scrum is turning out for the team. More progress is being made on strategic projects, and there is more visibility (and appreciation) for local issues. The team was particularly struck by the fact that, with the new levels of transparency and communication, and even given their different areas of expertise, it has become easy and normal for them to pick up parts of each other’s work regularly.

Scrum vs. Kanban: A Game of Life?

I’ve been following some of the discussions on the differences between Scrum and Kanban. And learning more about Kanban, of course. One point that is emphasised a lot is that Kanban requires fewer up-front changes than Scrum does. The term “Big Change Up-Front” has even been coined, by Alan Shalloway.

There’s certainly truth in that. Scrum doesn’t have many rules, but it is very strict in assigning a very limited set of roles and responsibilities. Kanban can be used with existing roles, as long as you make the existing roles and policies explicit. Asking which of those options is better is really beside the point: it simply depends on the context. In my situation, I usually get called in by companies who have already decided to ‘go Agile’, and as such are already part way through some of those changes. Of course, the changes are not always successful, but it doesn’t give me the chance to start slowly.

Interesting discussion, of course, and for me it brought to mind Conway’s Game of Life. For those unfamiliar with it, this is a cellular automaton: a set of rules is applied iteratively to a grid of cells (each either on or off), and all kinds of interesting stable and continuously changing patterns can occur, depending on the initial pattern put on the board.
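
To make the analogy concrete for readers who don’t know the game: one iteration applies a single rule to every cell at once. A bare-bones sketch (on a finite grid instead of the game’s infinite board):

```java
public class LifeStep {

    // One iteration of Conway's rules: a live cell survives with 2 or 3 live
    // neighbours, a dead cell becomes alive with exactly 3.
    static boolean[][] step(boolean[][] grid) {
        int rows = grid.length, cols = grid[0].length;
        boolean[][] next = new boolean[rows][cols];
        for (int r = 0; r < rows; r++) {
            for (int c = 0; c < cols; c++) {
                int neighbours = 0;
                for (int dr = -1; dr <= 1; dr++) {
                    for (int dc = -1; dc <= 1; dc++) {
                        if (dr == 0 && dc == 0) continue;
                        int nr = r + dr, nc = c + dc;
                        if (nr >= 0 && nr < rows && nc >= 0 && nc < cols && grid[nr][nc]) {
                            neighbours++;
                        }
                    }
                }
                next[r][c] = grid[r][c] ? (neighbours == 2 || neighbours == 3) : neighbours == 3;
            }
        }
        return next;
    }

    public static void main(String[] args) {
        // A 'blinker': three live cells in a row oscillate between horizontal and vertical.
        boolean[][] blinker = new boolean[5][5];
        blinker[2][1] = blinker[2][2] = blinker[2][3] = true;
        System.out.println("Alive above the centre after one step: " + step(blinker)[1][2]); // true
    }
}
```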

Scrum could be compared with a fairly big and complex ‘breeder’ pattern, which needs to be placed on the board as a complete set. It’s quite an apt comparison, in that the infinite growth belonging to such a pattern doesn’t happen if you get part of the breeder pattern wrong. And since the Game of Life can be seen as a universal Turing machine, infinite growth means a continuous generation of information, which can be seen as continuous learning.

Kanban can start with an existing stable pattern (an oscillator), and tweak that to move, step by step, towards breeder status. At least, that’s how I see it.

The analogy will probably break down after this initial thought, and might be made better if we move from comparing patterns to comparing rule-sets, but my brain started to hurt when I went down that route…

Scrum vs. Kanban? Not really…

Peter Stevens, over at Scrum Breakfast, has an interview up with Mary Poppendieck on Lean, Scrum, Kanban and Leadership. The part of the interview that caught my attention was a question on the relationship between Scrum, Kanban, and Lean in general.

I like Mary’s response a lot, where she basically states that Scrum and Kanban each have their own strengths, and each is suited for their own specific set of circumstances.

Scrum is basically a method of accomplishing work through cadenced iterations. Kanban is a method of accomplishing work through limiting work-in-process and managing flow. I have found that some work especially creative work is more effectively managed with iterations, while other work especially naturally sequential work is more naturally managed with Kanban. — Mary Poppendieck

She also stresses that, whether you choose to use Scrum or Kanban, the point is that you keep improving on your way of working, so:

Lean would have every company view these techniques as starting points that are constantly improved, so after a few years, Scrum and Kanban should evolve and change to something quite different than their starting point. — Mary Poppendieck

This suits the way that I view these things very well. Use the tools most suited for the situation, and see where it leads you.

Of course, the best way to choose is to try each, and measure results. Which brings us to another question: what and how do we measure? At the moment, I’m leaning towards flow (time for work to flow through the system), and Henrik Kniberg’s Happiness index. Getting that last one adopted anywhere is going to be an interesting challenge, though…

Learning is key

An old article I just came across posits that learning is the thing of value in software development:

When we present this hypothetical situation to students – many of them with 20+ years experience in building software – they typically respond with anywhere between 20% to 70% of the original time. That is, rebuilding a system that originally takes one year to build takes only 2.5 to 8.5 months to build. That’s a huge difference! It’s hard to identify another single factor that could affect software development that much!

The article goes on to discuss in detail how learning is enabled by agile processes. The advantages of quick feedback cycles are not just ‘fail early’, but also ‘learn early’ (and continuously).

Agile Feedback Loops

BDD intro: TDD++

While looking for ways to make using Selenium nicer, wandering a bit through WebDriver descriptions and FitNesse explanations, I ran into this nice Introduction to Behaviour Driven Development.

Dan North explains how he arrived at the ideas of BDD, starting from a desire to explain how to best do Test Driven Development in his teams, and arriving at this structured way of creating the tests that drive implementation. Very illuminating, and worth reading if you take your testing seriously.

It also made me browse on a bit, since getting decent tests running often involves all kinds of fixture nastiness to get a usable data-set in place. I found the Test Data Builder pattern promising, but I’ll have to use it to know for sure. When I do, I’ll probably use Make It Easy to, well, make it easier.
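
For anyone who hasn’t seen the pattern, the core idea fits in a few lines. A sketch with a purely hypothetical Order class (not taken from any of the articles mentioned):

```java
// A hypothetical Order with an accompanying builder: the builder supplies
// sensible defaults, and each test overrides only the detail it cares about.
class Order {
    final String customer;
    final String product;
    final int quantity;

    Order(String customer, String product, int quantity) {
        this.customer = customer;
        this.product = product;
        this.quantity = quantity;
    }
}

class OrderBuilder {
    private String customer = "some customer";
    private String product = "some product";
    private int quantity = 1;

    static OrderBuilder anOrder() {
        return new OrderBuilder();
    }

    OrderBuilder withCustomer(String customer) {
        this.customer = customer;
        return this;
    }

    OrderBuilder withQuantity(int quantity) {
        this.quantity = quantity;
        return this;
    }

    Order build() {
        return new Order(customer, product, quantity);
    }
}

public class TestDataBuilderExample {
    public static void main(String[] args) {
        // In a test, only the relevant detail shows up; everything else is a sane default.
        Order order = OrderBuilder.anOrder().withQuantity(3).build();
        System.out.println(order.customer + " ordered " + order.quantity + " x " + order.product);
    }
}
```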


Patterns for Splitting User Stories — Richard Lawrence

Reading the Scrum Development mailing list is always good for some inspiration. Today there was some discussion on how to split user stories. Next to some good examples in the mail thread, Charles Bradley also provided a link to Patterns for Splitting User Stories by Richard Lawrence.

That blog post provides some very good guidelines on different ways to split up user stories. I was also happy to see that he finds that going above 8–13 points is usually a good indicator that a story needs to be split. The different ways to split stories may in some cases seem obvious, but not all of them are, and it’s very good to read such a complete list.