Top Gear: A New Refactoring Kata

For the last five or six years, I’ve been using coding exercises during job interviews. After talking a little with a candidate I open my laptop, call up an editor, and we sit together to do some coding.

My favourite exercise for this is a refactoring kata that I came up with. I’ve always found how people deal with bad code they encounter more interesting than whatever small amount of code they can write in such a short period.

The form of the kata is very much inspired by the ‘Gilded Rose’ kata, but it’s intentionally smaller so that it’s possible to get to a point where tests can be written and the code refactored in a period of about an hour, hour and a half.

The code is supposed to be that of an automatic transmission. Someone has built it, but it was probably (hopefully!) never released. You are asked to make a few improvements so that the gear box can be made more energy efficient in the future. This is the description:

The code that we need to work in looks like this:
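To give an idea of the style of code in the kata, here’s a rough JavaScript sketch; the names and thresholds here are invented for illustration, and this is not the actual kata code (that lives in the repository linked below):

```javascript
// Illustrative sketch only: a naive automatic gearbox in the spirit of the
// kata, with hard-coded thresholds and nested conditionals begging for
// refactoring. Names and numbers are made up.
function Gearbox() {
  this.gear = 1;
}

Gearbox.prototype.shift = function (rpm) {
  if (rpm > 2500) {
    if (this.gear < 6) {
      // revving high: shift up, unless already in top gear
      this.gear = this.gear + 1;
    }
  } else {
    if (rpm < 500) {
      if (this.gear > 1) {
        // about to stall: shift down, unless already in first
        this.gear = this.gear - 1;
      }
    }
  }
  return this.gear;
};
```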

I’ve made Java, PHP and Ruby versions available in my GitHub repository: https://github.com/wouterla/TopGearKata

If you add a language, let me know!

Don’t Refactor. Rebuild. Kinda.

I recently had the chance to speak at the wonderful Lean Agile Scotland conference. The conference had a very wide range of subjects being discussed on an amazingly high level: complexity theory, lean thinking, agile methods, and even technical practices!

I followed a great presentation by Steve Smith on how the popularity of feature branching strategies makes Continuous Integration difficult to impossible. I couldn’t have asked for a better lead-in for my own talk.

Which is about giving up and starting over. Kinda.

Learning environments

Why? Because, when you really get down to it, refactoring an old piece of junk, sorry, legacy code, is bloody difficult!

Sure, if you give me a few experienced XP guys, or ‘software craftsmen’, and let us at it, we’ll get it done. But I don’t usually have that luxury. And most organisations don’t.

When you have a team that is new to the agile development practices, like TDD, refactoring, clean code, etc. then learning that stuff in the context of a big ball of mud is really hard.

You see, when people start to learn about something like TDD, they do some exercises, read a book, maybe even get a training. They’ll see this kind of code:

Example code from Kent Beck’s book: “Test Driven Development: By Example”

Then they get back to work, and are on their own again, and they’re confronted with something like this:

Code Sample from my post “Code Cleaning: A refactoring example in 50 easy steps”

And then, when they say that TDD doesn’t work, or that agile won’t work in their ‘real world’ situation, we tell them they didn’t try hard enough. But in these circumstances it is very hard to succeed.

So how can we deal with situations like this? As I mentioned above, an influx of experienced developers that know how to get a legacy system under control is wonderful, but not very likely. Developers that haven’t done that sort of thing before really will need time to gain the necessary skills, and that needs to be done in a more controlled, or controllable, environment. Like a new codebase, started from scratch.

Easy now, I understand your reluctance! Throwing away everything you’ve built and starting over is pretty much the reverse of the advice we normally give.

Let me explain using an example.

Continue reading

Agile 2015 Talk: Don’t Refactor. Rebuild. Kinda.

Monday, August 3, I had the opportunity to give a talk at the Agile Alliance’s Agile 2015 conference in Washington, D.C. My first conference in the US, and it was absolutely fantastic to be able to meet so many people I’d only interacted with on mailing lists and twitter. It was also a huge conference, with about 17 concurrent tracks and 2200 participants. I’m sure I’ve missed meeting as many people as I did manage to find in those masses.

Even with that many tracks, though, there were still plenty of people who showed up for my talk on Monday afternoon. So many that we actually had to turn people away. This is great, I’ve never been a fire hazard before! I was a bit worried beforehand: with my talk dealing with issues of refactoring, rebuilding and legacy code, it was a little unnerving to be scheduled opposite Michael Feathers…

My talk is about how we have so much difficulty teaching well known and proven techniques from XP, such as TDD and ATDD, and some of the evolved ones like Continuous Delivery. And that the reason for that could be that these things are so much more difficult to do when working with legacy systems. Especially if you’re still learning the techniques! At the very least, it’s much more scary.

I then discuss, grounded in the example of a project at the VNU/Pergroep Online Services, how using an architectural approach such as the Strangler Pattern, combined with process rules from XP and Continuous Delivery, can allow even a team new to them to surprisingly quickly adopt these techniques and grow in proficiency along with their new code.

Rebuilding. Kinda.

Slides

The slides of my talk are available on slideshare, included below.


I’ll devote a separate post in a few weeks to the full story I discuss here. In the meantime…

If you missed the talk, perhaps because you happened to be on a different continent, I’ll be reprising it next Wednesday at the ASAS Nights event, in Arnhem. I’d love to see you there!

I’m speaking at ASAS Nights, 2 September

Outside in, whatever’s at the core

I haven’t written anything on here for quite a while. I haven’t been sitting still, though. I’ve gone independent (yes, I’m for hire!) and been working with a few clients, generally having a lot of fun.

I was also lucky enough to be able to function as Chet’s assistant (he doesn’t need one, which was part of the luck :-) while he was giving the CSD course at Qualogy recently. Always a joy to observe, and a source of some valuable reminders of the basics of TDD!

One of those basics is the switch between design and implementation that you regularly make when test-driving your code. When you write the first test for some functionality, you are writing a test against a non-existent piece of code. You might create an instance of an as-yet non-existent class (Arranging the context of the test), call a non-existent method on that class (Acting on that context), and then call another non-existent method to verify results (Asserting). Then, to get the test to compile (but still fail), you create those missing elements. All that time, you’re not worrying about implementation; you’re only worrying about design.

Later, when you’re adding a second test, you’ll be using those same elements, but changing the implementation of the class you’ve created. Only when a test needs some new concepts will the design again evolve, but those tests will trigger an empty or trivial implementation for any new elements.

So separation of design and implementation is a good thing. And not just when writing micro-tests to drive low-level design for new, fresh classes. What if you’re dealing with a large, legacy, untested code base? You can use a similar approach to discover your (future…) design.

Continue reading

DevOps and Continuous Delivery

If you want to go fast and have high quality, communication has to be instant, and you need to automate everything. Structure the organisation to make this possible, learn to use the tools to do the automation.

There’s a lot going on about DevOps and Continuous Delivery. Great buzzwords, and actually great concepts. They’re not altogether new, though. For many organisations they’re an introduction to agile concepts, and that sometimes means that the background people pick up when arriving at these ideas the natural way, through Agile process improvement, is missing. So what are we talking about?

DevOps: The combination of software developers and infrastructure engineers in the same team with shared responsibility for the delivered software

Continuous Delivery: The practice of being able to deliver software to (production) environments in a completely automated way. With VM technology this includes the roll-out of the environments.

Both of these are simply logical extensions of Agile and Lean software development practices. DevOps is one particular instance of the Agile multi-functional team. Continuous Delivery is the result of Agile’s practice of automating any repeating process, and in particular enabled by automated tests and continuous integration. And both of those underlying practices are the result of optimizing your process to take any delays out of it, a common Lean practice.

In Practice

DevOps is an organisational construct. The responsibility for deployment is integrated in the multi-functional agile team in the same way that requirement analysis, testing and coding were already part of that. This means an extension to the necessary skills in the teams. System Administrator skills, but also a fairly new set of skills for controlling the infrastructure as if it were code with versioning, testing, and continuous integration.

Continuous Delivery is a term for the whole of the process that a DevOps team performs. A Continuous Delivery (CD) process consists of developing software, automating testing, automating deployment, automating infrastructure deployment, and linking those elements so that a pipeline is created that automatically moves developed software through the normal DTAP (development, test, acceptance, production) stages.

So both of these concepts have practices and tools attached, which we’ll briefly discuss.

Practices and Tools

DevOps

Let’s start with DevOps. There are many standard practices aimed at integrating skills and improving communication in a team. Agile development teams have been doing this for a while now, using:

  • Co-located team
  • Whole team (all necessary skills are available in the team)
  • Pairing
  • Working in short iterations
  • Shared (code, but also product) ownership
  • (Acceptance) Test Driven Development

DevOps teams need to do the same, including the operations skill set into the team.

One question that often comes up is: “Does the entire team suddenly need to have this skill?”. The answer to that is, of course, “No”. But in the same way that Agile teams have made testing a whole-team effort, so operations becomes a whole-team effort. The people in the team with deep skills in this area will work together with some of the other team members in the execution of tasks. Those others will learn something about this work, and become able to handle at least the simpler items independently. The ops person, in turn, can learn from the developers how to structure his scripts better, enabling re-use, or from the testers how to test and monitor the product better.

An important thing to notice is that these practices we use to work well together as a team are mutually reinforcing: each strengthens the effectiveness of the others. That means that it’s much harder to learn to be effective as a team if you only adopt one or two of them.

Continuous Delivery

Continuous Delivery is all about decreasing the feedback cycle of software development. And feedback comes from different places. Mostly testing and user feedback. Testing happens at different levels (unit, service, integration, acceptance, …) and on different environments (dev, test, acceptance, production). The main focus for CD is to get the feedback for each of those to come as fast as possible.

To do that, we need to have our tests run at every code-change, on every environment, as reliably and quickly as possible. And to do that, we need to be able to completely control deployment of and to those environments, automatically, and for the full software stack.

And to do that, there are a number of tools available. Some have been around for a long time, while others are relatively new. Especially the tools that can control full (virtualised) environments are still relatively fresh. Some of the testing tooling is not exactly new, but still seems fairly unknown in the industry.

What do we use that for?

You’re already familiar with Continuous Integration, so you know about checking in code to version control, about unit tests, about branching strategies (basically: try not to), about CI servers.

If you have a well constructed CI solution, it will include building the code, running unit tests, creating a deployment package, and deploying to a test environment. The deployment package will be usable on different environments, with configuration provided separately. You might use tools such as the Cargo plugin for deployment to test (and further?), and keep a versioned history of all your deployment artefacts in a repository.

So what is added to that when we talk about Continuous Delivery? First of all, there’s the process of automated promotion of code to subsequent environments: the deployment pipeline.


This involves deciding which tests to run at what stage (based on dependency on environment, and runtime) to optimize a short feedback loop with as detailed a detection of errors as possible. It also requires decisions on which part of the pipeline to run fully automatic, and where to still assume human intervention is necessary.

Another thing that we are newly interested in for the DevOps/CD situation is infrastructure as code. This has been enabled by the emergence of virtualisation, and has become manageable with tools such as Puppet and Chef. These tools turn the definition of an environment into code, including hardware specs, OS, installed software, networking, and deployment of our own artefacts. That means that a test environment can be a completely controlled system, whether it runs on a developer’s laptop or in a hosted server environment. And that kind of control removes many common error situations from the software delivery equation.

Unit Testing JavaScript with QUnit and Mockjax

I’ve been experimenting a bit with JavaScript. My lack of real knowledge of the language, apart from some simple DOM-manipulations, is starting to become embarrassing!

So a couple of months ago I decided I should pick up the JS axe, and do some chopping. And the first step into any new programming language is the selection of (or writing of…) a unit testing framework!

My first choice was QUnit. I’d decided that I’d stick close to the jQuery way of doing things to begin with, so this seemed a logical choice. I’m very much used to automated build systems, so my first step, after getting the most basic unit test running in a browser, was automating the testing. This was not as easy as I had hoped! Having to start a full-blown web browser during a build is frowned upon. It requires, apart from plenty of time, that the build server has a graphical UI running and available, and is rather prone to errors. Setting up something like Rhino is easy enough, but will fail as soon as we need to do things like DOM manipulation. Luckily, there turned out to be a reasonable compromise: using PhantomJS.

PhantomJS is a full WebKit browser, but one that is completely headless! This means you can have it load web-pages, which are fully functional, without needing to have a UI visible. It’s completely scriptable from JavaScript, and runs very quickly. Great! The PhantomJS pages even included some examples on how to use it with qunit.

Of course, as any JavaScript I write is usually part of a Java project, and I had actually written a little (atmosphere based) server for my test project, I wanted to run this from a maven build. I found the phantomjs-qunit-runner maven plugin, and thought I was all set to go!

But it wasn’t that easy… The maven plugin worked fine, but had trouble with JavaScript libraries loaded during testing. Since my tests involved mocking out the service I was using (they were supposed to be unit tests, after all!), I could not manage to run them using the phantomjs-qunit-runner plugin.

It took me a few attempts to understand how the maven plugin dealt with making separate JavaScript files available to PhantomJS, but I finally managed to make it work.

If you are going to try anything from this post, make sure that you have a version of the phantomjs-qunit-runner that has my changes! As of writing, that means checking out trunk, and building it yourself.

From this point on, everything is easy!

We  start with a maven pom.xml that sets up the phantomjs runner:

    <build>
        <finalName>GoalServer</finalName>
        <plugins>
            <plugin>
                <groupId>net.kennychua</groupId>
                <artifactId>phantomjs-qunit-runner</artifactId>
                <version>1.0.12-SNAPSHOT</version>
                <configuration>
                    <jsSourceDirectory>src/main/javascript/</jsSourceDirectory>
                    <jsTestDirectory>src/test/javascript/</jsTestDirectory>
                    <ignoreFailures>false</ignoreFailures>
                    <phantomJsExec>/home/wouter/opt/phantomjs/bin/phantomjs</phantomJsExec>
                    <libraries>
                        <directory>src/test/javascript/lib/</directory>
                        <includes>
                            <include>**/*.js</include>
                        </includes>
                    </libraries>
                </configuration>
                <executions>
                    <execution>
                        <phase>test</phase>
                        <goals><goal>test</goal></goals>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>

You can see that I’ve stuck to the standard maven directory structure, keeping my JavaScript in src/main/javascript and its tests in src/test/javascript. You do need to specify where the phantomjs executable is installed. This is slightly unfortunate, and in a real project it should be delegated to a configuration setting (in the maven settings.xml, probably). For an example, having it hard-coded is clearer.
The part of this that I added is the libraries tag, where you use the default maven fileset syntax to define all the libraries you want to have available when executing the tests. In my codebase, I put all the javascript libraries in src/test/javascript/lib, but an argument could be made to put these somewhere outside of your src dirs. The plugin doesn’t care, as the fileset is translated to fully qualified paths before handing things over to PhantomJS.

I must admit that my goals weren’t set very high for my first test. After all, this was to be my first javascript test! So it turned out like this:

test("Test Test", function() {
    console.log("Testing test");
    equal(1, 0, "equal!");
});

Very exciting! And indeed, it failed:

[ERROR] Failed to execute goal net.kennychua:phantomjs-qunit-runner:1.0.12-SNAPSHOT:test (default) on project GoalServer: One or more QUnit tests failed -> [Help 1]

Now if you look carefully, you might be able to fix that test yourself. I’ll leave it at that, because I was quick to move on to my next step, which involved calling a function in JavaScript code that was in another file, and not located in the test code directory.

test("Test true", function() {
    equal(1, GameScheduleClient.testing.isTrue(), "It's true");
});

This is calling the following mindbogglingly complex function in src/main/javascript/GameScheduleClient.js:

var GameScheduleClient = GameScheduleClient || {};

GameScheduleClient.testing = function() {
    return {
        isTrue : function() {
            return 1;
        }
    };
} ();

If this doesn’t work, take a look at the advice above, and ensure you have a version of the qunit-runner that includes the patches I did. Otherwise you’ll have to do what I did, and run around in circles for a day or so.

Next step is to be able to call a service, which we’ll mock with MockJax. I’m not going to explain all the details of how mockjax works, for that I suggest you read something from someone who actually understands this stuff. But as long as you’ve put the library in the right place, and use the right version of the maven plugin, the following code should work:

module("Mock Ajax", {
    setup: function () {
        $.mockjax({
            url:"/mockedservice",
            contentType:"text/json",
            responseText:[ { bla:"Test" }]
        });
    },
    teardown: function () {
        $.mockjaxClear();
    }
});
asyncTest("Test I get a mocked response from a service", function () {
    $.getJSON("/mockedservice", function (response) {
        ok(response, "There's no response!");
        equal(response[0].bla, "Test", "response should contain Test");
        start();
    });
});

Note that there is no supporting JavaScript method that we’re actually testing here. The $.mockjax call sets up mockjax to respond to a (jQuery) ajax call to the /mockedservice url with a response containing the string Test. The $.getJSON call is a regular jQuery ajax call, and this test simply verifies that response.

The test module has separate setup and teardown functions, which are called for each test, as you’d expect in an xUnit-type framework. The test must be an explicit asyncTest, and is started explicitly within the callback by calling start().

And that, as they say, is all there is to it! All in all, QUnit provides a simple interface for basic unit testing. I’m now looking into Jasmine for a more elaborate set-up, and a little better integration with the maven build environment.

Technical Excellence: Why you should TDD!

Last Thursday, January 19, I gave a short talk at ArrowsGroup’s Agile Evangelists event in Amsterdam. Michiel de Vries was the other speaker, talking about the role of Trust in Agile adoptions. On my recommendation the organisers had changed the format of the evening to include two Open Space Technology sessions, right after each 20-minute talk, with the subject of the talk as its theme. This worked very well, and we had some lively discussions going on after both talks, with the 50 or so attendees talking in four or five groups. I’m happy it worked out, as it was the first time I facilitated an Open Space.

My own talk dealt with Technical Excellence, and in particular with Test Driven Development, and why it’s a good idea to use it. My talk was heavily inspired by James Grenning’s closing keynote at the Scrum Gathering in London last year. He even allowed me to use some of his slides, which were very important in getting the argument across. I’ll go through the main points of my talk below.

For me the big challenge was to manage to fit the most important parts of what I wanted to say within the 20 minutes I had available to me. As in software development, it turns out presentation writing is about making the result short, clear, and without duplication. The first attempt at this presentation took about 50 minutes to tell, and subsequent versions got shorter and shorter…

The complete set of slides is here. And this is how the story went:

I quickly introduced the subject by talking about Agile’s rise in popularity ever since the term was introduced ten years ago (and before, really, for the separate methods that already existed). About 35% of companies reported using some form of Agile early last year, in a Forrester report. Most of those companies are seeing positive effects from this adoption. No matter what you think of the quality of investigation of the Standish Report, by their internally consistent measure of project success the improvements over the last ten years have been distinct.

That’s not the whole story, though. Many teams have been finding that their initial success hasn’t lasted. Completely apart from any other difficulties in getting Agile accepted in an organisation, and there are plenty of others, this particular one is clearly recognisable.

There is a slowing of development. Usually this is due to a lot of defects coming out in production, but also to an increasing fragility of the codebase, which means that any change you make can (and will…) have unforeseen side-effects. One very frequent indicator that this is happening in your team is an increase in requests for ‘refactoring’ stories from the team. By this they usually mean ‘rework’ stories, where major changes (‘clean-up’) are needed for them to work more comfortably, and productively, with the code. In this situation the term ‘Technical Debt’ also becomes a more familiar part of the vocabulary.

And that is why it is time to talk about technical excellence. And why the people who came up with this whole Agile thing were saying last year that encouraging technical excellence is the top priority! OK, they said ‘demand’, but I have less clout than those guys… They said other things, but this was a 20min. presentation, which is already too short for even this one. It is on top, though.

I further emphasised the need with a quote by Jeff Sutherland:

“14 percent, are doing extreme programming practices inside the SCRUM, and there is where we see the fastest teams: using the SCRUM management practice with the XP engineering practices inside.” – Jeff Sutherland, 2011

Since Jeff was nice enough to mention XP, Extreme Programming, this allows us to take a look at what those XP Engineering practices are.

We’re mainly talking about the inner circle in the picture, since those are the practices that are most directly related to the act of creating code. The second circle of Hell^H^H^H^H XP is more concerned with the coordination between developers, while the outer ring deals with coordination and cooperation with the environment.

From the inner circle, I think Test Driven Development is the most important one. The XP practices are designed to reinforce each other. One good way to do Pair Programming, for instance, is to use the TDD development cycle to drive changing position: first one developer writes a test, then the other writes a piece of implementation, does some refactoring, and writes the next test, after which he passes the keyboard back to the first, who starts through the cycle again. And, since you are keeping each other honest, it becomes easier to stick to the discipline of working Test Driven.

TDD is, in my humble opinion, a little more central than the others, if only because it is so tightly woven with Refactoring and Simple Design. So let’s talk further about TDD!

Test Driven Development has a few different aspects to it, which are sometimes separately confused with the whole of TDD. I see these aspects:

  • Automated Tests
  • Unit Tests
  • Test-First
  • The Red, Green, Refactor process

I’ve had plenty of situations where people were claiming to do TDD, but it turned out their tests were not written before the production code, and were at a much higher level than you’d want for unit tests. Well, let’s look at those different aspects of TDD, and discuss why each of them is a good idea.

Why Automated Tests?

This is where some of the wonderful slides of James Grenning come in. In fact, for a much better discussion of these graphs, you should read his blog post! If you’re still with me, here’s the short version: Every Sprint you add new functionality, and that new functionality will need to be tested, which takes some fraction of the time that’s needed to implement it. The nasty thing is that while you’re adding new functionality, there is a good chance of breaking some of the existing functionality.

This means that you need to test the new functionality, and *all* functionality built in previous sprints! Testing effort grows with the age of your codebase.
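A back-of-the-envelope sketch of that growth (the numbers are invented): if every sprint adds features that cost one day to regression-test manually, and nothing is automated, sprint n has to re-test everything from sprints 1 through n.

```javascript
// Manual regression effort needed in sprint n, if each sprint adds
// functionality costing `perSprintDays` to re-test and nothing is automated:
// sprint n must re-test the features of all sprints 1..n.
function regressionEffortInSprint(n, perSprintDays) {
  return n * perSprintDays;
}

// Total manual testing done over the first n sprints: 1 + 2 + ... + n
function totalEffort(n, perSprintDays) {
  var total = 0;
  for (var i = 1; i <= n; i++) {
    total += regressionEffortInSprint(i, perSprintDays);
  }
  return total;
}

console.log(regressionEffortInSprint(10, 1)); // 10 days in sprint 10 alone
console.log(totalEffort(10, 1)); // 55 days over the first ten sprints
```

Automated tests flatten that curve: the re-test cost per sprint stays roughly constant instead of growing with the age of the codebase.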

Either you skip parts of your testing (decreasing quality) or you slow down. It’s good to have choices. So what happens is that pretty soon you’re stuck in the ‘untested code gap’, and thus stuck in the eternal firefighting mode that it brings.


This seems to be a great time to remind the reader that the primary goal of a sprint is to release complete, working, tested software. And if that doesn’t happen, then you will end up having to do that testing, and fixing, at a later date.

To try to illustrate this point, I’ve drawn this in a slightly suggestive form of a… waterfall!

Why Unit Tests?

So what is the case for unit tests? Let me first assure you that I will be the last to tell you that you shouldn’t have other types of tests! Integration Tests, Performance Tests, Acceptance Tests, … And certainly also do manual exploratory testing. You will never think of everything beforehand. But those tests should be added on top of the solid base of code covered by unit tests.

Combinatorial Complexity

Object Oriented code is built out of objects that can have a variety of inner states, and out of the interactions between those objects. The number of possible states of such interacting components grows huge very quickly! That means that testing such a system from the outside requires very big tests, which would not have easy access to all the details of all possible states of all objects. Unit Tests, however, are written with intimate knowledge of each object and its possible states, and can thus ensure that all possible states of the object are covered.
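To put a rough number on that combinatorial growth:

```javascript
// State combinations grow as a product: two collaborating objects with
// ten internal states each already give 10 * 10 = 100 combinations to
// cover from the outside; three such objects give 1000.
function stateCombinations(statesPerObject, objectCount) {
  return Math.pow(statesPerObject, objectCount);
}

console.log(stateCombinations(10, 2)); // 100
console.log(stateCombinations(10, 3)); // 1000
```

Unit tests sidestep this by covering each object’s ten states in isolation: thirty small tests instead of a thousand big ones.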

Fine Grained tests make it easier to pin-point defects

Perhaps obvious, but if you have small tests covering a few lines of code each, then when a defect is introduced, the failing test will point to it with an accuracy of a couple of lines. An integration or acceptance test will perhaps show the defect, but still leaves tens, hundreds or thousands of lines of code to comb through in search of it.

Code documentation that stays in sync

Unit tests document the interfaces of objects by demonstrating how the developer means them to be used. And because this is code, the documentation cannot get out of sync without breaking the build!

Supports changing the code

Unit tests are crucial to your ability to make changes to code. Be they refactoring changes or added functionality, if you don’t have detailed tests the chances of introducing a defect are huge. And without those tests, chances are you’ll be finding out about those shiny new defects in production.

Why Test-First?

OK, so we need unit tests. But why write them before writing the production code? One of the arguments is simply that there’s a good chance you won’t write them at all if you’ve already written your production code. But let’s pretend, just for a minute, that we’re all very disciplined, and do write those tests. And that we wrote our code in such a way that we can write unit tests for it with a normal level of effort. Time for another of Mr. Grenning’s slides!

The full story, again, can be found on his blog, but the short version is as follows. It has been an accepted fact in software development for quite a long time that the longer the period between introducing a defect and finding out about it, the more expensive it is to fix. This is easy to accept intuitively: the programmer won’t remember as well what he was thinking when the error was made; he might have written more code in the same general area; other people might have made changes in that part of the code. (How dare they!)

Now how does Test-First help alleviate this? Simple: if you write the test first, you will notice the defect immediately when you introduce it! Yes, granted, this does not mean that you can’t get any defects into production; there are many different types of bugs. But you’ll avoid a huge percentage of them! Some research on this indicates reductions of 50 to 90 percent in the number of bugs. More links to various studies on TDD can be found on George Dinwiddie’s wiki.

Red, Green, Refactor

And now we get to the last aspect of TDD, as an actual process of writing/designing code. The steps are simple: write a small test (red), that fails; write the simplest code that makes the test pass (green), and then refactor that code to remove duplication and ensure clean, readable code. And repeat!

One thing that is important in this cycle is that it is very short! It should normally last from 3 to 10 minutes, with most of the time actually spent in the refactoring step. The small size of the steps is a feature, keeping you from the temptation of writing too much production code which isn’t yet tested.
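The cycle is easiest to see in a tiny sketch. Everything below (the class, the method, the discount rule itself) is invented purely for illustration, and plain asserts stand in for a real test framework:

```java
// Hypothetical example of one red-green-refactor cycle.
public class RedGreenRefactor {

    // Refactor: the magic number 10 was extracted and named in step three.
    static final int DISCOUNT_PERCENT = 10;

    // Green: after the test below first failed (red), this was the
    // simplest code that made it pass.
    static long discountedPriceCents(long priceCents) {
        return priceCents - priceCents * DISCOUNT_PERCENT / 100;
    }

    public static void main(String[] args) {
        // Red: this assertion is written first, and fails until
        // discountedPriceCents() exists and is implemented.
        assert discountedPriceCents(10000) == 9000;
        System.out.println("test passed");
    }
}
```

Run with `java -ea RedGreenRefactor` so the assertion is actually checked; in a real cycle the assert would live in a unit test, and each pass through red-green-refactor would add one more.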

Clean code

Sticking to this process for writing code has some important benefits. Your code will be cleaner. Writing tests first means that you will automatically be writing testable code. Testable code has a lower complexity (see Keith Braithwaite’s great work in this area). Testable code will be more loosely coupled, because writing tests for tightly coupled code is bloody hard. Continuous refactoring will help keep the code tightly cohesive. Granted, this last one is still very much dependent on the quality of the developer, and his/her OO skills. But the process encourages it, and if your code is not moving towards cohesion, you’ll see duplication increasing (for instance in method parameters).
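To make the coupling point concrete, here is a minimal, invented example of what test-first tends to push you towards: because the test is written first, the class below ends up depending on a tiny `Clock` interface rather than reading the system clock directly, so the test can inject a stub.

```java
// Invented illustration: loose coupling as a side effect of test-first.
interface Clock {
    int currentHour();
}

class Greeter {
    private final Clock clock;

    Greeter(Clock clock) {
        this.clock = clock;
    }

    String greeting() {
        return clock.currentHour() < 12 ? "Good morning" : "Good afternoon";
    }
}

public class TestableDesign {
    public static void main(String[] args) {
        // The tests inject stub clocks; no real time source is needed.
        assert new Greeter(() -> 9).greeting().equals("Good morning");
        assert new Greeter(() -> 15).greeting().equals("Good afternoon");
        System.out.println("tests passed");
    }
}
```

Had the code been written first, `Greeter` would most likely have called the system clock directly, and the test would have been much harder to write.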

After last week’s presentation, in one of the Open Space discussions some doubt was expressed on whether a lower cyclomatic complexity is always an indication of better code. I didn’t have/take much time to go into that, and accidentally reverted to an invocation of authority figures, but it is a very interesting subject. If you look at the link to Keith Braithwaite’s slides, and his blog, you’ll notice that all the projects he’s discussing have classes with higher and lower complexity. The interesting part is that the distribution differs between code that’s under test and code that isn’t. He also links this to more subjective judgement of the codebases. This is a good indication that testing indeed helps reduce complexity. Complexity is by no means the only measure though, and yes, you can have good code that has a higher complexity. But a codebase that tends to have more lower-complexity and fewer higher-complexity classes will almost always be more readable and easier to maintain.

Simple Design

XP tells you You Ain’t Gonna Need It. Don’t write code that is not (yet) absolutely necessary for the functionality that you’re building. Certainly don’t write extra functionality! Writing the tests first keeps you focused on this. Since you are writing the test with a clear objective in mind, it’s much less likely that you’ll go and add this object or that method because it ‘seems to belong there’.

Another phrase that’s popular is Don’t Repeat Yourself. The refactoring step in the TDD process has the primary purpose of removing duplication, and duplication usually means harder to maintain. These things keep your code as lean as possible, supporting the Simple Design philosophy.
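As a hypothetical illustration of that refactoring step: the two report methods below originally each contained their own copy of the header-formatting expression; extracting it into one helper removes the duplication, so a change to the header format now only has to happen in one place.

```java
// Invented example: duplication removed in the refactoring step.
public class ReportFormatter {

    // Extracted helper; before refactoring, this expression was
    // duplicated inside both report methods below.
    private static String header(String title) {
        return "=== " + title.toUpperCase() + " ===";
    }

    static String salesReport() {
        return header("sales") + "\n...";
    }

    static String stockReport() {
        return header("stock") + "\n...";
    }

    public static void main(String[] args) {
        assert salesReport().startsWith("=== SALES ===");
        assert stockReport().startsWith("=== STOCK ===");
        System.out.println("tests passed");
    }
}
```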

Why Not?

So why isn’t everyone in the world working with this great TDD thing I’ve been telling you about? I hear many reasons. One of them is that they didn’t know about it, or were under the misconception that TDD is indeed just Test Automation, or Unit Testing. But in the Internet age, that excuse is a bit hard to swallow.

More often, the first reaction is along the lines of “We don’t have the time”.

I’ve just spent quite a bit of time discussing how TDD will normally save you time, so it doesn’t make much sense to spend more time discussing why not having time isn’t an argument.

But there is one part of this objection that does hold: getting started with TDD will take time. As with everything else, there’s a learning curve. The first two or three sprints, you will be slower. Longer, if you have a lot of code that wasn’t written with testing in mind. You can speed things up a bit by getting people trained, and getting some coaches in to help get used to it. But it will still take time.

Then later, it will save you time. And embarrassment.

Another objection I’ve encountered is ‘But software will always contain bugs!’.

Sure. Not as many as now, but I’m sure there will be bugs. Still, that argument makes as much sense to me as saying that we will always have deaths in traffic due to people crossing the road, even if we have traffic lights! So we might as well get rid of all traffic lights?

There will always be some people that fall ill, even if we inoculate most everyone against a disease. So should we stop the inoculations?

Nah.

Uhm… Yeah…

At the same time the strongest and the weakest of arguments. Yes, in many companies there are bosses who will have questions about time spent writing tests.

The thing is: you, as a developer are the expert. You are the one who can tell your boss why not writing the test will be slower. Will slow you down in the future. Why code that isn’t kept clean and well factored will bog you down into the unmaintainability circle of hell until the company decides to throw it all away and start over. And over. And over.

You are also the one whose desk that guy is going to be standing at, telling you that he’s going to have to ask you to come in on Saturday. And Sunday. And you know what? That’s going to be your own fault for not working better.

And in the end, if you’re unable to impress them with those realities, use the Law of Two Feet, and change companies. Really. You’ll feel so much better.

This last one is, I must admit, the most convincing of the reasons why people are not TDDing. Legacy code really is hard to test.

There’s some good advice out there on how to get started. Michael Feathers’ book Working Effectively With Legacy Code is the best place to start. Or the article of the same name, to whet your appetite. And the new Mikado Method definitely has promise, though I haven’t tried that on any real code yet.

And the longer you wait to get started, the harder it’s going to be…

How?

I closed the presentation with two sheets on how to get to the point where you can deliver at high quality.

For a developer, it’s important to keep learning. There are a lot of great books out there. And new ones coming out all the time. But don’t just read. Practice. At Qualogy I’ve been organising Coding Dojos, where we get together and work on coding problems, as an exercise. A lot of learning, but also a lot of fun to do! Of course, we’ve also all had training in this area, getting Ron Jeffries and Chet Hendrickson over to teach us Agile Development Skills.

Make sure that when you give an estimate, you include plenty of time for testing. We developers simply have a tendency to overlook this. And when you’re still new to TDD, make a little more room for that. Commit to 30% fewer stories for your next three sprints, while you’re learning. And when someone tries to exert some pressure to squeeze in just that one more feature, that you know you can only manage by lowering quality? Don’t. Say NO. Explain why you say no, say it well, and give perspective on what you can say yes to. But say no.

For a manager, the most important thing you can do is ensure that you consistently emphasize quality over schedule. For a manager this is also one of the most difficult things to do. We’ve been talking about the discipline required for developers to stick to TDD. This is where the discipline for the manager comes in. It is very easy to let quality slip, and it will happen almost immediately as soon as you start emphasizing schedule. You want it as fast as possible, but only if it’s solid (and SOLID).

You can encourage teams to set their own standards. And then encourage them to stick to them. You’ll know you’ve succeeded when, the next time you slip up and put on a little pressure, they tell you: “No, we can’t do that and stick to our agreed level of quality.” Congrats :-)

You can send your people to the trainings that they need. Or get help in to coach them (want my number? :-)

You can reserve time for those practice sessions. You will benefit from them, so there’s no reason they have to be solely on private time.

Well, it’s nicely consistent. I went over my time when presenting this, and over my target word-count when writing it down. If you stuck with me till this end, thanks! I’m also learning, and short, clear posts are just as hard as short, clear code. Or short, clear presentations.

Scrum Gathering London 2011 – day 3

The third day of the London Scrum Gathering (day 1, day 2) was reserved for an Open Space session led by Rachel Davies and a closing keynote by James Grenning.

We started with the Open Space. I’d done Open Spaces before, but this one was definitely on a much larger scale than what I’d seen before. After the introduction, everyone that had a subject for a session wrote their subject down, and got in line to announce the session (and have it scheduled). With so many attendees, you can imagine that there would be many sessions. Indeed, there were so many sessions that extra spaces had to be added to handle all of them. The subject for the Open Space was to be the Scrum Alliance tag-line: “Changing the world of work”.

I initially intended to go to a session about Agile and Embedded, as the organiser mentioned that if there wouldn’t be enough people to talk about the embedded angle, he was OK with widening the subject to ‘difficult technical circumstances’. I haven’t done much real embedded work, but was interested in the broader subject. It turned out, though, that there were plenty of people interested in the real deal, so the talk quickly started drifting to FPGAs and other esoterica and I used the law of two feet to find a different talk.

My second choice for that first period was a session about getting involvement of higher management. This session, proposed by Joe Justice and Peter Stevens (people with overlapping subjects were merging their sessions), turned out to be very interesting and useful. The group shared experiences of both successfully (and less successfully) engaging higher management (CxOs, VPs, etc.) in an Agile Change process. Peter has since posted a nice summary of this session on his blog.

My own session was about applying Agile and Lean-Startup ideas to the context of setting up a consultancy and/or training business. If we’re really talking about ‘transforming the world of work’, then we should start with our own work. My intention was to discuss how things like transparency, early feedback, and working iteratively and incrementally could be applied to an Agile Coach’s work. My colleague and I have been working to try and approach our work in this fashion, and are starting to get the hang of this whole ‘fail early’ thing. We’ve also been changing our approach based on feedback from customers, and more importantly, not-customers. During the session we talked a little about this, but we also quickly branched off into some related subjects. Some explanation of Lean-Startup ideas was needed, as not everyone had heard of that. We didn’t get far on any discussion of using some of the customer/product development ideas from that side of things, though.

There was some discussion on contracts, and how those can fit with an agile approach. Most coaches are working on a time-and-material basis, it seems. We have a ‘Money for nothing and your changes for free’ type contract (see http://jeffsutherland.com/Agile2008MoneyforNothing.pdf, sheets 29-38) going for consultancy at the moment, but it’s less of a fit than with a software development project. Time and material is safest for the coach, of course, but also doesn’t reward them for doing their work better than the competition. How should we do this? Jeff’s ‘money back guarantee’ if you don’t double your velocity is a nice marketing gimmick, but a big risk for us lesser gods: Is velocity a good measure for productivity and results? How do we measure it? How do we determine whether advice was followed?

Using freebies or discounts to test new training material on customers was in more general use. This has really helped us quickly improve our workshop materials, not to mention hone the training skills…

One later session was Nigel Baker’s. He did a session on the second day called ‘Scrumbrella’, on how to scale Scrum, and was doing a second one of those this third afternoon. I hadn’t made it to the earlier one, but had heard enthusiastic stories about it, so I decided to go and see what that was about. Nigel didn’t disappoint, and had a dynamic and entertaining story to tell. He made the talk come alive by the way he drew the organisational structures on sheets of paper, and moved those around on the floor during his talk, often getting the audience to provide new drawings.

There is no way I can do Nigel’s presentation style justice here. There were a number of people filming him in action on their cellphones, but I haven’t seen any of those movies surface yet. For now, you’ll have to make do with his blog-post on the subject, and some slides (which he obviously didn’t use during this session). I can, however, show you what the final umbrella looked like:

All my other pictures, including some more from the ScrumBrella session, and from the Open Space closing, can be found on flickr.

Closing Keynote by James Grenning on ‘Changing the world of work through Technical Excellence’

(slides are here)

The final event of the conference was the closing keynote by James Grenning. His talk dealt with ‘Technical Excellence’, and as such was very near my heart.

He started off with a little story, for which the slides are unfortunately not in the slide deck linked above, about how people sometimes come up to him in a bar (a conference bar, I assume, or pick-up lines have really changed in the past few years) and tell him: “Yeah, that agile thing, we tried that, it didn’t work”.

He would then ask them some innocent questions (paraphrased; I don’t have those slides, nor a perfect memory):

So you were doing TDD?

No.

Ah, but you were doing ATDD?

No.

But surely you were doing unit testing?

Not really.

Pair programming?

No.

Continuous Integration?

Sometimes.

Refactoring?

No!

At least deliver working software at the end of the sprint?

No…

If you don’t look around and realise that to do Agile, you’ll actually have to improve your quality, you’re going to fail. And if you insist on ignoring the experience of so many experienced people, then maybe you deserve to fail.

After this great intro, we were treated to a backstage account of the way the Agile Manifesto meeting at Snowbird went. And then about what subjects came up after this year’s reunion meeting. James showed the top two things coming out of the reunion meeting:

We believe the agile community must:

  1. Demand Technical Excellence
  2. Promote individual [change] and lead organizational change

The rest of his talk was a lesson in doing those things. He first went into more detail on Test Driven Development, and how it’s the basis for improving quality.
To do this, he first explains why Debug-Later-Programming (DLP) in practice will always be slower than Test-First/TDD.

The Physics of Debug-Later-Programming

DLP vs TDD

Mr. Grenning went on to describe the difference between System Level Tests and Unit Tests, saying that the System Level Tests suffer from having to test the combinatorial total of all contained elements, while Unit Tests can be tailored by the programmer to directly test the use of the code as it is intended, and only that use. This means that, even though System Level Tests can be useful, they can never be sufficient.
Of course, the chances are small that you’ll write sufficient and complete Unit Tests if you don’t do Test Driven Development, as Test-After always becomes Release-First. And depending on manual testing is a recipe for unmaintainability.
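A quick back-of-the-envelope sketch (with invented numbers) of that combinatorial argument: for a chain of components, system-level tests have to cover the product of the per-component paths, while unit tests only need the sum.

```java
// Invented illustration: why system-level coverage explodes
// combinatorially while unit-level coverage grows linearly.
public class TestCounts {

    // End-to-end scenarios needed: the product of each component's paths.
    static long systemLevelCases(int[] pathsPerComponent) {
        long product = 1;
        for (int p : pathsPerComponent) product *= p;
        return product;
    }

    // Unit tests needed: just the sum of each component's paths.
    static long unitLevelCases(int[] pathsPerComponent) {
        long sum = 0;
        for (int p : pathsPerComponent) sum += p;
        return sum;
    }

    public static void main(String[] args) {
        int[] components = {4, 4, 4};  // three components, four paths each
        assert systemLevelCases(components) == 64;
        assert unitLevelCases(components) == 12;
        System.out.println("64 system-level cases vs 12 unit tests");
    }
}
```

Add a fourth component and the system-level count quadruples to 256, while the unit-test count only grows to 16.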

The Untested Code Gap

The keynote went on to talk about individual and organisational change, and what Developers, Scrum Masters and Managers can do to improve things. Developers should learn, and create tested and maintainable code. Scrum Masters should encourage people to be Problem Solvers, not Dogma Followers. He illustrated this with the example of Planning Poker. As he invented Planning Poker, him saying that you shouldn’t always use it is a strong message. For instance, if you want to estimate a large number of stories, other systems can work much better. Managers got the advice to Grow Great Teams, Avoid De-Motivators, and Stop Motivating Your Team!

DeMotivators

It was very nice to be here for this talk, validating my own stance on Technical Excellence, and teaching me new ways of talking to people in order to help them see the advantages of improving their technical practices. Oh, and some support for my own discipline in strictly sticking to TDD…

Article on Oracle and Agile posted (Dutch)

My article on using Agile on projects using Oracle technology was published in this month’s Optimize magazine, and the full text (in Dutch!) is now available on the Qualogy website.

The article is mainly meant as an introduction to agile principles and practices for Oracle developers that are new to them. It focuses on explaining short feedback loops, and the importance of technical practices such as Continuous Integration and (Unit) testing for success with iterative and incremental development.

Code Cleaning: A Refactoring Example In 50 Easy Steps

One of the things I find myself doing at work is looking at other people’s code. This is not unusual, of course, as every programmer does that all the time. Even if the ‘other people’ is him, last week. As all you programmers know, rather often other people’s code is not very pretty. Partly this can be explained by the fact that, as every programmer knows, no one is quite as good at programming as himself… But very often, way too often, the code really is not all that good.

This can be caused by many things. Sometimes the programmers are not very experienced. Sometimes the pressure to release new features is such that programmers feel pressured into cutting quality. Sometimes the programmers found the code in that state, and simply didn’t know where to start to improve things. Some programmers may not even have read Clean Code, Refactoring, or the Pragmatic Programmer! And maybe no one ever told them they should.

Recently I was asked to look at a Java codebase, to see if it would be possible for our company to take that into a support contract. Or what would be needed to get it to that state. This codebase had a number of problems: a lack of tests, lots of code duplication, and a very uneven distribution of complexity (lots of ‘struct’ classes, with the logic that should be in them spread out, and duplicated, over the rest). There was plenty wrong, and Sonar quickly showed most of it.

Sonar status

When discussing the issues with this particular code base, I noticed that the developers already knew quite a few of the things that were wrong. They did not have a clear idea of how to go from there towards a good state, though. To illustrate how one might approach this, I spent a day making an example out of one of the high complexity classes (cyclomatic complexity of 98).

Larger examples of refactoring are fairly rare out there, so I figured I’d share this. Of course, package and class names (and some constants/variables) have been altered to protect the innocent.

I’d like to emphasize that none of this is very special. I’m not a wizard at doing this, by any standard. I don’t even code full time nowadays. That’s irrelevant: The point here is precisely that by taking a series of very simple and straightforward steps, you can improve your code tremendously. Anyone can do this! Everyone should…

I don’t usually shield off part of my posts under a ‘read-more’ link, but this post had become HUGE, and I don’t want to harm any unsuspecting RSS readers out there. Please, do read the whole thing. And: Let me (and my reader and colleagues) know how this can be done better!

Continue reading