The three failures of Continuous Delivery

Everyone seems to want to get on the Continuous Delivery train. Rightfully so, I think. For most, though, it's not an easy ride. From my work with clients and conversations with other coaches, there are a few common barriers to adoption.

In the end, the goal is to be able to react faster to the market. And, to be honest, to finally be in actual control. In business terms, it's about cycle times: short cycle times are what allow you not just to react quickly to market circumstances, but to actively probe markets and test product ideas.

So, as I mentioned, there are a few common problems companies run into. First, there are the basic technical steps needed to create a fully automated pipeline. Then comes getting the tests to a level that gives enough confidence to deploy to production whenever they've run. Only when those technical matters have been sorted do we get to the more interesting issue of allowing the business to make use of the possibilities offered by this newfound agility. Each of these has its own challenges.

Let's have a look at the ways each of these subjects gives teams trouble, in the hope that, forewarned, some will be able to avoid them. I'll go into more detail on how to avoid them in subsequent posts.

Get a pipeline

Now, if you've paid any attention to the literature, you know that at its core CD is all about important things like process and a culture of quality. Which is all true, but that probably won't help you very much. Most development organisations have spent years wrapping themselves in workarounds and buffers, all painstakingly created to prevent detection of their real problems. So taking a relatively small, technical step by setting up a delivery pipeline at least seems somewhat feasible, and will by its nature start showing where some of the real problems lie.

A Delivery Pipeline

From what I’ve seen, just trying to set up that pipeline is trouble enough. That’s why I’ve put it as the first barrier to adoption of CD. It may seem easy, but there turn out to be many basic technical challenges. Most teams go through those same pains, and it’s not really surprising. There’s quite a bit of (often new) knowledge and skills involved. And teams usually have to deal with all kinds of legacy code and infrastructure, which doesn’t make it any easier.

Mostly, what companies find here is that they are missing skills. And there are a lot of skills involved! A real DevOps approach should include operations knowledge in the team, but even then, the skills needed to create a modern, fully automated infrastructure take most organisations a long time to develop.
It's not that these things are beyond those teams, it's just that they've not had to deal with them before. Sure, it is easy enough to package your application in a Docker container and run it locally, but people are discovering that building it out further than that is quite a different thing.

Testing

Testing is the Achilles heel of many development teams. Most agile teams work hard to get and keep their code under test. Many fail. The advantage that Continuous Delivery has is that it sets explicit expectations on quality: there's really no room to skimp on testing if every push you do should end up in production.
As was the case for Continuous Integration, testing is what makes a delivery pipeline useful. It's great if you have fully automated deployment, but if you have no way to determine whether the code you're building can be trusted, you still won't be in production any sooner.
There are different ways teams fail with testing: insufficient unit testing; too limited protocol and service testing; a reliance on slow and brittle end-to-end tests; and skipping manual / exploratory testing, which may no longer be a gateway before going into production but is still very much necessary.
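To make the first of those concrete, here's the kind of fast, isolated unit test that should carry most of the load (a minimal sketch in JavaScript with QUnit; the function and names are invented for illustration). It runs in milliseconds, touches no network or database, and gives an unambiguous verdict on every push:

// discount.js - the unit under test (invented example)
function discountedPrice(price, customerType) {
    // gold customers get 10% off; everyone else pays full price
    return customerType === "gold" ? price * 0.9 : price;
}

// discountTest.js - fast and isolated: no browser, no network, no database
test("gold customers get 10% off", function () {
    equal(discountedPrice(100, "gold"), 90, "10% discount applied");
});

test("regular customers pay full price", function () {
    equal(discountedPrice(100, "regular"), 100, "no discount applied");
});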

Business

Organisations that manage to get past the first two hurdles have at their disposal a tool that can bring them unimagined business advantages. But even having come this far, existing silos, processes and political positioning prevent organisations from profiting from their newly found technical capabilities.

Symptoms of this can be found in the ignoring, or even complete lack, of market data when deciding on new products and functionality. In continuing a practice of long-term planning, without built-in checks to see whether the intended goals are being achieved. In basing priorities on political influence instead of business goals. And even in a reluctance to release new features to users once they're available behind a feature toggle in production.

These issues can be the most difficult to address and need to be picked up at the highest management levels. They are attacked with changes in goal setting, reward systems, and organisational structure.

Interlocking pieces

As with any process, these different elements cannot exist for long without the others to support them. Testing withers if it cannot be run quickly and frequently enough. A delivery pipeline has little value if you have no way to know if you can trust the code that it’s building. And a highly evolved technical team that is not clearly and directly involved with business goals and customers will easily find more fulfilling work elsewhere.

That's why my advice is to start in this order, picking up the next challenge as soon as there's clear progress on the previous one. You start building technical skills, and then use that base as a flywheel to get a change going in the rest of the company.

Top Gear: A New Refactoring Kata

For the last five or six years, I’ve been using coding exercises during job interviews. After talking a little with a candidate I open my laptop, call up an editor, and we sit together to do some coding.

My favourite exercise for this is a refactoring kata that I came up with. I've always found how people deal with bad code they encounter more interesting than any small amount of new code they can write in such a short period.

The form of the kata is very much inspired by the 'Gilded Rose' kata, but it's intentionally smaller, so that it's possible to get to a point where tests have been written and the code refactored within about an hour to an hour and a half.

The code is supposed to be that of an automatic transmission. Someone has built it, but it was probably (hopefully!) never released. You are asked to make a few improvements so that the gearbox can be made more energy efficient in the future. This is the description:

The code that we need to work in looks like this:

I've made Java, PHP and Ruby versions available in my GitHub repository: https://github.com/wouterla/TopGearKata
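To give a feel for the kind of code you'll be facing, here is an invented fragment in the same spirit (not the actual kata code; for that, see the repository). The style is deliberately bad, because that's the point of the exercise:

// Invented illustration only -- the real kata code lives in the repository.
var Gearbox = function () {
    this.g = 1; // current gear
};

// Shifts up above 2000 RPM and down below 500 RPM, in classic
// hard-to-read style: unexplained names and nested conditionals.
Gearbox.prototype.doit = function (r) {
    if (r > 2000) {
        if (this.g < 6) {
            this.g = this.g + 1;
        }
    } else {
        if (r < 500 && this.g > 1) {
            this.g = this.g - 1;
        }
    }
    return this.g;
};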

If you add a language, let me know!

Don’t Refactor. Rebuild. Kinda.

I recently had the chance to speak at the wonderful Lean Agile Scotland conference. The conference had a very wide range of subjects being discussed at an amazingly high level: complexity theory, lean thinking, agile methods, and even technical practices!

I followed a great presentation by Steve Smith on how the popularity of feature branching strategies makes Continuous Integration difficult to impossible. I couldn't have asked for a better lead-in for my own talk.

Which is about giving up and starting over. Kinda.

Learning environments

Why? Because, when you really get down to it, refactoring an old piece of junk, sorry, legacy code, is bloody difficult!

Sure, if you give me a few experienced XP guys, or ‘software craftsmen’, and let us at it, we’ll get it done. But I don’t usually have that luxury. And most organisations don’t.

When you have a team that is new to agile development practices such as TDD, refactoring and clean code, learning that stuff in the context of a big ball of mud is really hard.

You see, when people start to learn about something like TDD, they do some exercises, read a book, maybe even get some training. They'll see this kind of code:

Example code from Kent Beck's book: "Test-Driven Development: By Example"
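The scan of the book page is missing from this archive, but the flavour is easy to reproduce. Book examples look something like this (a made-up JavaScript rendition in the spirit of Beck's Money example, not his actual Java):

// Small, focused, and written test-first (illustrative only)
function Money(amount, currency) {
    this.amount = amount;
    this.currency = currency;
}

Money.prototype.times = function (multiplier) {
    return new Money(this.amount * multiplier, this.currency);
};

test("multiplication", function () {
    var five = new Money(5, "USD");
    equal(five.times(2).amount, 10, "5 USD times 2 is 10 USD");
});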

Then they get back to work, and are on their own again, and they’re confronted with something like this:

Code Sample from my post “Code Cleaning: A refactoring example in 50 easy steps”
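That image is also missing here, but again, the flavour is familiar enough. Think of something along these lines (invented, but representative):

// Invented, but representative of the kind of code teams actually face:
// unexplained parameters, magic numbers, and tangled conditionals.
function process(d, x, flag) {
    var r = null;
    if (d != null) {
        if (x == 1 || flag) {
            r = d.val * 1.21; // magic number -- VAT? nobody remembers
        } else if (x == 2 && d.val > 0 && !flag) {
            r = d.val * 1.19; // the old rate? also used for exports?
        } else {
            r = d.val; // fall-through: hope for the best
        }
    }
    return r;
}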

And then, when they say that TDD doesn't work, or that agile won't work in their 'real world' situation, we say they didn't try hard enough. But in these circumstances it is very hard to succeed.

So how can we deal with situations like this? As I mentioned above, an influx of experienced developers who know how to get a legacy system under control is wonderful, but not very likely. Developers who haven't done that sort of thing before will really need time to gain the necessary skills, and that needs to happen in a more controlled, or controllable, environment. Like a new codebase, started from scratch.

Easy now, I understand your reluctance! Throwing away everything you’ve built and starting over is pretty much the reverse of the advice we normally give.

Let me explain using an example.

DevOps and Continuous Delivery

If you want to go fast and have high quality, communication has to be instant, and you need to automate everything. Structure the organisation to make this possible, and learn to use the tools to do the automation.

There's a lot going on around DevOps and Continuous Delivery. Great buzzwords, and actually great concepts, but not altogether new ones. For many organisations, though, they're an introduction to agile concepts, and that sometimes means that some of the background people have when they arrive at these ideas the natural way, through Agile process improvement, is missing. So what are we talking about?

DevOps: The combination of software developers and infrastructure engineers in the same team, with shared responsibility for the delivered software.

Continuous Delivery: The practice of being able to deliver software to (production) environments in a completely automated way. With VM technology this includes the roll-out of the environments.

Both of these are simply logical extensions of Agile and Lean software development practices. DevOps is one particular instance of the Agile multi-functional team. Continuous Delivery is the result of Agile’s practice of automating any repeating process, and in particular enabled by automated tests and continuous integration. And both of those underlying practices are the result of optimizing your process to take any delays out of it, a common Lean practice.

In Practice

DevOps is an organisational construct. The responsibility for deployment is integrated into the multi-functional agile team in the same way that requirements analysis, testing and coding already were. This means an extension of the necessary skills in the team: system administration skills, but also a fairly new set of skills for treating the infrastructure as if it were code, with versioning, testing, and continuous integration.

Continuous Delivery is a term for the whole of the process that a DevOps team performs. A Continuous Delivery (CD) process consists of developing software, automating testing, automating deployment, automating infrastructure deployment, and linking those elements so that a pipeline is created that automatically moves developed software through the normal DTAP (development, test, acceptance, production) stages.

Both of these concepts have practices and tools attached, which we'll discuss briefly.

Practices and Tools

DevOps

Let’s start with DevOps. There are many standard practices aimed at integrating skills and improving communication in a team. Agile development teams have been doing this for a while now, using:

  • Co-located team
  • Whole team (all necessary skills are available in the team)
  • Pairing
  • Working in short iterations
  • Shared (code, but also product) ownership
  • (Acceptance) Test Driven Development

DevOps teams need to do the same, bringing the operations skill set into the team.

One question that often comes up is: "Does the entire team suddenly need to have this skill?" The answer to that is, of course, "No". But in the same way that Agile teams have made testing a whole-team effort, so operations becomes a whole-team effort. The people in the team with deep skills in this area will work together with some of the other team members in the execution of tasks. Those others will learn something about this work, and become able to handle at least the simpler items independently. The ops person, in turn, can learn from the developers how to structure his scripts better, enabling re-use, or from the testers how to test and monitor the product better.

An important thing to notice is that these practices we use to work well together as a team are mutually reinforcing: each strengthens the effectiveness of the others. That means it's much harder to become effective as a team if you adopt only one or two of them.

Continuous Delivery

Continuous Delivery is all about shortening the feedback cycle of software development. Feedback comes from different places, mostly testing and user feedback. Testing happens at different levels (unit, service, integration, acceptance, …) and on different environments (development, test, acceptance, production). The main focus of CD is to get the feedback from each of those as fast as possible.

To do that, we need to have our tests run on every code change, on every environment, as reliably and quickly as possible. And to do that, we need to be able to completely control deployment of and to those environments, automatically, and for the full software stack.

And to be able to do that, there are a number of tools available. Some have been around for a long time, while others are relatively new. Especially the tools that are able to control full (virtualised) environments are still relatively fresh. Some of the testing tooling is not exactly new, but still seems fairly unknown in the industry.

What do we use that for?

You’re already familiar with Continuous Integration, so you know about checking in code to version control, about unit tests, about branching strategies (basically: try not to), about CI servers.

If you have a well-constructed CI solution, it will include building the code, running unit tests, creating a deployment package, and deploying to a test environment. The deployment package will be usable on different environments, with configuration provided separately. You might use tools such as the Cargo plugin for deployment to test (and beyond?), and keep a versioned history of all your deployment artefacts in a repository.

So what is added to that when we talk about Continuous Delivery? First of all, there’s the process of automated promotion of code to subsequent environments: the deployment pipeline.

A deployment pipeline

This involves deciding which tests to run at which stage (based on their dependence on environments, and on their runtime) to optimise for a short feedback loop with as detailed a detection of errors as possible. It also requires deciding which parts of the pipeline run fully automatically, and where human intervention is still assumed to be necessary.
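As an illustration of that trade-off, you can think of the pipeline as an ordered list of stages, cheapest and most precise feedback first, so that most failures are caught in minutes rather than after a full environment roll-out. A hypothetical sketch in JavaScript (the stage names, timings and helper function are invented):

var pipeline = [
    { stage: "commit",      tests: "unit",                    environment: "build server", minutes: 5,  automatic: true },
    { stage: "integration", tests: "service / protocol",      environment: "test",         minutes: 20, automatic: true },
    { stage: "acceptance",  tests: "end-to-end, smoke",       environment: "acceptance",   minutes: 60, automatic: true },
    { stage: "production",  tests: "monitoring, exploratory", environment: "production",   minutes: 0,  automatic: false }
];

// A build is only promoted to the next stage when the current one passes.
function promote(build) {
    return pipeline.every(function (s) {
        console.log("running " + s.tests + " tests for " + build + " on " + s.environment);
        return runStage(s, build); // stub: triggers the stage's test suite
    });
}

function runStage(stage, build) {
    return true; // in reality: the stage's test results decide this
}

promote("build-42");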

Another thing that we are newly interested in for the DevOps/CD situation is infrastructure as code. This has been enabled by the emergence of virtualisation, and has become manageable with tools such as Puppet and Chef. These tools turn the definition of an environment into code, including hardware specs, OS, installed software, networking, and the deployment of our own artefacts. That means that a test environment can be a completely controlled system, whether it runs on a developer's laptop or in a hosted server environment. And that kind of control removes many common error situations from the software delivery equation.

The ‘Just Do It’ Approach To Change Management

Last Friday I gave a talk at the Dare 2013 conference in Antwerp. The talk was about the experiences my colleague Ciarán ÓNeíll and I had on a recent project, where we found that sometimes a very directive, 'Just Do It' approach can actually be the best way to get people into an agile mindset.

Update: The full video of this talk, as given at 'Agile on the Beach', is available on YouTube.

This was surprising to us, to say the least, so we've tried to find some theory supporting our experiences. And though theory is not the focus of this story, it helps to set the scene by referencing two bits of theory that we think fit our experience.

Just Do It

A long time ago, in a country far away, there was this psychologist called William James, who wrote:

“If you want a quality, act as if you already have it.” – William James (1842-1910)

We often say that if you want to change your behaviour, you need to change your mind, be disciplined, etc. But this principle tells us that it works the other way around as well: if you change your behaviour this can change your thinking. Or mindset, perhaps?

For more about the ‘As If’ Principle, see the book by Richard Wiseman

Another related piece of theory is complexity thinking, as embodied by the Cynefin framework. Cynefin talks about taking different actions when managing situations in different domains: simple, complicated, complex or chaotic.

Cynefin Framework

The project

And in chaos, our story begins.

This particular project was a development project for a large insurance company. The project had already been active for over half a year when we joined. It was a bad case of waterfall, with unclear requirements, lots of silos, lots of finger pointing and no progress.

The customer got tired of this, and brought in a high-powered project manager who was given a far-reaching mandate to get the project going (i.e. no guarantees, just get *something* done). This guy decided that he'd heard good things about this 'Agile' thing, and that it might be appropriate here as a risk-management tool. Which was where we came in.

And this wasn't the usual agile transition, with its mix of proponents and reluctants, where you coach and teach, but also have to sell the process to a large extent.

Here, everyone was external (to the customer), no one wanted Agile or had much experience with it, but the customer was demanding it! And the customer was taking full responsibility for delivery, switching the project to a time-and-materials basis for the external parties.

A whole new ballgame.

Initial actions

We started out by getting everyone involved into one location. Up to then, people from four different vendors, roughly 60 in all, had been in different locations, in different countries even. Most of them had never met or even spoken! From then on, we all worked from the office in Amsterdam.

We started with implementing a fairly standard Scrum process.

Step one was requiring multi-functional teams, mixing the vendors. This was tolerated. Mostly, I think, because people thought they could ignore it. Then we explained the other requirements: one-week sprints, small stories (< 2-3 days), grooming, planning, demo, retro. These things were all, in turn, declared completely impossible, and certainly unworkable in our circumstances. But the customer demanded it, so they tried. And at the end of the first week, we had our first (weak) demo.

So, we started with basic Scrum. The difference was in the way this was sold to the teams. Or wasn’t.

That is not to say that we didn’t explain the reasons behind the way of working, or had discussions about its merit. It’s just that in the end, there was no option of not doing it.

And… It worked!

The big surprise to us was how well this worked. People adjusted quickly, got to work, and started delivering working software almost immediately. Every new practice we introduced, starting with testing within the sprint, met with some resistance, and within 4 to 6 weeks was considered normal.

After a while we noticed that our retrospectives changed from simply complaining about the process to open discussions about impediments, and valuable input for improvements generated by our teams.

And that’s what we do all this for, right? The continuous improvement mindset? Scrum, after all, is supposed to surface the real problems.

Well. It sure did.

Automated testing

One of those problems is one you will be familiar with: if you've been delivering software weekly for a while, manual testing can't keep up. And so we got more and more quality issues.

We had been expecting this, and we had our answer ready. And since we’d had great success so far in our top-down approach, we didn’t hesitate much, and we started asking for automated testing.

Adoption

Resistance here was very high, much more so than for other changes. Impossible! But we'd heard all those arguments before, and why would this situation be any different? We set down the rules: every story is tested, tests are automated, and all of this happens within the sprint.

"Inconceivable!"

And sure enough, after a couple of sprints we started seeing automated tests in the sprint, and the hit in velocity recovered to almost the level we had had before.

See. It’s Simple! Just F-ing Do It!

Limitations

Then after another 3-4 sprints, it all fell apart.

Tests were failing frequently, were only built against the UI, and had lots of technical shortcomings. And while tests were now built within the team, they were still built in isolation: a 'test automation' person built them, and even they were decidedly unconvinced they were doing the right thing.

In the end, it took us another 6 months to dig our way out of this hole. That took much coaching, getting extra expertise in, pairing and teaching. Only then did we arrive at the stop-the-line mindset about our tests that we needed.

Even with all of that going on, though, we were actually delivering working software.

And we were doing that much quicker than expected. After the initial delays in the project, the customer hadn't expected to start using the system until… well, about now, I think. But instead we had a (very) minimal but viable product in time for calculating the 2012 year-end figures. And while we were at it, since we could roll out new environments at a whim (well… almost :-) thanks to our efforts in the area of Continuous Delivery, we could also do a re-calculation of the 2011 figures.

These new calculations enabled the company to free up a lot of money, so business-wise there's no doubt this was the right thing to do.

But it also meant that, suddenly, we were in production, and we weren’t really prepared to deliver support for that. Well, we really weren’t prepared!

Kanban

And that brings us to one of the most invasive changes we did during the project. After about 5 months, we moved away from Scrum and switched to Kanban.

Just Do It

At that time I was the scrum master of one of the teams, the one doing all the operations work. Our changes in priority were coming very fast, with many requests for production support. In our retros, the team stated that they felt nothing was getting done (our velocity was 0), while at the same time feeling stressed (overtime was happening). Not a good combination. This went on for a few sprints, and then we declared Kanban.

That's not the way one usually introduces Kanban. Which is carefully, evolutionarily, keeping everyone involved, not changing the process but just visualising it. You guys know how that's supposed to be done, right?

This was more along the lines: “Hey, if you can’t keep priorities stable for a week, we can’t plan. So we won’t.”

Of course, we did a little more than that. We looked carefully at the types of issues we had, and at the people available to work on them. We based some initial WIP limits on that, as well as a number of classes of service. And we put in some very basic explicit policies: no interruptions, except for expedite items; if we start something, we finish it; no breaking of WIP limits; and no days longer than 8 hours.

Adoption

That brought a lot of calm to the team, and immediately showed in better output. It also made the work being done much more transparent to the PO.

It worked well enough that another team, which was also experiencing issues with the planning horizon, opted to 'go Kanban' too. Later the rest of the teams followed, including the PO team.

Limitations

That is not to say there was no resistance to this change. The Product Owners in particular felt uncomfortable with it for quite some time. The teams also raised issues. All of that generated many of those nice incremental, evolutionary changes. And it still does. The mindset of changing your process to improve things has really taken root.

The most remarkable thing, though, about all that initial resistance was its direction. It was all about moving back to the familiar safety of… Scrum!

Wrap-up

I’d like to tell you more but this post is getting long enough already. I don’t have time to talk about our adventures with going from many POs to one, introducing Specification by Example, moving to feature teams, or our kanban ready board.

I do feel I need to leave you with some comforting words, though. Because parts of this story go against the normal grain of Agile values.

Directive leadership, instead of Servant Leadership? Top-Down change, instead of bottom-up support? Certainly more of a dose of Theory X than I can normally stomach!

And to see all of that work, and work quite well, is a little disconcerting. Yes, Cynefin says that decisive action is appropriate in some domains, but not quite in the same way.

And overcoming the familiar 'that won't work in our situation' resistance by making people try it is certainly satisfying, but we've also seen that fail quite disastrously where deep skills are required. That needs guidance: still no silver bullets.

Enlightened despotism is perhaps a dangerous tool. But what if it is the tool that instils the habits of Agile thinking? The tool that forcibly shakes people out of their old habits? That makes the despot obsolete?

Practice can lead to mindset. The trick is in where to guide closely, and when to let go.

Unit Testing JavaScript with QUnit and Mockjax

I've been experimenting a bit with JavaScript. My lack of real knowledge of the language, apart from some simple DOM manipulations, is starting to become embarrassing!

So a couple of months ago I decided I should pick up the JS axe and do some chopping. And the first step in guiding yourself into any new programming language is the selection of (or writing of…) a unit testing framework!

My first choice was QUnit. I'd decided that I'd stick close to the jQuery way of doing things to begin with, so this seemed a logical choice. I'm very much used to automated build systems, so my first step, after getting the most basic unit test running in a browser, was automating the testing. This was not as easy as I had hoped! Having to start a full-blown web browser during a build is frowned upon: it requires, apart from plenty of time, that the build server has a graphical UI running and available, and it is rather prone to errors. Setting up something like Rhino is easy enough, but will fail as soon as we need to do things like DOM manipulation. Luckily, there turned out to be a reasonable compromise: PhantomJS.

PhantomJS is a full WebKit browser, but a completely headless one! This means you can have it load web pages, which are fully functional, without needing a visible UI. It's completely scriptable from JavaScript, and runs very quickly. Great! The PhantomJS pages even include some examples of how to use it with QUnit.
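To give an idea of what that looks like: the sketch below is along the lines of the run-qunit.js example that ships with PhantomJS, loading a QUnit test page and polling the result element until the run is finished (the file name and polling details are my own):

// runQunit.js -- run with: phantomjs runQunit.js
var page = require('webpage').create();

// Pipe console output from the page into the build log.
page.onConsoleMessage = function (msg) { console.log(msg); };

page.open('test/tests.html', function (status) {
    if (status !== 'success') {
        console.log('Could not load the test page');
        phantom.exit(1);
    }
    // Poll until QUnit has rendered its result summary, then exit
    // with a code the build server understands.
    var interval = setInterval(function () {
        var failed = page.evaluate(function () {
            var el = document.getElementById('qunit-testresult');
            if (!el) { return null; }
            var f = el.getElementsByClassName('failed')[0];
            return f ? parseInt(f.innerHTML, 10) : null;
        });
        if (failed !== null) {
            clearInterval(interval);
            phantom.exit(failed > 0 ? 1 : 0);
        }
    }, 100);
});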

Of course, as any JavaScript I write is usually part of a Java project, and I had actually written a little (Atmosphere-based) server for my test project, I wanted to run this from a Maven build. I found the phantomjs-qunit-runner Maven plugin, and thought I was all set to go!

But it wasn't that easy… The Maven plugin worked fine, but had trouble with JavaScript libraries loaded during testing. Since my tests involved mocking out the service I was using (they were supposed to be unit tests, after all!), I could not manage to run them using the phantomjs-qunit-runner.

It took me a few attempts to understand how the Maven plugin dealt with making separate JavaScript files available to PhantomJS, but I finally managed to make it work.

If you are going to try anything from this post, make sure that you have a version of the phantomjs-qunit-runner that has my changes! As of writing, that means checking out trunk, and building it yourself.

From this point on, everything is easy!

We start with a Maven pom.xml that sets up the phantomjs-qunit-runner plugin:

    <build>
        <finalName>GoalServer</finalName>
        <plugins>
            <plugin>
                <groupId>net.kennychua</groupId>
                <artifactId>phantomjs-qunit-runner</artifactId>
                <version>1.0.12-SNAPSHOT</version>
                <configuration>
                    <jsSourceDirectory>src/main/javascript/</jsSourceDirectory>
                    <jsTestDirectory>src/test/javascript/</jsTestDirectory>
                    <ignoreFailures>false</ignoreFailures>
                    <phantomJsExec>/home/wouter/opt/phantomjs/bin/phantomjs</phantomJsExec>
                    <libraries>
                        <directory>src/test/javascript/lib/</directory>
                        <includes>
                            <include>**/*.js</include>
                        </includes>
                    </libraries>
                </configuration>
                <executions>
                    <execution>
                        <phase>test</phase>
                        <goals><goal>test</goal></goals>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>

You can see that I've stuck to the standard Maven directory structure, keeping my JavaScript in src/main/javascript and its tests in src/test/javascript. You do need to specify where the phantomjs executable is installed. This is slightly unfortunate, and in a real project it should be delegated to a configuration setting (in the Maven settings.xml, probably). For an example, having it hard-coded is clearer.
The part of this that I added is the libraries tag, where you use the default Maven fileset syntax to define all the libraries you want to have available when executing the tests. In my codebase I put all the JavaScript libraries in src/test/javascript/lib, but an argument could be made for putting them somewhere outside of your src dirs. The plugin doesn't care, as the fileset is translated to fully qualified paths before handing things over to PhantomJS.

I must admit that my goals weren't set very high for my first test. After all, this was to be my first JavaScript test! So it turned out like this:

test("Test Test", function() {
    console.log("Testing test");
    equal(1, 0, "equal!");
});

Very exciting! And indeed, it failed:

[ERROR] Failed to execute goal net.kennychua:phantomjs-qunit-runner:1.0.12-SNAPSHOT:test (default) on project GoalServer: One or more QUnit tests failed -> [Help 1]

Now, if you look carefully, you might be able to fix that test yourself. I'll leave it at that, because I was quick to move on to my next step, which involved calling a function in JavaScript code that lived in another file, not located in the test code directory.

test("Test true", function() {
   equal(1, GameScheduleClient.testing.isTrue(), "It s true");
});

This is calling the following mindbogglingly complex function in src/main/javascript/GameScheduleClient.js:

var GameScheduleClient = GameScheduleClient || {};

GameScheduleClient.testing = function() {
    return {
        isTrue : function() {
            return 1;
        }
    };
} ();

If this doesn't work, take a look at the advice above, and ensure you have a version of the qunit-runner that includes the patches I made. Otherwise you'll have to do what I did, and run around in circles for a day or so.

The next step is to be able to call a service, which we'll mock with Mockjax. I'm not going to explain all the details of how Mockjax works; for that, I suggest you read something from someone who actually understands this stuff. But as long as you've put the library in the right place, and use the right version of the Maven plugin, the following code should work:

module("Mock Ajax", {
    setup: function () {
        $.mockjax({
            url:"/mockedservice",
            contentType:"text/json",
            responseText:[ { bla:"Test" }]
        });
    },
    teardown: function () {
        $.mockjaxClear();
    }
});
asyncTest("Test I get a mocked response from a service", function () {
    $.getJSON("/mockedservice", function (response) {
        ok(response, "There's no response!");
        equal(response.responseText.bla, "NotTest", "response was not Test");
        start();
    });
});

Note that there is no supporting JavaScript method that we're actually testing here. The $.mockjax call sets up Mockjax to respond to a (jQuery) ajax call to the /mockedservice URL with a response containing the string Test. The $.getJSON call is a regular jQuery ajax call, and this test simply verifies that we get that response back.

The test module has separate setup and teardown functions, which are called for each test, as you'd expect in an xUnit-type framework. The test must be an explicit asyncTest, which is resumed explicitly with the start() call inside the callback.

And that, as they say, is all there is to it! All in all, QUnit provides a simple interface for basic unit testing. I'm now looking into Jasmine for a more elaborate set-up, and a little better integration with the Maven build environment.
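For comparison, the trivial test from earlier would look something like this in Jasmine (a quick sketch using Jasmine's standard describe/it/expect style):

describe("GameScheduleClient.testing", function () {
    it("tells the truth", function () {
        expect(GameScheduleClient.testing.isTrue()).toEqual(1);
    });
});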