Setting up Selenium with Maven

In the team I’m currently working with, the need for regression testing was becoming obvious. The team is working on a web MVC-type framework, and has been running into the results of limited testing: quality wasn’t just low, it was unknown. The usual excuses were there: pressure to release, and interruptions by unplanned work. Usually, the unplanned work is caused by the same lack of quality that is probably caused by the pressure to release.

Since this is a web-intensive project where the front-end is not just important, but actually a core part of the software, we decided the first order of business would be to get Selenium tests running on the test/showcase applications. I’ve worked with Selenium before, but not with the GWT front-end technology, and not with Selenium’s WebDriver APIs. I thought I’d share the process, and some of the issues encountered. In this post, I’ll describe how to get Selenium running from a local Maven build while still being able to run the unit/micro-tests separately.

Integration into the build

The project is built using Maven, so the first step is to get the Selenium dependencies in there, and to run a simple Hello World test from the Maven build. The Hello World test was a minimal Selenium JUnit test that opened the test-showcase application, opened the hello world applet within it, and checked the window title.

package com.example.selenium;

import java.util.concurrent.TimeUnit;

import org.junit.After;
import org.junit.Before;
import org.junit.Ignore;
import org.junit.Test;
import static org.junit.Assert.*;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class HelloWorldTest {

	WebDriver driver = new FirefoxDriver();
	// WebDriver driver = new HtmlUnitDriver(true);

	String baseUrl = "http://localhost:7070";

	@Before
	public void setUp() throws Exception {
		driver.manage().timeouts().implicitlyWait(30, TimeUnit.SECONDS);
	}

	@Test
	public void testTestQafe() throws Exception {
		driver.get(baseUrl + "/test-showcaseapp/QAFEGWTWeb.jsp");
		
		// Open HelloWorld Applet
		driver.findElement(By.id("qafe_menu_applications|system_app")).click();
		driver.findElement(By.id("HelloWorld")).click();
		driver.findElement(By.id("window1|HelloWorld")).click();

		assertEquals("Hello World!", driver.getTitle());
	}

	@After
	public void tearDown() throws Exception {
		driver.quit();
	}
}

Getting this to compile requires adding the selenium-java dependency to your Maven pom.xml file:

	<dependencies>
		<dependency>
			<groupId>junit</groupId>
			<artifactId>junit</artifactId>
		</dependency>
		<dependency>
			<groupId>org.seleniumhq.selenium</groupId>
			<artifactId>selenium-java</artifactId>
			<version>2.4.0</version>
			<scope>test</scope>
		</dependency>
	</dependencies>

Note: I initially used the 2.3.0 version of Selenium. That version does not work with Firefox 6, though, which broke my tests for a bit when I installed that update.

At this point, as long as the application under test is running outside the build, just doing a mvn clean install will run your Selenium test, and you’ll see Firefox starting up, doing whatever you’ve defined in the test, and quitting again. Success!
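As an aside, the implicitlyWait(30, TimeUnit.SECONDS) call in setUp is what lets the findElement calls survive GWT’s asynchronous rendering: WebDriver keeps polling the DOM until the element appears or the timeout expires. Conceptually, the behaviour is roughly this (a simplified, hypothetical polling helper; not Selenium’s actual implementation):

```java
import java.util.concurrent.Callable;

// Hypothetical illustration of poll-until-timeout. This is NOT Selenium's
// actual implementation, just a sketch of the idea behind implicit waits.
final class Poller {

    // Keep calling 'probe' until it returns a non-null value, or until
    // 'timeoutMillis' has passed, sleeping 'intervalMillis' between tries.
    static <T> T pollUntil(Callable<T> probe, long timeoutMillis, long intervalMillis)
            throws Exception {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (true) {
            T result = probe.call(); // null means "not found yet"
            if (result != null) {
                return result;
            }
            if (System.currentTimeMillis() >= deadline) {
                throw new RuntimeException("Timed out waiting for element");
            }
            Thread.sleep(intervalMillis);
        }
    }
}
```

The important consequence: a findElement for an element that never appears doesn’t fail immediately, it fails after the full timeout, so a generous implicit wait makes genuinely failing tests slow.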

Next, I would really like to get this running without depending on the application being started outside of the build. I won’t go into the troubles you can have if you’ve tailored your build process for deployment on some proprietary application server (check out the cargo plugin, as a starting point). But luckily, in this case, the system started without any problems by simply doing:

mvn jetty:run

This starts jetty with the default configuration, which means port 8080. I don’t want that when we run this later on our build server, since that one uses port 8080 itself. More importantly, I don’t want to run these tests with every build. Integration tests using a deployed application are simply too slow to run with every build. Developers should be able to run them locally, but not when they’ve only added new unit tests (and any production code that came with those).

The way to deal with this is to use a maven profile where you can switch certain phases of the build on and off. Another way would be to create a separate maven (sub) project that is there only to run the integration test build, but that was not the direction I chose. So what does such a profile look like?


	<profiles>
		<!-- Profile to enable integration test execution. Enabled by passing
			the -Dintegration.test option when running the build -->
		<profile>
			<id>integration-test</id>
			<activation>
				<property>
					<name>integration.test</name>
				</property>
			</activation>
			<build>
				<plugins>
					<plugin>
						<groupId>org.mortbay.jetty</groupId>
						<artifactId>maven-jetty-plugin</artifactId>
						<version>${org.mortbay.jetty}</version>
						<configuration>										
							<webAppSourceDirectory>${project.build.directory}/${project.artifactId}-${project.version}</webAppSourceDirectory>
							<webXml>${project.build.directory}/${project.artifactId}-${project.version}/WEB-INF/web.xml</webXml>
							<scanIntervalSeconds>10</scanIntervalSeconds>
							<stopKey>foo</stopKey>
							<stopPort>9999</stopPort>
							<connectors>
								<connector implementation="org.mortbay.jetty.nio.SelectChannelConnector">
									<port>7070</port>
								</connector>
							</connectors>
						</configuration>
						<executions>
							<execution>
								<id>start-jetty</id>
								<phase>pre-integration-test</phase>
								<goals>
									<goal>run</goal>
								</goals>
								<configuration>
									<scanIntervalSeconds>0</scanIntervalSeconds>
									<daemon>true</daemon>
								</configuration>
							</execution>
							<execution>
								<id>stop-jetty</id>
								<phase>post-integration-test</phase>
								<goals>
									<goal>stop</goal>
								</goals>
							</execution>
						</executions>
					</plugin> 
					<plugin>
						<groupId>org.apache.maven.plugins</groupId>
						<artifactId>maven-surefire-plugin</artifactId>
						<configuration>
							<!-- Skip the normal tests, we'll run the integration-tests only -->
							<skip>true</skip>
							<excludes>
							</excludes>
							<includes>
								<include>com/example/selenium/**</include>
							</includes>
						</configuration>
						<executions>
							<execution>
								<phase>integration-test</phase>
								<goals>
									<goal>test</goal>
								</goals>
								<configuration>
									<skip>false</skip>
									<includes>
										<include>com/example/selenium/**</include>
									</includes>
									<excludes>
									</excludes>
								</configuration>
							</execution>
						</executions>
					</plugin>
				</plugins>
			</build>
		</profile>
		<profile>
			<!-- Added a micro/unit test profile that is run by default
				 so that we can override the surefire excludes in an
				 integration test build -->
			<id>micro-test</id>
			<activation>
				<property>
					<name>!integration.test</name>
				</property>				
			</activation>
			<build>
				<plugins>
					<plugin>
						<groupId>org.apache.maven.plugins</groupId>
						<artifactId>maven-surefire-plugin</artifactId>
						<configuration>
							<excludes>
								<exclude>com/example/selenium/**</exclude>
							</excludes>
						</configuration>
					</plugin>
				</plugins>
			</build>
		</profile>
	</profiles>	

That’s quite a lot of XML, but in short the integration-test profile does the following:

  • Activate the integration-test profile when the -Dintegration.test property is set
  • Start jetty (jetty:run) before doing the integration-test phase
  • Stop jetty after the integration-test phase
  • Do NOT run any unit tests if the integration-test profile is enabled
  • DO run all the integration tests in the package com.example.selenium when the integration-test profile is enabled

Which is great, and works. But when we then run the normal build, the Selenium tests are still run (and fail). So that’s where the micro-test profile in the snippet above comes in: it excludes the Selenium tests from the normal surefire test run.

So now if we do mvn clean install all our normal micro-tests are run, and if we do mvn -Dintegration.test clean install the micro-tests are skipped and the selenium tests are run.

Now we are ready to write a real test, but I’ll save that for a future post.

Adventures in Rework

I came across this post by Martin Fowler, on the Strangler Application pattern, and its accompanying paper. This brought back memories of some of my own adventures in rework, some fond, others not so much. In all cases, though, I think they were very valuable lessons on what to do and not to do when reworking existing systems. No reason not to share those lessons, especially as some of them were rather painful and expensive to learn. The paper linked above is a success story; mine are more the kind you tell your children about to keep them away from dysfunctional projects. This is not because these projects were done by horrible or incompetent people, or completely ineffective organisations. They weren’t. But sometimes a few bad decisions can stack up, and result in the kind of war stories developers share after a few beers.

I’ll wait here until you’re back from the fridge.

When talking about rework, note that I’m not calling these ‘legacy systems’; as you’ll see, in a few of these cases the ‘legacy’ system wasn’t even finished before the rework began.

‘Rework’ is simply a name for rebuilding existing functionality. It is distinct from Refactoring in that it is usually not changing existing code, but replacing it. Another distinction is one of scope. Refactoring is characterised by small steps of improvement, while Rework is about replacing a large part of (or an entire) system. Or in simpler terms:

Rework = bad, Refactoring = good

One problem is that very often people talk about refactoring when they mean rework, giving the good practice of refactoring a bad name. When a developer or architect says ‘We need to do some refactoring on the system, we think it will take 4 weeks of work for the team’, what they are talking about is rework. Not surprisingly, many managers now treat refactoring as a dirty word…

When do you do rework?

Rework is not always bad. There can be good reasons to invest in re-implementation. Usually, maintainability and extensibility are part of those reasons, at least on the technical side. This is the type of rework that is similar to refactoring, in that there is no functional change. This also means it is work that does not give any direct business value. From the point of view of the customer, or the company, these kinds of changes are ‘cost’ only.

Rework can also be triggered by changes in requirements. These might be functional requirements, where new functionality can’t easily be fitted into the current system. Usually, though, they are non-functional ones. Such as: we need better scalability, but the platform we’re using doesn’t support that. Or: we need to provide the same functionality, but now as a desktop application instead of a web application (or vice versa).

Rework is also sometimes triggered by policy, such as “we’re moving all our development over to…”. And then Java, Scala, Ruby, ‘The Cloud’, or whatever you’re feeling enthusiastic about at the moment. This is not a tremendously good reason, but can be a valid one if you see it in the context of, for example: “We’re moving all our development over to Java, since the current COBOL systems are getting difficult to maintain, simply because we’re running out of COBOL programmers.”

Adventure number one

This was not the first piece of rework I was involved with, but a good example of the importance of always continuing to deliver value, and of keeping up trust between different parties in an organisation. No names, to protect the innocent. And even though I certainly have opinions on which choices were good and which were not, this is not about assigning blame. The whole thing is, as always, the result of the complete system in which it happens. The only way to avoid such situations is complete transparency, and the trust that hopefully results from it.

A project I worked on was an authorisation service for digital content distribution. It could register access rights based on single-sale or subscription periods. This service in the end was completely reworked twice, with another go in the planning. Let’s see what happened, and what we can learn from that.

The service had originally been written in PHP, but was re-created from scratch in Java. I don’t know all the specifics of why this was done, but it involved at least an expectation of better performance, and there was also a company-wide goal of moving to Java for all server-side work. The non-functional requirements and policy from above.

This was a complete rebuild: everything, including the database structures, was created new from scratch. There was a big data migration, and extensive testing to ensure that customers wouldn’t suddenly find themselves with missing content, or subscriptions cut short by years, months or minutes.

Don’t try to change everything at once

A change like that is very difficult to pull off. It’s even more difficult if the original system has a large amount of very specific business logic in code to handle a myriad of special cases. Moreover, since the reasons for doing the rework were completely internally directed, the business side of the company didn’t have much reason to be involved in the project, or much understanding of the level of resources being expended on it. It did turn out, though, that many of the specific cases were unknown to the business. Quickly growing companies, and all that…

[sidebar: The Business]
I use the term ‘The Business’ in this post regularly. This is intentional. There is a valid argument, made often in the Agile community, that talking about ‘the business’ is an anti-pattern indicating overly separated responsibilities indicative of silo thinking. And I think that in a lot of cases this is true, though sometimes you just need a word…
In this case, there actually was a separation. There were some silos. And the use of the word is accurate.
And I couldn’t think of another term.
[/sidebar]

Anyway, the project was late. Very late. It was already about nine months late when I first got involved with it. At that point, it was technically a sound system, and was being extensively tested. Too extensively, in a way. You see, the way the original system had so many special cases hard-coded was in direct conflict with the requirement for the new system to be consistent and data-driven. There was no way to make the new implementation data-driven and still get exactly the same results as the old one.

Now, this should not be a problem, as long as the business impact is clear, and the business side of the organisation is involved closely enough to make clear decisions, early on, about which deviations from the old system are acceptable and which are not. A large part of the delays were simply due to that discussion not taking place until very late in the process.

As with all software development, rework needs the customer closely involved

In the end, we did stop trying to work to 100% compliance, and got to sensible agreements about edge-cases. Most of these cases were simply that a certain subset of customers would have a subscription end a few days or weeks later, with negligible business impact. They still caused big delays in the project delivery!

What problems to fix is a business decision

Unfortunately, though the system went live eventually, this was with a year’s delay. It was also too late. On the sales and marketing side, a need had grown to not only register subscriptions for a certain time-period, but also to be able to bill them periodically (monthly payments, for instance). Because the old system hadn’t been able to do this, neither could the new one. And because the new system had been designed to work very similarly to the old one, this was not a very straightforward piece of functionality to add.

If you take a long time to make a copy of an existing system, by the time you’re done they’ll want a completely different system

Of course, it was also not a completely impossible thing to add, but we estimated at the time that it would take about three months of work. And that would be *after* the first release of the system, which hadn’t taken place yet. That would bring us to somewhere around October of that year, while the business realities dictated that this type of new product would have the most impact if released to the public by early September.

So what happens to the trust between the development team and the customer after a late release that doesn’t give the customer any new functionality? And if the customer, after not getting any new functionality for a full year, then has a need and hears that he’ll have to wait another six months before he can get it? He tells the development team: “You know what? I’ll get someone else to do it!”

Frustrate your customer to your peril!

So the marketing department gets involved in development project management. And they put out a tender to get some offers from different parties. And they pick the cheapest option. And it’s going to be implemented by an external party. With added outsourcing to India! Such a complex project set-up that it *must* work. Meanwhile, the internal development organisation is still trying to complete the original project, and is staying out of this follow-up project.

Falling out within the organisation means falling over of the organisation

Now this new team is working on this new service which is going to be about authorisation and subscriptions. They talk to the business, and start designing a system based on that (this was an old-school waterfall project). Their requirements focus a lot on billing, of course, since that is the major new functionality relative to the existing situation. But they also need something to bill, and that means the new system must also support subscriptions without a hard end-date, which are renewed with every new payment. The existing system doesn’t support that, which is a large part of the three-month estimate we were talking about.

Now a discussion starts. Some are inclined to add this functionality to the old system, and make the new project about billing and payment tracking. But that would mean waiting for changes in the existing system. So others are pushing to make the new system track the subscription periods. But then we’d have two separate systems, and we’d need to check in both to see if someone is allowed access to a specific product. Worse, since you’d have to be able to switch from pre-paid (existing) to scheduled payments, there would be logic overlapping those two.

Architecture is not politics. Quick! Someone tell Conway!

All valid discussions on the architecture of this change. Somehow, there was an intermediate stage where both existing and new system would keep track of everything, and all data would magically be kept in sync between those two systems, even though they had wildly different domain models for subscriptions. That would have made maintenance… difficult. So the decision was to move everything into the new system, and keep the old system around only as a stable interface toward the outside world (i.e. a façade talking to the new system through web-services, instead of to its own database).
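The façade set-up can be sketched as follows. This is a hypothetical illustration (the interface and class names are mine, not the project’s): the old system keeps its public interface unchanged, but delegates to the new system instead of querying its own database.

```java
// Hypothetical sketch of the façade set-up: the old system's interface
// survives unchanged, but the answers now come from the new system.
interface AuthorisationService {
    boolean hasAccess(String customerId, String productId);
}

// Stand-in for the web-service client talking to the new system.
class NewSystemClient {
    boolean isSubscribed(String customerId, String productId) {
        return true; // in reality: a web-service call to the new system
    }
}

// The old system, reduced to a stable interface toward the outside world.
class LegacyFacade implements AuthorisationService {
    private final NewSystemClient newSystem;

    LegacyFacade(NewSystemClient newSystem) {
        this.newSystem = newSystem;
    }

    @Override
    public boolean hasAccess(String customerId, String productId) {
        // No local database lookup any more: delegate to the new system.
        return newSystem.isSubscribed(customerId, productId);
    }
}
```

The point of the construction is that none of the existing clients need to change: they keep calling the old interface, and only the façade knows where the data really lives.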

So here’s a nice example of where *technically* there isn’t much need for rework. There are changes needed, but those are fairly easy to incorporate into the existing architecture. We’re in a hurry, but the company won’t fall over if we are late (even if we do have to delay introducing new payment methods and product types). But the eroded trust within the company created a preference to start from scratch, instead of continuing from a working state.

Trust is everything

Now for the observant among you: yes, some discussion was had about how we had just proven that such a rework project was very complex, and last time took a lot of time to get right. But the estimates of the external party indicated that the project was feasible. One of the reasons they thought this was that they’d talked mostly to the sales side of the organisation. This is fine, but since they didn’t talk much to the development side, they really had no way of knowing about the existing product base, and its complications and special cases. Rework *should* be easier, but only if you are in a position to learn from the initial work!

If you do rework, try to do it with the involvement of the people who did the original work

It won’t come as a big surprise that this project did not deliver by early September as was originally intended. In fact, it hadn’t delivered by September of the following year. In that time the original external party had been extended and/or replaced (I never quite got that clear) by a whole boatload of other outsourcing companies and consultants. The cost of the project skyrocketed. Data migration was, of course, again very painful (but this time the edge-case decisions were made much earlier!)

A whole new set of problems came from a poorly understood domain, and from having no access during development to all of the (internal) clients that were using the service in different ways. This meant that when integration testing started, a number of very low-level decisions on the domain of the new application had to be reconsidered. Some of those were changed; others resulted in work-arounds in all the different clients, since the issues were making a late project later.

Testing should be the *first* thing you start working on in any rework project. Or any project at all.

Meanwhile, my team was still working on the existing (now happily released) system: maintenance, new features, and the new version that ran against the new system’s web-services. And they were getting worried. Because they could see an army of consultants packing up and leaving them with the job of keeping the new system running. And when it became clear that the intention was to do a big-bang release, without any way to do a roll-back, we did intervene. One of the developers created a solution to pass all incoming requests to both the old and the new systems, and do extensive logging on the results, including some automated comparisons. Very neat, as it allowed us to keep both systems in sync for a period of time, and see if we ran into any problems.

Always make a roll-back as painless as possible

This made it possible to have the new system in shadow-mode for a while, fix any remaining issues (which meant doing the data-migration another couple of times), and then do the switch painlessly by changing a config setting.
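The shadow-running idea can be sketched roughly like this. It is a hypothetical reconstruction with made-up names (the real solution worked on incoming service requests and did far more elaborate logging and comparison), but it shows the essential shape: serve the old system’s answer, try the new system on the side, and count and log every disagreement.

```java
import java.util.function.Function;

// Hypothetical sketch of shadow-running: send each request to both the
// old and the new system, serve the old system's answer, and log any
// mismatch for later analysis.
final class ShadowDispatcher<Q, R> {
    private final Function<Q, R> oldSystem;
    private final Function<Q, R> newSystem;
    private int mismatches = 0;

    ShadowDispatcher(Function<Q, R> oldSystem, Function<Q, R> newSystem) {
        this.oldSystem = oldSystem;
        this.newSystem = newSystem;
    }

    R handle(Q request) {
        R oldResult = oldSystem.apply(request); // the answer we trust
        try {
            R newResult = newSystem.apply(request);
            if (!oldResult.equals(newResult)) {
                mismatches++;
                System.err.println("Mismatch for " + request
                        + ": old=" + oldResult + ", new=" + newResult);
            }
        } catch (RuntimeException e) {
            // The shadow system must never break the live path.
            mismatches++;
            System.err.println("New system failed for " + request + ": " + e);
        }
        return oldResult; // callers only ever see the old behaviour
    }

    int mismatchCount() {
        return mismatches;
    }
}
```

Once the mismatch count stays at zero for long enough, switching over (or rolling back) is a matter of choosing which system’s answer to return.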

Make roll-back unnecessary by using shadow-running and roll-forward

So in the end we had a successful release. In fact, this whole project was considered by the company to be a great success. In the sense that any landing you can walk away from is a good one, this is of course true. For me, it was a valuable lesson, teaching among other things:

  • Haste makes waste (also known as limit WIP)
  • Don’t expect an external supplier to understand your domain, if you don’t really understand it yourself
  • Testing is the difference between a successful project and a failed one
  • When replacing existing code, test against live data
  • Trust is everything

I hope this description was somewhat useful, or at least entertaining in a schadenfreude kind of way, for someone. It is always preferable to learn from someone else’s mistakes, if you can… I do have other stories of rework, which I’ll make a point of sharing in the future, if anyone is interested.

Reading up: 5 Books To Read If You Want To Really Understand Agile

Last year, I posted an overview of some books every programmer should read. Those still stand, and I find more and more examples where I would like to re-iterate the advice to read those books.

This post is about other books, though. Books related to Agile and Lean principles and practices. There are many books on those subjects, and quite a number of those have become standard works. I think you’d be hard-pressed to find an Agilist who has not read Agile Estimating and Planning, User Stories Applied, Agile Project Management with Scrum, etc. If you’re lucky, they’ll even know Agile Software Development: Principles, Patterns, and Practices.

Those are on all the lists anyway, and thus not very interesting for me to write about. So I’ll do it a little differently: this is a list of a few books that, at different moments, have really helped me take my understanding of Agile to new levels.

Let’s start with the basics. I spent a little time on an XP team at the start of my career, but I didn’t really appreciate the significance of that at the time. After all, I didn’t really have anything to compare it to! So it took me a while to get back on that track. Time mostly spent wondering why there was so little testing going on in the teams I found myself on, and trying to fix that. At one point, though, the company I was working in was changing over to using Scrum, and I was first in line to go get trained, and get going with it.

Scrum and XP from the Trenches – Henrik Kniberg

Before the training, I was of course reading up on the material, finding a lot of familiar concepts, and being very happy with what I was finding. The most useful text I encountered was Henrik Kniberg’s ‘Scrum and XP from the Trenches’. Useful, because it is short, free (to download, though you can buy a hard copy nowadays), incredibly practical, and emphasizes the combination of Scrum and XP. I’m very much convinced that that combination is crucial to get to any kind of success with Scrum.

So if you’re new to Scrum (or even if you’re not, but somehow missed this book), go through this book to get a quick but very thorough overview of what to do. And what not. I wouldn’t do everything exactly as Henrik describes in the book, but then, neither does he, anymore! Scrum is about learning, and adapting, so it would be surprising (and worrying!) if the people who work with it didn’t learn how to do it better as time goes by.

Scaling Lean & Agile Development – Craig Larman and Bas Vodde

Fast-forwarding some time, and skipping a number of books, we come to a time when we were discussing options for organising the development effort around a large-ish project that would occupy about half of our roughly 120-person department. Coincidentally (or perhaps not) another Scrum Master and I had both been reading the sample chapter on Feature Teams from Larman and Vodde’s book ‘Scaling Lean & Agile Development’. The subject matter of that chapter fit perfectly with our new plans, and we did (eventually…) end up with a feature-team set-up as described in the book. The clear way in which everything is explained, and the immediate practical value of the advice (Try… / Avoid…), made this immediately valuable (so go download and read that sample chapter, already!)

Scaling Lean and Agile Development - Book cover

Of course, finding something good like that meant buying the book as well. It is one of the rare non-fiction books that I couldn’t put down, so I read it in one weekend. For me, even though I bought it for use in an Agile environment, this book was an eye-opening introduction to the concepts of Lean and of Systems Thinking. The book starts off with a five-chapter section on ‘Thinking Tools’, including Systems Thinking, Lean Thinking and Queueing Theory. These concepts are explained very clearly, and if they’re new to you, this will make quite a lot of the issues you see around you at work and in organisations click into place. The ‘Organizational Tools’ section then goes into more practical detail on how to arrange your organisation (Teams, Feature Teams) and your work (Requirement Areas) in such a way that you can do Large-Scale Scrum.

The authors’ idea of large-scale is 100–500 people working on a single product. This is larger than the situations I’ve worked with (roughly 60 people max., as I said above), but not only does the advice (and thinking tools!) in this book work just as well for two scrum teams, it will also certainly help your work with single teams, or with multiple teams in an organisation where each team works on a different product.

A companion book has been released (Practices for Scaling Lean & Agile Development) which, though less horizon-expanding for me, gives more concrete practical advice based on the ideas in the first book.

Leading Lean Software Development – Mary and Tom Poppendieck

Now, as you can imagine, my appetite was whetted on the subject of Lean. Lean came out of studying the ways of working at Toyota, which in turn were inspired by the work of W. Edwards Deming. So if you’re looking, there’s a lot to read on this subject. The people who’ve done the most to bring the concepts of Lean out of the production context of Toyota (and the general management context of Deming) are Mary and Tom Poppendieck. They’ve written three books dealing with Lean in software development, but the first one that I read was the last one released: ‘Leading Lean Software Development’.

Leading Lean Software Development - Book Cover

This book deals with the different aspects of managing software development, and is valid whether you want to call your way of working Lean or Agile. All the advice in here is great, but the way in which it is given is even better! The whole book is set up in such a way that each subject is discussed from the point of view of the roles that are impacted by the new way of working.

This means that the chapters on Reliable Delivery are framed from the point of view of a Project Manager, with both examples and language that fit that role. The chapters focused on Technical Excellence take into account the views and experiences of Software Developers and Architects, tracing ways of working all the way back to Edsger Dijkstra, and then building them up again to explain how modern practices such as TDD are the closest we’ve yet come to some of the ideas of early computer science.

Because of that focus on explaining things from different points of view, the message of the book is much stronger than it would otherwise be. It also gives you the necessary tools to discuss the sometimes radically different views with people in various roles. Useful if you are going to be in a coaching role, as I was just switching to at the time I read this.

Next to Reliable Delivery (planning and flow) and Technical Excellence (how to build things), the book also has chapters covering Systems Thinking (seeing, and optimising, the whole), Relentless Improvement, Great People (finding, growing and keeping them) and Aligned Leaders (for a common goal, and in moving to lean/agile).

Like the Larman and Vodde book, for me each chapter brought instant recognition of the problems described, and gave continuous affirmation of a way of working that I find instinctively correct.

Kanban – David J. Anderson

Kanban is an Agile system for managing work, in the same way Scrum is. Kanban is more directly related to Lean than Scrum is, and is considered to be the Lean software development method. In the context of these books, Kanban is a concrete implementation of the ideas of Lean encountered in the Larman/Vodde and Poppendieck books. That is also how the book reads: very concrete, with step-by-step guides to implementing Kanban. It also gives plenty of background on why the recommended practices work, of course, but this is secondary to the practical side. The author, David J. Anderson, is the inventor of the Kanban Method, and this is the book that defines it.

Kanban - Book Cover

Since this article’s title mentions Agile, it may be surprising that so much of the material in this list is about Lean ideas. I’ve written before on this blog that I think the similarities between Agile and Lean are much more important than the differences. Quite a few of the ideas are the same. Certainly some of the tools are the same (for instance, a Scrum board is nothing more than a simple kanban board). Reading how those tools are used in other contexts, and why they work, can only improve your skills in using them in any context. Also, it’s important to avoid becoming a one-trick pony. There are many situations where (for instance) a Kanban system will be much more appropriate than using Scrum would be. One example is the simple Kanban I helped set up for our recruitment team.

Back to the book. As I said, this book stays very close to the day-to-day reality of improving software development processes. The subtitle is ‘Successful Evolutionary Change for Your Technology Business’, and the book describes why you should use the Kanban approach to guide a change process. The approach is much more incremental than a Scrum implementation usually is, which in some situations can mean the difference between failure and success. Of course, the book also goes into a lot of detail on the mechanics of the parts of Kanban, how to use them in various situations, etc. Not surprising for the book that defines the process, written by its inventor.

Management 3.0 – Jurgen Appelo

Management 3.0 - Book Cover

This particular book I didn’t want to like (probably because of the title), but a friend ‘lent’ me a pdf copy, and after reading the first few chapters I had to cave in and order the book.

I haven’t actually finished it yet (this was last week), but am very much enjoying what I’ve read so far. The contents as well as a nicely irreverent style of humour make this a great read.

The reason the contents are so good is that this book approaches management from the point of view of complex systems theory. Complex Adaptive Systems are mentioned often enough as a contributor to the way Scrum is set up, but very rarely does anyone go into much detail on the how and why of that. Even rarer is this book’s look at how those concepts can and should be used for management. Not just Project Management per se (the Poppendieck book above does a nice job of that as well), but also people management and teams.

If you’d like a quick introduction on why complex systems theory is useful for management of work, take a quick look at this video by Dave Snowden on how to organise a children’s party:

http://www.youtube.com/watch?v=Miwb92eZaJg&rel=0

Snowden and his company Cognitive-Edge are doing interesting work in this area, and I look forward to hearing him speak at the Lean and Kanban conference in Antwerp in October.

The book uses the grounding in complexity to discuss all the subjects that a manager should take care of (but often doesn’t, or not very effectively). Subjects covered include How To Energise People, The Basics of Self-Organisation, How To Empower Teams, How To Develop Competence and How To Grow Structure. And quite a few more, but I haven’t completely read it yet. What I have read is both directly applicable, but more importantly has given me new tools to think about my work.

That’s it for this post. It can be hard, with so many great books out there, to pick just a few that you should read. I’ve picked a few that gave me new thinking tools over the past five years, and presented them in the chronological order in which I read them. What books should really be in this list according to you? Or do you think other books cover these kinds of subjects in an even better way? Let me know!

Performance Reviews

One of the things that makes my commute to work enjoyable (next to the BBC Friday night comedy hour) is the LeanBlog podcast, in which Mark Graban talks to different people about Lean. The people he talks to have varying backgrounds, and the general quality of the podcast is very high.

One of the podcasts I listened to a while back was with Samuel Culbert, who was talking about performance reviews. I was reminded of that episode tonight while talking with some ex-colleagues about the way performance reviews were done in our old company. We weren’t very enthusiastic.

Now you might think that that is not too strange. People are very rarely enthusiastic about their performance reviews. The way most appraisal systems are set up is that only a small percentage of people is allowed to receive a very high score. The idea is that there are a very few high-performers that should get an extra pay raise or bonus (or both), but that their occurrence should be very rare. The result is that for most people, performance review meetings contain lots of unpleasant surprises.

Why? Well, one reason is that managers are only allowed a quota of high performers. This certainly was the case in the company I was talking about with my ex-colleagues, and going against that got me into some trouble at the time. So the team that worked fantastically together, and managed to bring a very difficult project to a successful conclusion against the odds, saving the company millions? That team only gets one person on the highest level pay-raise. The others, who worked just as hard, and were just as indispensable for the results, get a disappointment. Not that they don’t get a raise, but for those people the manager needs to emphasise the things that they did wrong, to be able to defend them not getting the top score, despite their success.

Lucky for the manager/team-lead, there is always something wrong to be found! Lucky for the company, they hire completely at random, so their bell curve of performance always works out… Actually, Culbert mentions that these distributions usually mean that about 70% of the people should be marked as average. That is going to be difficult, isn’t it?

But the appraisal is necessary, right? To ensure people stay motivated. To ensure they’re focused on improving. To set explicit targets, both for their work (because you can’t expect them to understand what’s important for their day-to-day work), and for their personal development (because… sorry can’t seem to think of a reason).

Mr. Culbert makes some good points in the podcast, calling performance reviews “corporate theatre,” as well as a “sham,” a “facade,” “immoral” and “intimidating”. Unsurprisingly, since he’s written a book called “Get rid of the performance review”.

His latest book is apparently called “Beyond Bullshit”, which sounds like another one I should pick up.

My own view on this practice is aligned with Culbert’s: at best, a performance review doesn’t add any value; mostly, it is harmful for people and company.

If the feedback you as a manager give during a review is in any way new to the person you’re reviewing, then you’ve failed as a manager to do your job (which should have been helping that person to do their work as well as possible, not collecting information on how they do that work for later use). And if it isn’t new, why have the meeting at all?

If the review is about verifying whether targets have been met, or to set new targets, you really should read Dan Pink’s Drive about all the ways the use of targets is hurting motivation, and thus performance, of your people.

Agile is Rock ‘n’ Roll

[EDIT: Thanks to Hubert Iwaniuk, there’s now a playlist to accompany your reading of this post!]

[EDIT: There’s now also an XP version of this: XP Is Classic Rock]

I’ve had a number of occasions where people, usually working in a very strict, waterfall, environment, have voiced the opinion that ‘all that agile stuff’ is just an excuse to go ‘back to’ cowboy programming and rock ‘n’ roll development.

My normal response to this is something along the lines of ‘Au contraire! Working agile means you need to be more disciplined, not less!’ Which is true, of course, but not always quite in the same sense that they’re talking about.

Recently, though, it has occurred to me that in some ways, Agile really is Rock ‘n’ Roll! Let’s take a look at some examples:

Estimation and Planning

To quote the well respected experts on backlog prioritisation and de-scoping, Jagger and Richards: “You can’t always get what you want, but if you try, sometimes, you get what you need.”

And indeed this is the basis of Agile (release) planning: We take into account what the customer wants at the start of the project (and place that on our backlog), but make it clear that we do not know for sure yet what will be delivered eventually. We do go back to the customer frequently to determine if he’s happy with what he has so far, and whether he actually needs anything more (or something different). This gets the customer what he needs, or as much of it as possible within the time frame.

Team work

Whether you’re more partial to the original Beatles version, or the Aerosmith cover, the message to ‘Come Together’ is fairly core to our agile values. And from the rest of the lyrics it seems fairly possible that they are talking about old-style hackers and agile coaches:-)

Time Boxing

One tool we’ve certainly embraced in the Agile community is time boxing. And this really goes back to one of the originals of Rock ‘n’ Roll: Rock Around The Clock. As the song says: “We’ll have some fun when the clock strikes one”, and keep going, having a good time: “When it’s eight, nine, ten, eleven too, I’ll be goin’ strong and so will you.” But we are strict in an almost Cinderella-like fashion, and “When the clock strikes twelve, we’ll cool off then”, before we start another time-box.

Ch… Ch… Changes

I still don’t know what I was waiting for
And my time was running wild
A million dead-end streets
Every time I thought I’d got it made
It seemed the taste was not so sweet

If there was ever a song that was completely and obviously inspired by the need for Agile development, this is it. Mr. Bowie is even considerate enough to include the importance of testing: “how the others must see the faker, I’m much too fast to take that test”.

As you can see in the quoted lyrics above, we’re dealing with a man tired of waiting long periods of time for something which isn’t quite what he expected. Again, and again, this happens. A sad story, but all too familiar.

Transparency

We have to remember that the reasons for choosing an Agile way of working is not always one of love at first sight. The choice is often the last one after many disappointing previous liaisons. As Freddie Mercury sings in the last of our Agile Playlist, “I want to break free from your lies, you’re so self-satisfied I don’t need you”, right before he breaks the chains of his waterfall process.

But this same song also contains a warning: even if we deliver all we promise with new ways of working, the temptation to go back to the familiar ways of earlier days remains. And people still may ‘walk out the door’, because they ‘have to be sure’. Unfortunate, but unavoidable.

Take me to the other side

Not all songs are about happy things, though. It would of course be possible to note that Waterfall has some very nice tunes of its own. People would note the micro-management displayed in Every Breath You Take, and Walk This Way. They’d point to the wishful planning complained about in Won’t Get Fooled Again. And Billy Joel’s Allentown, though written for a different industry, also deserves mention in this list. They might even mention Stairway to Heaven, which symbolically represents the Waterfall process’ stairway-like structure, and lyrically describes the results (“When she gets there she knows, if the stores are all closed with a word she can get what she came for”, and “There’s a feeling I get when I look to the west, And my spirit is crying for leaving.”)

Are we sure heaven’s this way?

For those people, it might be good to remember that there must be 50 ways to leave your waterfall project.

The problem is all inside your head
She said to me
The answer is easy if you
Take it logically
I'd like to help you in your struggle
To be free
There must be fifty ways
To leave your waterfall project

She said it's really not my habit
To intrude
Furthermore, I hope my meaning
Won't be lost or misconstrued
But I'll repeat myself
At the risk of being crude
There must be fifty ways
To leave your waterfall project
Fifty ways to leave your waterfall project

[CHORUS:]
You just interact, Jack
don't make a new plan, Stan
Continuously deploy, Roy
And you test regularly
Create a little trust, Gus
You do need to discuss, much!
Work transparently, Lee
And get yourself free

5 ways to make sure your sprint velocity is a useless number

Velocity always seemed a nice and straightforward concept to me. You measure how much you get done in a certain period of time, and use that to project how much you’ll probably get done in the same amount of time in the future. Simple to measure, enables empirical planning, simple to use in projections and planning. Measuring influences the work, though.

The concept of velocity is almost always used, even within companies that are still new to an Agile way of working. But simple though it seems, there are many ways velocity can lose its usefulness. I happen to think velocity is one of the better metrics, but if you’re not measuring it correctly, or misinterpreting the resulting numbers, it can become a hurdle to good planning.

Let’s have a look at some of the ways velocity doesn’t work, and how to avoid them.

Not a Number

First of all, velocity is not just a number. It’s always a range, or an average with error margins. Why is this important? Because if you do your planning based on a single number, without taking into account the normal variation in productivity that is always there, you can be sure your planning is not giving you a realistic idea of what will be done when.

In other words: realise that your planning is an estimation of when you think a certain set of work can be done. An estimation should always include uncertainty. That uncertainty is, at least partially, made explicit by taking the variance of your velocity into account.

Velocity charted with a confidence level around it

The simplest way to get pessimistic and optimistic values for velocity is to take the average of the three lowest and the three highest of the last ten sprints. Another way is to use a mathematical confidence level calculation. I don’t actually think there’s much difference between the two. Charting velocity in this way can get you graphs such as the one shown above.
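To make that calculation concrete, here is a minimal sketch of the ‘three lowest / three highest’ approach in Java. The class and method names are purely illustrative, not from any existing tool:

```java
import java.util.Arrays;

public class VelocityRange {

    // Pessimistic velocity: average of the three lowest of the last ten sprints.
    static double pessimistic(int[] lastTenVelocities) {
        int[] sorted = lastTenVelocities.clone();
        Arrays.sort(sorted);
        return (sorted[0] + sorted[1] + sorted[2]) / 3.0;
    }

    // Optimistic velocity: average of the three highest.
    static double optimistic(int[] lastTenVelocities) {
        int[] sorted = lastTenVelocities.clone();
        Arrays.sort(sorted);
        int n = sorted.length;
        return (sorted[n - 1] + sorted[n - 2] + sorted[n - 3]) / 3.0;
    }

    public static void main(String[] args) {
        int[] velocities = {18, 22, 15, 25, 20, 19, 23, 17, 21, 24};
        System.out.println("pessimistic: " + pessimistic(velocities));
        System.out.println("optimistic:  " + optimistic(velocities));
    }
}
```

Plotting these two values alongside the running average gives you a band like the one in the chart above.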

Then, of course, you have to actually use this in your release planning.

Release forecast using variation in velocity

Not an average

I know I just said velocity is an average, but this is different. Another way in which the averaging of points finished in a sprint can cause problems is if it doesn’t actually mean ‘points finished in a sprint’. Quite often, I’ve met teams that have a lot of trouble finishing stories within their sprints. The causes of that can be many, with stories simply being too large chief among them. Sometimes these teams have correctly realised that if they’ve only finished part of a story, they don’t get partial ‘credit’ for this in the sprint’s velocity. But then they do take credit for the full number of story points for the entire story in the subsequent sprint, once they’ve actually finished the story.

Average?

So here we can see what happens then. The average is around 20. So should this team plan 20 story points’ worth of work into their next sprint? Probably not a good idea, right? If the variation in velocity is very high, there is usually a problem.

What one could do in this instance is re-estimate any unfinished stories, so that only the work actually done in the later sprint for those stories is counted for those sprints. Yes, you’ll ‘lose’ some points that you estimated, which then don’t seem to count anywhere as work done. But you’ll immediately get a more realistic figure for your velocity, and an immediate reason to make those stories smaller, as they simply won’t fit in a sprint if the velocity is realistic.

For release planning, you’ll not be depending on a weird fluctuation of velocity any more, but on a more dependable figure with less variation.
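With some made-up numbers, the effect of this re-estimation might look like the sketch below. All values are hypothetical: an 8-point story started in sprint 2 but finished in sprint 3 (and again in sprint 4), once with full credit taken on completion, once re-estimated to count only the roughly 3 points of work actually done in the finishing sprint:

```java
public class ReEstimation {

    // Plain average of a series of sprint velocities.
    static double average(int[] points) {
        int sum = 0;
        for (int p : points) sum += p;
        return (double) sum / points.length;
    }

    public static void main(String[] args) {
        // Full 8 points credited in the finishing sprint: wild swings.
        int[] fullCredit = {12, 28, 11, 27};
        // Re-estimated: only work actually done per sprint is counted.
        int[] reEstimated = {12, 23, 11, 22};

        System.out.println("full credit avg:   " + average(fullCredit));
        System.out.println("re-estimated avg:  " + average(reEstimated));
    }
}
```

The re-estimated series has a slightly lower average, but far less variation, which is exactly what makes it usable for planning.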

Variable Sprint Length

If you change the length of your sprints around, velocity will not be very useful. But, I can hear you say, we can just calculate the expected velocity for a 2 week sprint by taking two-thirds of the velocity of a 3 week sprint! That would be nice, but unfortunately it doesn’t work like that. The regular rhythm of sprints creates certain expectations within the team. The team learns how much it can take in, in such a period. Also, the strict time-box of an agreed sprint length is very useful in bringing existing limitations into view.

Bring problems to the surface

The famous ‘lowering the waters brings the rocks to the surface‘ picture of lean waste elimination is a useful way to view this.

Estimating In Time

If someone asks me how long I’m going to take to do a particular piece of work, I’ll normally answer saying it will take a certain amount of time. This is quite natural, and answers fairly directly the question posed. When someone asks me when I can have such a particular piece of work ready, again I could answer by giving a specific date and time.

If someone asks me how much work I can do in a work week, though, I might be tempted to answer: “40 hours”. And I would probably be right! And if I would then, at the end of that week, look back and see how much time I actually worked, it would probably not be too far off those 40 hours. But I wouldn’t learn much from that observation.

By using the concept of ‘Story Points’, an abstract measure for estimation, we can still estimate the effort for a certain piece of work. And if we then give other pieces of work an estimation in Story Points, relative to the story we already estimated, we have created a new measurement system! So for instance, if ‘Allowing a user to log in’ is 3 Story Points, then ‘sending a user a password reminder’ could be 5, if it’s about (but not quite) twice as big.

Of course, in the end you will want to relate those abstract Story Points back to time, since you will often want to determine when you can release a bit of software. But you don’t estimate that, you measure it: it turns out that in one sprint, we can do about 12 Story Points, give or take a few. So if that’s the case, we will be able to release functionality X by, at the latest, date Y (see the release planning graph earlier).
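That measurement-based projection can be sketched in a few lines: divide the remaining backlog by the pessimistic and optimistic velocities to get a ‘done between sprint X and sprint Y’ range. The numbers here are purely hypothetical:

```java
public class ReleaseForecast {

    // Number of sprints needed to burn down the backlog at a given velocity.
    static int sprintsNeeded(int backlogPoints, double velocityPerSprint) {
        return (int) Math.ceil(backlogPoints / velocityPerSprint);
    }

    public static void main(String[] args) {
        int backlog = 120;          // story points left for the release
        double pessimistic = 12.0;  // measured velocities, not estimates
        double optimistic = 16.0;

        System.out.println("earliest done: sprint " + sprintsNeeded(backlog, optimistic));
        System.out.println("latest done:   sprint " + sprintsNeeded(backlog, pessimistic));
    }
}
```

Combined with your sprint length, those two sprint numbers translate directly into the earliest and latest release dates on the forecast graph.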

Some people do the same type of trick by using ‘ideal days’ to estimate, and determining the ‘focus factor’, or the percentage they actually managed to get done. Mathematically this works OK, but it’s very hard for people to let go of their feeling of ‘when it will be done’, and estimate in ‘real’ ideal days.

Including bug-fix time in your velocity

I’ve noticed that this one can be a bit controversial, but it’s an important factor in the usefulness of your velocity figure.

As a team, you will encounter work that is not part of creating new functionality from your Product Owner’s wishlist. Often, this work presents itself in the form of fixing defects found in your software. Most of the time, those defects exist in the software because in an earlier sprint some new functionality was added.

Now it can be that such a defect is discovered and needs to be fixed right away, because it truly interferes with a customer’s use of your system. Those types of defects are usually not estimated, and certainly should not be taken into account when calculating your velocity for a certain sprint.

Other defects are less critical, and will/should be planned (prioritised by your Product Owner) to be taken into a sprint. Those types of defects sometimes are estimated, but still should not be taken into account when calculating your velocity!

Why not? Well, if you see the goal of your team as delivering new software for the Product Owner, then a defect is simply a way in which some work delivered was not completely done. Usually not done in the form of not sufficiently tested. Fixing such a defect is of course very important. But it is slowing you down from the primary goal of delivering new functionality! Adding the points for fixing the defect to your velocity would make it seem that you are not going any slower (maybe even faster!). So it would give a false impression of the speed at which you’re getting the Product Owner’s work done, and might skew release planning because of that.

Also, it would mean that your improvements in quality, which you’ve been working so hard on, will not be visible in your velocity. Now, is that right?

Avoid not trying

While preparing an introductory workshop on Scrum, we wanted to end our sections of presentation/retrospective with some general tips on the area discussed that would give a team that is starting out with scrum some help on things to try. And things better not to try.

I mean, Inspect and Adapt, yes, but it won’t hurt to avoid some common pitfalls.

Here’s the things we came up with, please let me know (below or on twitter) which ones you don’t agree with, and what important ones we missed!

User Stories

Try: Making stories small enough to be DONE within three days
Smaller also means easier to estimate, and easier to test. One of the most common things I find is Really Big User Stories. That makes everything hard.
Avoid: Working on less important stories before finishing more important ones
(De-)Prioritise ruthlessly before taking things into a sprint. During the sprint, don’t work on lower priority issues before the higher priority ones are done.
Try: Splitting stories vertically
If every story has a user-facing component, (de-)prioritising parts of functionality becomes possible. The earlier the user/customer can see the functionality, the sooner you can get feedback.
Avoid: Splitting stories by component
Delays getting feedback. Encourages work not directly related to functionality.
Try: Making stories specific by defining acceptance criteria for each one
You’ll know better what to do, how to estimate, how to test. And when you’ll be done.
Avoid: Making stories too detailed too early
You’ll add detail to stories in the course of the project, but doing it too early can mean:

  • working on something that’s not going to be used (in a while),
  • doing work that will need re-doing (once the customer sees the initial work, he will change his mind),
  • skewing your estimates: too much detail can inflate estimates beyond any realistic values.

Planning

Try: Estimating your complete release backlog with the full team
The whole team will gain understanding of what is expected. You’ll get better estimates. You can use a release burndown!
Of course, there are things that can help with this such as, ahem, having a clear vision, but you need to start somewhere.
Avoid: Not updating your estimates as you learn more
Estimates are estimates based on current understanding. If understanding doesn’t evolve during work, something is wrong. So estimates should also evolve. As you refine and split user stories, re-estimate them to evolve your planning along with your requirements.
Try: Fixed sprint length (of two weeks)

Fixed, for predictability, letting the team find a rhythm, ensuring problems (waste!) get raised. Two weeks, because one week is initially difficult for a team to do (but if you think you can, please try it!).
Avoid: Telling the team how much to take into sprint
You can’t expect a team to take responsibility for delivering if they don’t have control.
Try: Many (min. 6 – 10) small stories in a sprint
Failure to deliver the last story is much worse if it’s the only one. Or one of two. Smaller also means easier to estimate, and easier to test. It’s much easier to determine progress if you’re talking about ‘done’ stories, instead of percentages. (that was sarcasm, probably.)
Avoid: Stories that span multiple sprints
Just… don’t.
Try: Counting unplanned issues picked up in a sprint
If you get a lot of unplanned issues, you need to take that into account in your sprint planning. Count to get an idea of how much time you need to reserve for this!
Avoid: Picking up all unplanned issues raised during a sprint

The PO should de-prioritise anything that is not a crucial customer problem, and then put it on the backlog to be planned in later sprints.
Try: Reserving a fixed amount of time (buffer) per sprint for unplanned issues
Measure how much time you’re spending on unplanned issues. Reserve that time for them (so your planned velocity goes down), and work on structural fixes so this time reservation can go down in the future (after you measure you don’t need all of it).
Avoid: Extending the buffer for unplanned issues
Because the buffer is there for a reason: to make sure that the rest of your time can be spent on what you’ve taken into the sprint. One way to deal with the buffer (and avoid getting tangled in time percentage calculations) is to have a rotating role in the team that deals with issues that come up. Call him Mr. Wolf, if you like, because it usually isn’t the most coveted role to play. That’s why you rotate…

Scrum Master

Try: Highly visible display of sprint & release burndowns in the team area
Highly visible progress helps keep focus. The whole team can see (and feel responsible for) progress. And mostly, this is a great way to discuss any upcoming new issues with whoever is raising them: “Yes, I can see that this is important to you. Let’s look at what we’re working on right now, and what we need to delay to get that in…”
Avoid: Only updating a computerised issue tracker when completing tasks or stories
A physical task board provides continuous visibility and feedback. Seeing people moving things on a physical task board during the day simply encourages getting things done. Putting a post-it on a wall simply feels more real than putting a new issue into JIRA. There are so many ways in which the visible and physical are wired into our system, that there really is no way to replace that with a computerised tool.
Try: Taking turns during stand-up by passing a token
Sometimes stand-ups can devolve into a rote, going-round, status-reporting form. Break this by passing/throwing a token from one speaker to the next, in a self-chosen order. This keeps things lively, avoids anyone dominating the stand-up, and makes people pay attention (or drop the ball:-).
Avoid: Reporting to anyone but the Team during stand-up
At all times avoid the stand-up becoming a ‘reporting to a project manager’ thing!
Try: Having a retrospective at the end of every sprint
The whole idea of Scrum is to continuously improve. You can’t do that if you don’t discuss how things went.
Avoid: Not executing improvement experiments generated in the retrospectives
Don’t just agree you need to improve. Do Something Already! At the end of the retro, agree which points you’re picking up, and ensure they’re taken care of in the next sprint. Also, with your action, try to indicate what the expected result of the action will be. Deciding whether your experiment was a success will be so much easier. Look into A3 problem solving when dealing with bigger issues. Or even with smaller ones.
Try: Highly visible display of top 3 impediments
And cross them off one by one as soon as they’re done…
Avoid: Stories that span multiple sprints
Yes. A bit obvious, perhaps, but this is happening often enough that I thought it worth mentioning.
Try: Having an impediment backlog for the team and one for management
Yes, impediments that management should fix should be just as visible (maybe even more so!)
Avoid: Having a very long impediment backlog from which no items are ever picked up
Agree what to pick up, don’t pick up too much at once (start with one at a time!), and FINISH them.

Team

Try: Making tasks small (< ½ day)
Seeing people moving things around on a task board multiple times a day encourages getting things done. Smaller tasks are easier to understand, with less chance of differing interpretations. It’s also much easier to hand over tasks and to work together on stories.
Avoid: Not moving any tasks (on the planning board) during the day
Lack of progress should be spotted as soon as possible, and help given.
Try: Agreeing on a definition of done
You should all agree on what ‘Done’ currently means. Once you can stick to that definition, you can start working on improving it.
Avoid: An aspirational definition of done
Did I emphasise ‘currently’ enough? You need to know where you are, and that should give you a starting point…
Try: Writing automated tests for any production issues
This helps in understanding and replicating the issue. And it ensures the issue will not come back. Having the tests also documents understanding of code and functionality that was missed earlier.
Avoid: Programming errors found after the sprint has ended
A User Acceptance Test can find functionality the user didn’t expect (understanding). A UAT should never find expected functionality that does not work (quality).
Try: Always doing a root cause analysis for any unplanned work
Production problems are not normal! Find out why it happened, and see how you can change your process to avoid that type of problem in the future. Note: that means agreeing ‘Let’s not make this mistake in the future’ is not sufficient…
Avoid: Not doing a structural fix after root cause analysis
The change should be structural, in your process. For instance:
• ‘It was a simple programming error’ should result in changing your Definition of Done to require higher code coverage for new code.
• ‘There was a mistake during the deployment’ should result in ‘Let’s automate deployment’.
• ‘We did two incompatible changes’ should result in ways to increase communication in the team, and better automated regression testing.

    My First Coding Dojo

    Last week Wednesday, I organised my first Code Dojo! For those that are not familiar with the concept, a Code Dojo is when programmers get together to exercise their craft by solving a problem together. The problem is called a ‘Kata’, analogous to the way these concepts are used in the Karate world.

    As a problem, I had selected the ‘Gilded Rose’ Kata, for which I have created a Java version a while back. I figured that since this is a refactoring Kata, the type of problem would be all too familiar to my colleagues

    While preparing for the Dojo, I asked for advice on twitter. Luckily, Mark Levison reacted, and had some good advice (“Keep It Simple!”). What’s more, he had documented his own first experiences (first and second dojo) very thoroughly! We had enough pizza.

    Based on that advice, I tried to simplify the kata a little, giving us a headstart by starting off with some already implemented acceptance tests. (See my FirstTry branch to look at the integration tests. I didn’t do that try test-driven (tsk!)). The idea was that this would save us some time, and let us jump right in to TDD-ing the refactoring.

    I also figured that it would be wise to do a little introductory presentation on what a Coding Dojo is, and what is important for Pair Programming and TDD. I found a nicely made presentation on slideshare by Serge Rehem. Unfortunately, it was in Portuguese, which I don’t speak. A little Google Translate, and imagination, helped though, and I created a translated version.

    On the evening itself, after Ciarán got everyone warmed up and enthusiastic with a run through Boris Gloger’s Ball-Point Game, we started the dojo with the 5 minute (well, at my speed, it was about 15-20 minutes) introduction using those sheets, going into the specifics of working with TDD. Then we briefly discussed the kata, and got going.

    At this point I was very happy with the ‘Keep It Simple’ advice, since we obviously needed time to really get started. We were working with 5 minute turns at the keyboard, but since we were still getting the hang of this, those sometimes turned out to be a little longer. Unfortunately, this meant that not everyone got a turn at driving, but the whole group did join in the discussion.

    We also got some discussion going about the why and how of working Test Driven, which was a lot of fun, and precisely the point of the exercise, of course.

    So what are we going to be doing differently next time?

    • We’re going to plan more time. We had about 90 minutes, including the introductory presentation and discussion of the Kata, which didn’t leave us enough time to get through the problem.
    • We’ll need to pick a smaller Kata. The refactoring Kata is fun, but it is not small enough, especially when starting out with Coding Dojos.
    • I will not write tests in advance! This helped keep the assignment small, but not small enough, and it allowed us to go too quickly to coding, without really understanding the assignment. This was actually one of the main complaints: the goal of the exercise wasn’t clear enough!
    • We’ll ask everyone to try the Kata in advance, so that we can focus on the process of writing code together, instead of on understanding the problem
    • We’ll time-box the pair-rotating more carefully, so everyone gets a turn

    Overall, though, it was still a lot of fun, and most people really liked the idea of doing real hands-on learning during our regular get-togethers within Qualogy.

    Code Cleaning: How tests drive code improvements (part 1)

    In my last post I discussed the refactoring of a particular piece of code. Incrementally changing the code had resulted in some clear improvements in its complexity, but the end result still left me with an unsatisfied feeling: I had not been test-driving my changes, and that was noticeable in the resulting code!

    So, as promised, here we are to re-examine the code as we have it, and see what happens when we start testing it more thoroughly. In my feeble defence, I’d like to mention again why I delayed testing: I really didn’t have a good feel for the intended functionality, and because of that decided to test later, when I hoped I would have a better idea of what the code was supposed to do. That moment is now.

    Again a fairly long post, so it’s hidden behind the ‘read more’ link, sorry!

    Continue reading

    Code Cleaning: A Refactoring Example In 50 Easy Steps

    One of the things I find myself doing at work is looking at other people’s code. This is not unusual, of course, as every programmer does that all the time. Even if the ‘other people’ is himself, last week. As all you programmers know, rather often ‘other people’s code’ is not very pretty. Partly, this can be explained by the fact that, as every programmer knows, no one is quite as good at programming as himself… But very often, way too often, the code really is not all that good.

    This can be caused by many things. Sometimes the programmers are not very experienced. Sometimes the pressure to release new features is such that programmers feel forced to cut quality. Sometimes the programmers found the code in that state, and simply didn’t know where to start to improve things. Some programmers may not even have read Clean Code, Refactoring, or The Pragmatic Programmer! And maybe no one ever told them they should.

    Recently I was asked to look at a Java codebase, to see if it would be possible for our company to take it into a support contract, or what would be needed to get it to that state. This codebase had a number of problems: a lack of tests, lots of code duplication, and a very uneven distribution of complexity (lots of ‘struct’ classes, with the logic that should be in them spread out, and duplicated, over the rest). There was plenty wrong, and Sonar quickly showed most of it.

    Sonar status

    When discussing the issues with this particular code base, I noticed that the developers already knew quite a few of the things that were wrong. They did not have a clear idea of how to go from there towards a good state, though. To illustrate how one might approach this, I spent a day making an example out of one of the high complexity classes (cyclomatic complexity of 98).

    Larger examples of refactoring are fairly rare out there, so I figured I’d share this. Of course, package and class names (and some constants/variables) have been altered to protect the innocent.

    I’d like to emphasize that none of this is very special. I’m not a wizard at doing this, by any standard. I don’t even code full time nowadays. That’s irrelevant: the point here is precisely that by taking a series of very simple and straightforward steps, you can improve your code tremendously. Anyone can do this! Everyone should…
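    To give a flavour of what such simple steps look like, here is a small before/after sketch of two of the most common ones: replacing nested conditionals with guard clauses, and extracting a named method per business rule. The DiscountExample class and its discount rules are invented for illustration; they are not taken from the codebase in the post.

```java
public class DiscountExample {
    // Before: nested conditionals push the cyclomatic complexity up
    // and bury each business rule inside the branching.
    static double discountBefore(int total, boolean loyal) {
        double discount = 0.0;
        if (total > 0) {
            if (total > 100) {
                discount = 0.10;
            } else {
                discount = 0.05;
            }
            if (loyal) {
                discount = discount + 0.05;
            }
        }
        return discount;
    }

    // After: a guard clause removes a nesting level, and the loyalty
    // rule gets its own named method.
    static double discountAfter(int total, boolean loyal) {
        if (total <= 0) return 0.0;                 // guard clause
        double base = total > 100 ? 0.10 : 0.05;
        return loyal ? base + loyaltyBonus() : base;
    }

    static double loyaltyBonus() {                  // extracted, named step
        return 0.05;
    }

    public static void main(String[] args) {
        // The two versions must agree: a refactoring preserves behaviour.
        System.out.println(discountBefore(150, true) == discountAfter(150, true));
    }
}
```

    Each individual step is trivial, which is exactly why it is safe: run the tests after every one, and the behaviour is provably unchanged while the complexity drops.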

    I don’t usually shield off part of my posts under a ‘read-more’ link, but this post had become HUGE, and I don’t want to harm any unsuspecting RSS readers out there. Please, do read the whole thing. And: Let me (and my reader and colleagues) know how this can be done better!

    Continue reading