1 The Process
This chapter is very much in progress.
The rest of the book is in a much more advanced state, but expect significant changes in this chapter.
In this short book, I show how to use story mapping, example mapping and scenarios to bring structure and clarity to the way you discover and refine new functionality. This process is sometimes called ‘Discovery’, since it involves discovering the details of what you are hoping to build. It is also called ‘Refinement’, as it refines the rough, big idea of something you want to do into something more fine-grained, detailed and polished. You can also call it ‘requirements gathering’, if you like, but as you’ll find out, the details of what we end up building can be quite changeable throughout the process, so the term ‘requirement’ has always been something of a misnomer.
1.1 Discovery and Refinement
This process of Discovery and Refinement is very much the core domain of the Product Owner. Or the product manager, product lead, or even a project manager or business analyst. Engineers are also closely involved, and so are testers, UX specialists, and… Well. This process of Discovery and Refinement is actually a highly collaborative one. Next to the roles and skills already mentioned, we often need end-users, subject matter experts, and other stakeholders involved. We are, after all, finding out how we can solve a problem for our users and stakeholders, so having them involved is probably a good idea.
But who is involved, when are they needed, and what does that mean for how you organise this process? With the requisite small print of ‘every situation is different and you need to adapt any process to your own context’, I would like to start with a high-level overview of a common way to organise the process of discovery, and describe a generally accepted way for everyone to be involved. This is particularly important because, with the large number of people that need to be involved, there is often some reluctance to commit to that involvement. That can result in a situation where the necessary people are not made available, which can and will slow the process down and increase the risk of you building the wrong thing. And that would be a shame.
1.2 A process view
If we take a look at the process of delivering software, we can arrive at a picture that looks a little like Figure - “A high level picture of the agile development process”. The cycles are there to emphasize that these processes overlap, and are iterative: discovery of a feature doesn’t have to be complete before development of parts of it can start. We can see the full process of discovery, and how that happens before the sprint in which we implement the functionality.
For completeness, a full BDD-style, test-first development process is depicted, but we won’t be going into all the details of that. The main flow, seen enlarged in Figure - “Zoomed in on the discovery part of the process”, is that we start discovery by decomposing a new idea for a feature or product into a story map, then take parts of that map, in the form of stories, refine those using example mapping, and write the corresponding scenarios, before using those scenarios as the basis for the tests and implementation of the functionality.
1.3 Story Mapping
I’ll go into the details of each of the practices of story mapping, example mapping and formulating scenarios in the next three chapters. Additionally, I’ll go into more detail about using the story map to give you the building blocks of your planning. But before we go into those details, it is useful to look at a few of the questions that tend to come up when we talk about this process of discovery and refinement: what happens when, who is involved, and what exactly do we have in the end?
Story mapping is a good starting point for the process of Discovery, which is all about generating ideas for what you are going to build. Not all of those ideas will eventually be built, but that is fine.
There’s always a previous step, of course. I’m not going to go into where ideas for new products and features come from. That’s between a product owner and their market. When we have an established product, a domain known to all the people involved, then building a new feature can be something we go into without much preparation. Most of the time, though, a little more work needs to be done before we actually come together to create a story map. What work, and who does it?
This starts with the product owner, of course. An idea is never just an idea. You will have things in mind for the idea. How much detail you have in mind will vary, but you might have in hand user requests, data on how the existing application is used, views on the market and its opportunities, measurements of (or ‘simple’ insight into) processes and bottlenecks, etc. In the end, that is all background information that has triggered the specific idea for the new feature or product. As a minimum preparation, I often ask the product owner to come to the story mapping session with a few stories of how an end-user might use the new feature to make their life better. That is very near the original definition of a user story, I know, but those are just extremely powerful.
Example Use Cases
False Alarm: When Ron’s car alarm goes off, even though he’s two blocks away at dinner, he gets a notification on his phone, can view the car’s surroundings through built-in cameras, and, reassured, can switch off the alarm through his phone.
Theft: When Ron’s car alarm goes off, even though he’s two blocks away at dinner, he gets a notification on his phone, can view the car’s surroundings through the built-in cameras, and, seeing his car is being broken into, can call the police.
Next to the preparations around the goal of the new feature, there can be other preparation needed. The engineers can investigate the system to come prepared with insights into the complexity and current functioning of the application; they could also do research on possible services and libraries to use. UX could do some user research, and generate ideas for possible ways of designing user interaction. The point is that there’s no reason to go into story mapping blindly, but when the feature is sufficiently clear and presents no significant new challenges, there’s also no reason to always do such research. Use your insight into the situation to judge.
I’ll discuss the structure and process of the story map in Chapter 2 - “Story Mapping”. The outcome of a story mapping session is the map itself, of course, but the important elements for the next steps of the process are to have some slices to use for planning, and a set of stories to take into refinement using example mapping.
Another question that will regularly come up when discussing these processes is when we do these things. In the case of story mapping, we normally do this a few sprints ahead of when we actually expect to start work on the feature. That is enough time to allow for refinement of the first slice before work starts, but not so long that we leave too much time between our discussions of the feature and the implementation. There are, however, situations where things do not work out that way. This is usually due to organisations that use quarterly planning cycles and expect a significant amount of detail for that planning. I’ll discuss how to deal with that situation with the minimum damage to our goals of short-cycled delivery in Chapter 5 - “Planning”.
1.4 Example Mapping
The next step of refinement is example mapping. In example mapping you take a story from the story map and dive into the details of the business logic that is the core of that story. For the same reasons as with story mapping, it is quite normal to go into example mapping with some measure of preparation. And again, it is usually the product owner who brings a starting point for the session in the form of some business rules, or acceptance criteria. And as is true throughout discovery and refinement, the input of other viewpoints during the process is important to arrive at a more complete picture of the story.
Example mapping is usually done one or two sprints (or an equivalent time) ahead of the development work. If you work in small, sprint-sized slices as recommended in Chapter 5 - “Planning”, you’ll refine all stories in a slice in time to take them into the sprint in which you develop the slice. Example mapping is done with the ‘Three Amigos’: all the different points of view we can include, covering at least product, engineering and testing. It is usually done with the whole team, though if your team is very experienced it could be representatives, as long as there’s a feedback moment with the whole team.
At the end of example mapping, you will have clear examples of each business rule, and the process of generating those will also trigger the discovery of additional business rules, as well as additional stories that may need to be included in the slice. The structure of the business rules and examples can also give clues that the story can be split into smaller stories, as you’ll read in Chapter 3 - “Example Mapping”.
1.5 Formulation
The examples you find in example mapping can then be written out in a formal language, formulating them in the Gherkin language. This process helps make the examples specific enough that any inconsistencies become apparent. Formulation is quite involved and requires some level of expertise; it is described in detail in Chapter 4 - “Formulating Scenarios”. Though this is sometimes done cooperatively, with product, engineering and testing working together, more often and preferably it is done by the engineering team (which includes the testers). There’s a good reason for that: by having the team formulate the scenarios and present them back to the product owner, you create an extra feedback cycle to check that the team has understood the business rules correctly, and that the language used by the team aligns with the language used by the business.
This means that the team will formulate the scenarios, again in time for the sprint in which the story will be implemented, so this happens in the same timeframe as the example mapping of those stories. The scenarios need to be approved by the product owner before the story can be considered ready.
After you formulate the scenarios, they are part of the story’s definition, and we are ready to go into delivery. That does not mean that the scenarios are the only thing needed to be ready to develop the story. There can be other elements, such as UX design, user flows and non-functional requirements. But the scenarios capture the business rules, and those are the core of the functionality.
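To make this concrete, here is a minimal sketch of what a formulated scenario for the insurance example might look like. The feature name, business rule and step wording are illustrative assumptions, not the actual scenarios from the InsAny story map:

Feature: Apply for a car insurance package

  # Illustrative sketch only: the rule and wording are assumed, not taken from the real backlog
  Rule: An application needs a primary driver before an offer can be made

    Scenario: Customer confirms an application with a primary driver
      Given the customer has selected a car insurance package
      And they have provided the details of the primary driver
      When they confirm the application
      Then they receive an offer for the selected package

    Scenario: Customer confirms an application without a primary driver
      Given the customer has selected a car insurance package
      And they have not provided the details of the primary driver
      When they confirm the application
      Then they are asked to provide the primary driver’s details first

Note how the scenarios stay in the business language of the story map; that is exactly what makes presenting them back to the product owner such a useful feedback instrument.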
1.6 Delivery
With the story ready, the team can take it into delivery and start development. The advantage of having well-defined scenarios is that you have an effective start of a test-first approach, and there are many existing tools that can take these formally written scenarios and use them as tests, even creating what is known as living documentation: documentation of the functionality that is executable and therefore cannot become out of sync with the implementation. This is a further step, and not necessary to reap most of the benefits of this structured refinement process, but if you can take it, it will provide its own benefits.
1.7 Planning
This book ends with a chapter on how to use the story map and its slices to create a plan, and to update it based on complications and changing circumstances.
1.8 Example
Throughout this book, I use an example from an insurance app, from the fictional company ‘InsAny’. This company has been trying to disrupt the insurance market by making it easier to apply for house and car insurance, and smoothing interactions between customers and insurance companies, like ‘BigInsured’, who are InsAny’s main partner.
The example in this book uses a new feature for this insurance app, where we create a way for customers to apply for a car insurance package. In the planning chapter, we also add the feature to apply for home insurance, so I can show how to plan a new feature like this from the start. I’ve tried to keep these examples simple enough to understand without knowing much about the insurance world, mostly focusing on their more familiar parts for the detailed examples.
2 Story Mapping
This chapter introduces the concept of the story map: a way to visually show the user flow for a specific feature, and then detail all the different variations of that flow as individual user stories in a two-dimensional view.
Story mapping is a tool to discover user stories necessary for a new feature. Here, we discuss the process of creating such a map and take you through the practice from the beginning. Not only will a story map give a much clearer view of the backlog of work for a specific feature, it also helps create structure that allows for easy planning and prioritization.
I will pay plenty of attention to different ways of generating different versions of each step in the flow. Grouping those variants in logical ways into slices will give you a way to deliver the functionality iteratively, and to plan that delivery.
I’ll also go through a few different example flows to show how to deal with types of features for which it might not be obvious how to think of variants.
- A story map is a way to look at a feature and divide it based on the steps of the user journey, and different variations of that journey.
- By creating the story map you get a more detailed view of an existing feature, allowing you to make decisions on what is important to test.
- The different variations of a user journey can be based on:
- Different outcomes for the user
- Different inputs triggering different flows
- Different business rules bringing different complexity
- Different types of users
- The most important aspect of story mapping is getting all the right people involved.
- Different variants (slices) of a user journey can be used to make decisions on scope and priority.
2.1 What is a story map?
Story mapping is the brain-child of Jeff Patton (Patton, User Story Mapping: Discover the Whole Story, Build the Right Product). He came up with the concept as a way to give more structure and cohesion to the idea of a backlog. The traditional agile backlog, as it’s been popularized as part of scrum, is a list of user stories that is strictly ordered by priority. It is recommended that the items at the top of the backlog be high in detail but small in scope, so they can be implemented by a development team without any more preparation being needed. Items further down the backlog can be less detailed and bigger in scope, and should be broken up and detailed as the time when they might be implemented comes nearer.
2.1.1 The structure of a story map
The problem Patton described when working with a backlog is that the cohesion between stories is not visible. If, as regularly happens, stories belonging to different features are worked on interleaved, then it can become hard to see when one of those features should be released to customers. Even when the stories being worked on are all part of the same feature, the good practice of splitting the feature up into its smallest components can make it difficult for both product and development to keep the bigger picture in mind.
I’ve defined a feature as functionality that helps the user achieve a goal. The user story map is a collaborative way to start with a feature and split that up into smaller stories as well as iterative releases.
The backbone of tasks represents the main user journey
The basis of a story map is shown in the figure below, using the example of what the car insurance feature of InsAny might have looked like in the beginning. The backbone of the story map is the user journey flow: the separate steps, or user tasks, that the user goes through to achieve their goal, with each step in the flow as a separate column. The level of detail you want for those tasks can vary for different situations, but as you can see in the figure they are formulated around what the user needs to achieve, as opposed to detailed requirements that you might have for these steps. You do have ‘Find Insurance Options’ but not ‘specify type of insurance’ or ‘specify cost range’.
The detail is in user stories
Those more detailed elements of the functionality do show up in the story map, but as different variations of the task, captured as separate stories in the same column. As we’ll discuss ahead, when you are building your story map, it is very natural to have to sometimes move something you’ve put on the board as a task to the level of a story, or vice versa.
Activities can be used to create structure
In more complex features, there are often natural clusters of tasks that seem to belong together, but can’t be seen as variations of one step in the journey’s flow. For instance, applying for insurance options in our app can have steps that allow the user to search, filter, view, compare and select an insurance package, while also having steps that request the package, provide personal details, receive an offer (or get declined) and accept the offer. You might treat these as separate features, but the user’s journey through the application would go through all of them, and if you see them as one journey, the first group could form an activity “find insurance package”, while the second might be “apply for insurance”.
Personas can mark different types of users for activities or tasks
In fact, as you’ve seen, parts of the ‘apply for insurance’ journey go through a back-office process that judges, automatically or manually, whether an applicant can be accepted for an insurance package. When you first add a feature, it would be quite natural to also include the back-office functionality that will be needed. By placing that functionality in a separate activity, and marking it with the back-office support user as its persona, you can keep the different interests of the different types of users clear and visible.
2.1.2 Slices for Iteration
When you examine the example story map, you might notice that the stories capturing different variations of a task are sometimes similar, and can be clustered together. If you are developing new features, that can mean that such stories can actually be implemented together. The way we order the stories under a task is meant to help split the larger feature into slices of the map that can be logically developed or released together. That covers iterations of a task, as well as of the feature as a whole, grouping stories across different tasks together to have a coherent whole to deliver.
2.2 Creating a story map
To get started with our story map, we will use a feature from the example insurance application introduced in Chapter 1 - “The Process”. For this application we’ll look at the feature of applying for insurance.
If you are familiar with your domain, then as soon as you start thinking about a bigger new feature, such as the ‘Apply for Insurance’ feature, you’ll think of ways it could be broken down into fundamentally different flows, for instance: one where the customer gets approved, one where they don’t, and another more concerned with the workflow in the back-end system. When you are just starting out, it can be difficult to see exactly where one variant will stop and another will start. Often, it is easier to just start at the highest level, focus on the user’s goals, and simply let the process of story mapping generate whatever variations might be needed. So that is exactly what I am going to do in this instance.
Start with the goal in mind.
I will take you through the ‘apply for insurance’ feature, drawing in all the parts of it initially, and show how the story mapping process provides a natural way to decompose the feature into different parts that can be separately built, tested, and delivered.
After going through the process of building a story map and splitting the feature into separate variants, I will discuss the best way to organize those sessions, with the right people and tools. The chapter ends with a look at prioritizing, where you decide which slices to take into a deeper dive with the example mapping practice that is explained in Chapter 3 - “Example Mapping”.
2.2.1 Using a feature to find the story map’s backbone
The first step of creating the story map is to build its backbone: the tasks that comprise the user’s journey through the application. Whether you’re dealing with a feature that is exposed to end-users and has a user interface, or some back-end system supporting a company process, the way we get to the initial user tasks is the same: just go through the steps of the process, write them on post-its and put them on the board. As always, try to get multiple people, preferably with different points of view, to help you get as well-rounded a view of your flow as you can. At the same time, do not spend too much time on this or try to make it perfect: it will change.
For readability, the tasks are listed here:
- Select insurance package
- Confirm choice
- Provide car details
- Provide preliminary car incident data
- Provide personal details
- Provide co-driver details
- Provide payment details
- Select package variants
- View package legal documents
- Confirm application
- Receive offer
- Inspect data differences (incident data, co-driver data)
- Confirm application
- View active insurance package
A flow like this may be complete from the point of view of the end-user, but it is quite normal that these types of flows also have a ‘backoffice’ element to them, which happens outside of the view of the end-user, but has important steps that will need to be supported when you implement the new feature.
While working on that first version of the backbone, you will find that, as you go over it with different people, new user tasks will be discovered. Sometimes because there’s a whole different persona, or sometimes, as shown below, because depending on the outcome of a previous step there can be multiple directions the flow can go in.
Generating the first view of the backbone of the story map doesn’t have to take long, but it will require some iteration to get to a view that is useful for our purposes. As you go through the process of creating your story map, you will find you will have to move the post-its you put on the map up and down repeatedly, moving from activity to task and back, and from task to story as well.
Don’t worry about getting it perfect. It will change.
Let’s take a further look at dealing with different levels of detail in the story map, and how you can decide whether something should be an activity, a task or a story.
2.2.2 Discovering structure
Once you have a first draft of the backbone of a story map, as shown in Figure - “Adding the acceptance flow”, you can see if there is any way to discover more structure in it.
The first distinction to make is whether we are looking at a task or an activity. It’s important not to get too attached to your decisions about this. You will have to revisit them and should feel free to do so. In fact, you’ll notice that it is much easier to identify possible activities when you already have most of the user journey in front of you. There are some useful rules of thumb to help get to a balanced map, though.
The easiest way to start is to simply make everything a task.
As you saw in Figure - “Adding the acceptance flow”, this gives us a very long, or wide, flow. There’s nothing wrong with that, but if you end up with a long list like that, it might make it difficult to keep sight of what is in there.
Each task is phrased actively, with a verb. In other words: someone is doing something! That someone is normally the user, though we’ve seen that there can be different types of users. To help decide on the level of detail for your tasks, you can go back to the concept of a goal for the task. I said earlier that focusing on the goal the user is trying to achieve for the whole of the flow makes it easier to identify when something can be seen as a feature. When considering whether a task really is a task, keep the same idea in mind and look at finer-grained goals to achieve. The goal for a full user journey can be seen as achieving an outcome for the user. In this case, that would be ‘getting car insurance’.
Each task has a goal.
Individual tasks will be parts of the flow where the user performs a task that is whole and complete. What that means is that the task is a step you would not normally do only part-way: you would not stop and continue it later after doing something else. For instance, let’s take the task ‘Provide personal details’ that we have in the current flow. That task name doesn’t give much to go on about which details need to be provided, but let’s assume that we need to provide things like the name, date of birth, address, phone number, and gender. An application might guide the user through different screens, but the information together serves a single goal. We could have created tasks for each of the pieces of information and, certainly, each could have its own business rules attached that we might want to test. But in this context a ‘provide phone-number’ task would not be suitable in the backbone of this story map. It would be too detailed. A user would not give their name and then come back some other time to give their phone number.
In other words, the level of detail at which you define tasks is fluid. Depending on the complexity of the feature, we might have smaller or larger tasks as part of the flow. More detail might be given when you decompose tasks into different variations, for which we will use the term ‘User Stories’. At the same time, some tasks in our flow may seem to belong together. They help the user achieve part of the goal that the user journey is aimed at. Since we want to capture that sort of structure in a story map, we make room for showing that structure as ‘activities’, which are added as headers for a number of related tasks in the following figure:
As you can see, activities simply show a slightly higher level of abstraction for the goals the user is trying to achieve, grouping a bunch of related tasks together. There is no reason we could not have started at that level of abstraction and worked our way down instead, which would have looked like the next figure. In practice, though, it is usually much easier to see what that structure should be when you start by listing the more concrete tasks first.
The two levels of detail we have in activities and tasks together give us the backbone of the story map: its base structure. When we add stories we will have three levels of detail. The primary difference between those levels is the scope of the goal the user is trying to achieve: the whole journey decomposes into activities, which decompose into separate tasks. Stories then provide different variations of those tasks. Since these things are relative, not absolute, you will find the right level by working on all those levels during the creation of the story map.
All that is just a very long-winded way of saying: “Just get on with it!” Don’t wait until you know the perfect place or the perfect phrasing for a task but just write something down, put it on the board and keep moving things around until you find the right balance.
Each task, activity and story represents an action, and each action works towards reaching a goal. Let’s see how user stories work in this context.
Stories or tasks?
In the same way that tasks are similar to activities, just focused on a more limited goal, stories in this context are the same as tasks with a yet more limited, and more detailed, goal. For now, I will not go into more detail about what a good user story is and how to create one. That is a very important topic, but one that is not very relevant for the purposes of this book. In the context of a story map, stories are how we decompose a task, with the purpose of discovering different variations of a user journey. One of the common first steps to start discovering those variations is realizing that some of the tasks you have in a flow are not actually different steps in a workflow, even if they are sequential steps the user may go through in the user interface. Those tasks can then be grouped and used as a starting point for the next level of breaking down the story map’s backbone into smaller parts.
As an example, take the tasks for the ‘Provide data’ activity. At first glance, those are separate steps through the process of giving all the possible data necessary to request the insurance.
Not all possible data is necessary to request the insurance package, though, and parts of that data may only be relevant for very specific cases. For instance, the ‘preliminary car incident data’, giving information about what sort of accidents and damage has occurred with the vehicle, is relevant because based on that data, different variants of the insurance package will be offered. But even when it is not provided, the car incidents will always be retrieved during the flow in the back-office to check that the package selected can be offered. This step in the flow is part of providing the car details, and it can be skipped entirely. That means that task is more of a variant of the flow than a core part of it, so we can move it down to the story level, as shown in the next figure:
The same can be done for the co-driver details, without which the possible package variations and prices might be different. Payment details, though, will always be an important step in the flow.
You will have noticed that the data provided in steps that have moved down can actually influence the behaviour of a later step in the flow. That is very common when capturing behaviour in a story map. The variation of behaviour later in the flow is only relevant when that specific information has been supplied, and that means you might add those different stories, in different parts of the flow, to the same slice, and deliver them together as a new iteration on the flow.
Fidelity: splitting off enhancements
Of course, just moving tasks down is not the only way to end up with a story. You can also decompose a task, in a variety of ways, to generate stories. When creating new functionality we are mostly focused on splitting a task into different ways of achieving the goal, with different levels of fidelity: car details could be entered by hand, but a version of collecting car details with higher fidelity would have the user only provide the license plate, with the information being retrieved from a back-end service. The car’s purchase date might be entered as numbers, or selected from a calendar widget. This type of iterative delivery is an important part of successfully using agile processes, with usability improving in subsequent iterations.
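Jumping ahead to the scenario form we’ll use in Chapter 4 - “Formulating Scenarios”, here is a sketch of what two such fidelity variants of the ‘Provide car details’ task could eventually look like; the wording is an illustrative assumption, not the book’s actual stories:

Scenario: Enter car details by hand (first, lower-fidelity slice)
  Given the customer is applying for car insurance
  When they enter the make, model and build year of their car manually
  Then the car details are added to the application

Scenario: Retrieve car details from the license plate (later, higher-fidelity slice)
  Given the customer is applying for car insurance
  When they enter only their license plate number
  Then the make, model and build year are retrieved from a back-end service
  And the car details are added to the application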
With these different variations of individual tasks on the story map, it is now time to look at how those stories can be grouped together as slices of the map. We can do that on different grounds, and the next section will discuss the options for that.
2.2.3 Variations on a flow
All of the ways we split up a feature on the story map are about being able to deliver in small increments of value, both as individual stories and as combinations of stories that we may (or may not!) want to release as a whole. Each variation of a flow, also called a slice in a story map, is a different flow that you identify and name, and that you can document, test and deliver separately.
When building the story map, you are interested in the flow, but not so much in the details of what happens within every step, unless it impacts the journey. That means, for instance, that you do not create separate stories about the validation of a bank-account number that is entered as part of the ‘Provide payment details’ task, even though it might have ten or twenty different complex validations that are necessary to complete it. That doesn’t mean we will ignore that type of detail; in fact, we will very likely use the identification of those validations to split the story further when we refine it in detail. In Chapter 3 - “Example Mapping”, I will describe in depth how to capture those types of business rules and use them to split the story if necessary. You can add the newly split stories to the story map as you create them. In Chapter 4 - “Formulating Scenarios” I will also explain how to directly turn those into tests that can be automated.
Limit the amount of detail for stories in your initial story map.
The different kinds of slices
When I create a new story map I usually start with the backbone, in the way that I’ve described above. At the end of that you tend to have a backbone, maybe some activities, and very likely some stories that were moved down from the user tasks in the backbone while making it. The next step is usually one of free brainstorming, where I let the team create as many stories as they can and just put them on the board under the task they belong to.
Then, I ask the team, or more usually the product owner, to create a few slices based on the different moments of delivery. That can be different versions of the feature they already had in mind to deliver in stages to the end-user, or even just milestones in a project they have in mind. I call those slices ‘release slices’, or ‘slices of functionality’, and talk more about them in Chapter 5 - “Planning” - “Slices of functionality”. I see a lot of organisations where this is as far as they go, but I like to put a little pressure on to make sure we really try to find smaller slices.
When we have our release slices, I tend to ask the team to judge for each slice whether it is something that could be done in one sprint. Or, if they do not use sprints, in one week, or two weeks, or whatever a good release cadence is for them. If they do not think it fits (and it rarely does, on the first try!) then we split the slice up into smaller parts that do. Very often, when that needs to be done, some of the stories also need to be split to be able to make a good and useful slice. This way, we get to a state of the story map where each slice is something that can be released in one sprint. Since the sprint is usually the cadence in which we plan, that means that each slice is exactly the size of the building blocks we need for planning. I call those smaller slices ‘planning slices’, or ‘slices of fidelity’, and I go into more detail in Chapter 5 - “Planning” - “Slices of fidelity”.
For all of these types of slices, we still treat them as real increments of the feature, and we give each a descriptive name, so that we can easily talk about the scope of the functionality we are creating, prioritising and releasing. But let’s take a look at how we can end up with useful slices.
Different ways of finding new slices
If we don’t go into the full detail for each step up front, how do we find different variations of each journey? I now want to go through the different bases on which to identify slices on our map:
- Outcomes for the user
- Extending the flow based on previous input
- Simple versus complex situations
- The user or persona
Splitting the flow based on outcomes for the user
One way to identify a variation of a flow is to look at the different ways a flow can end. In our example, there are variations where the application for an insurance package is successful, and where it is not. Unsuccessful for the user does not mean that this constitutes an error or even an undesirable result: it is an important function of the system to avoid insuring customers that are too risky. The way an unsuccessful result is handled is different, though, and involves further interactions with the user that are specific to that case. The different state, or outcome, of part of the flow splits it, and that is a good way to identify different variations.
Of course, the reason for the rejection could be any of many different things. Most would only result in a different message to the user, perhaps, so you would not see them as different variants. Others might trigger entirely different partial flows, perhaps requesting specific documentation or a live video-call to confirm identity, and those you could mark as separate variations that need specific attention to refine and build.
Splitting the flow based on previous input
Another way to identify variants of a flow is to isolate a partial flow based on the input the user has given in earlier steps of the flow. That way you can identify a flow through the feature that touches a significant part of the journey as quickly and simply as possible. A simple version of the ‘Apply for insurance’ flow would be one where the user indicates there is no second driver. If they do not select a second driver, then there is no need to include the steps and stories that deal with entering the second driver’s information.
Splitting the flow based on a simple path through complex logic
Simplification is not just about skipping potential steps in a flow, though. It is also about selecting the situation, or data, that ensures the most straightforward path through any business rules. Primary driver information that always guarantees acceptance, or requires the least additional information, would not significantly alter the steps the user takes through the flow, but it would mean a lot of the logic implementing those business rules is not necessary yet. For instance, simply setting the driver’s information as having had a driver’s license for more than 5 years, and not being older than 60, will prevent the need for code in the back-end that triggers retrieving information from a third-party system about possible incidents the driver might have been involved in.
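Written out ahead as a scenario, something we would normally only do later during formulation, the simple path for such a slice might look like the following sketch; the acceptance rule is the assumed one from the paragraph above:

Scenario: Primary driver who is accepted without an incident check
  Given the primary driver has held a driver’s license for more than 5 years
  And the primary driver is not older than 60
  When the insurance application is submitted
  Then the application is accepted
  And no incident information is retrieved from the third-party system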
Splitting the flow based on the user or persona
Some parts of a feature can be meant for a different user, or type of user. The back-office user has their own goals, and it’s easy to simply split those off into a separate journey. There may also be different types of users for the regular application, though. In UX terms, we often talk about having different personas that might use the application in different ways. If we have a persona that is interested in insuring multiple cars, for instance, that could result in much more complex flows and logic. You could view those variations as separate, even if some of the steps through the ‘Apply for insurance’ flow are the same for such a user. Simply identifying that we do not have to accommodate the more complex data and logic for such users yet, and putting that aside as a specific variation (slice), can help us focus on delivering for a larger set of users first, while also making visible that there is additional complexity in supporting those niche users to be focused on later.
Slices for different types of systems
The features I’ve gone through so far have been based on user interaction with the system. You’ve seen that a feature can be spread over different types of users, and completely different user interfaces. There are, however, types of applications that do not map as naturally onto a user interface flow. Some systems are batch-processing large amounts of data. Some are embedded in hardware. Some features support ‘utility’ flows such as install and update. And we can have machine learning features that have their own particularities.
So, having discussed the version with clear user interaction earlier, let’s look at how to map the functionality for features of different types:
- Batch processes
- Non-feature flows
Slices for batch processing systems
One area where it can be hard to imagine the story map is batch processing of data. After all, the only user flow that comes to mind is that of ‘file is picked up’, ‘file is being processed’, ‘result is stored’. That is an oversimplification of what is happening, of course.
When you are processing data it can be simple enough that all that is happening is that the data is being loaded into some other form of storage, and no additional processing is taking place. In those cases, though, it is unlikely that you’d need to spend much time at all on that feature: simple, straight-through processing is not very complex to define, and will be easy to test.
As soon as there’s any sort of processing being done, though, there might also be different variants of the processing happening. An example is financial transactions for a bank that are loaded in a batch. Depending on the data for the transaction, there might be considerable differences in how they are processed. A transaction between two accounts at the same bank, in the same currency, is the simplest case. A transaction to another bank, still in the same currency, is more complicated and requires the use of a dependent system. A transaction that changes currency might involve a currency exchange system. And how about savings accounts? Are those handled differently?
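Sketched as scenarios, purely to illustrate that these variants can be named and sliced just like steps in a user interface flow (the system names and wording are assumptions for illustration):

Scenario: Transaction between two accounts at the same bank in the same currency
  Given a batch file containing a transaction between two accounts at the same bank
  And both accounts use the same currency
  When the batch is processed
  Then the transaction is settled internally

Scenario: Transaction to an account at another bank in the same currency
  Given a batch file containing a transaction to an account at another bank
  And both accounts use the same currency
  When the batch is processed
  Then the transaction is forwarded to the dependent interbank payment system

Scenario: Transaction that requires a currency conversion
  Given a batch file containing a transaction in a different currency than the receiving account
  When the batch is processed
  Then an exchange rate is obtained from the currency exchange system
  And the converted amount is settled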
Even if the story map’s flow is still quite simple, you can see the same principles at work as you’ve seen when starting from a user interface.
Slices for non-feature flows
There are some important user journeys that are often overlooked by UX and product because they do not have a very recognizable user interface. Installation and updates are a good example when talking about an app people will install on their phones or computers, but the same is true of installations of corporate software: it can be significant work to ensure these processes can be successful, and it pays to regard them as first-class citizens.
How would installation or update be represented in a story map? The steps in these processes are much more technical in nature than even the ones we just discussed for batch processing. That does not mean that you can’t find them, though, or discuss how the different parts of the process can be influenced by different situations and user data, and prioritized based on risk.
Let’s consider the situation of an application that is updated on a phone. While updating the application itself is handled by the system the phone provides and we can’t influence it, what happens when the new version is installed is up to us. We will need to deal with reading settings, perhaps updating the local data storage schema, using or invalidating any caching that we’ve done, re-using login status or forcing a new login, dealing with an update from the last version but also being able to jump from a version a year old, and then there’s the curious case of dealing with local data in a downgrade to a previous version.
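A few of those update situations, sketched as scenarios; the behaviour described is an assumption for illustration, and your app’s actual update rules will differ:

Scenario: Update from the directly preceding version
  Given the app is installed at the previous version with local data present
  When the app is updated to the new version
  Then the local data storage schema is migrated
  And the user remains logged in

Scenario: Update from a version that is a year old
  Given the app is installed at a version released a year ago
  When the app is updated to the new version
  Then all intermediate schema migrations are applied in order
  And any cached data that is no longer valid is invalidated

Scenario: Downgrade to a previous version
  Given the app has been updated and has written local data in the new schema
  When the user installs the previous version again
  Then the older version detects the newer local data and handles it without corrupting it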
Discovering these steps and writing them out is going to be mostly dependent on the input of the developers, but ordering the different situations into sets that you can develop, test and prioritize separately is, as always, a team effort. You can order them into different slices, and prioritize which ones need to be tested frequently, and which only when there is a relevant change.
The higher the cost of doing updates, the less frequently they will be done. And updates are crucial for any software system: without regular updates, security cannot be guaranteed. When it’s difficult to perform an update, you’ll do them less often, and you’ll be less practiced and slower at doing them. That means that security issues will stay in the system for longer, and that when there’s one that needs immediate action, you may not even be able to get a fix out fast enough to limit the damage.
Not being able to create and deploy an update whenever one is needed is one of the more important indicators you have a legacy system.
When dealing with corporate, or simply server-installed, software, the same considerations exist, plus an additional aspect of complexity when we need to deal explicitly with dependencies. In a modern services architecture, special care is always taken to ensure that each service is backward- as well as forward-compatible with its dependencies, including database schemas and configuration. In a legacy system, though, it is common that special care needs to be given during updates and installations to ensure that compatibility is maintained, sometimes requiring all components to be updated together. That sort of fragility is a prime reason that updates fail so often. Improving such functionality can be very high priority, and you can still use the story map to decompose the process, find out what is already in place and identify the work to be done. Analysis of such a process would need the inclusion of the system’s operations engineers in the mapping session, and their knowledge and experience of problems and failures of updates in the past is just as relevant for the discussion as the input of testers is elsewhere.
2.3 Organizing a mapping session
Now that you have a clear idea of the kind of results we are aiming for, let’s have a look at some of the particulars of organizing and facilitating a mapping session.
2.3.1 A map is collaboratively built
A story map is not a static document that the product owner builds. As with most activities related to requirements refinement in agile, these structures are built collaboratively, with all the different roles involved in the process. We involve the Three Amigos to make sure we capture the requirements from all points of view. This section will go into more detail on how to get everyone actively involved, and how discovery of existing features can be structured.
2.3.2 The people
I’ve already emphasized that to get a well-rounded view of the functionality, you need to involve different people in generating the story map. The concept of the Three Amigos, originally coined by George Dinwiddie, is a recurring theme whenever agile practices around requirements are discussed. The ‘three’ traditionally referred explicitly to the product owner, developer, and tester. In many early agile teams, those were the only roles available. As the concept of the ‘whole team’, or multifunctional team, got extended over time, other areas of expertise were included in the team, and there was often a differentiation between front-end and back-end developers, UX and design, operations engineers, etc. The core concept of the three amigos remains the same, though: get as many different points of view involved in the process as possible!
Get as many different points of view involved in the process as possible!
So, when you go into your story mapping session, that is exactly the goal. Start with the traditional three amigos of product, tester and developer, but add in people with different points of view, and knowledge of different aspects of the product. As examples, you could add people from support who know the impact of failures and defects on end-users, you could explicitly seek out representatives of (internal) users of the product, or developers from systems that we depend upon to understand how those impact our own logic and risk, designers from the UX team to include their view of the flows, and, as you’ve seen in the last section, people with specific knowledge for the feature under consideration, such as operations engineers and data scientists.
There is a strong preference for having these people together in a common session, as opposed to providing input separately. The objective is to quickly get as broad a view of the functionality as possible, and a significant part of the benefit of these sessions is in the common understanding of the system. There are, however, parts of the work that can very well be done by specific individuals, and can be worked on separately: in particular, the preparation for the story mapping session, and further investigation of open questions that come out of the conversation. The conversation, though, needs to happen. The conversation triggers thinking and highlights differences in interpretation between the different people involved, and that helps get to the truth. So even if individuals are investigating and preparing the mapping session, it is crucial to get them together to assemble the map.
The conversation triggers thinking.
2.3.3 The schedule for a story mapping session
That brings me immediately to the question of scheduling. It would be great if you could just get together and hash out the final, complete and comprehensive view of a feature in your first attempt. That is neither the goal, nor is it possible in any but the simplest of cases.
So, accept that, just as with the delivery of your feature, you work iteratively as well as incrementally. That means that while you prioritize which feature to work on, you also need to accept that you can’t, in one single step, do all the work to make the understanding and documentation of a feature comprehensive and complete. And the same is true for the work of getting to the different slices, or variants, of a feature: try to get a value-adding and risk-reducing overview as quickly as possible, but embrace that this will never be exhaustive.
Your first session will be a good start; it will not bring you the complete picture.
You start with a single session to discuss a feature. Sometimes, you want and need some of the participants to prepare. Developers could look at the code for related parts of the system. UX people might look at similar functionality elsewhere, in your own product or outside of your company. If your organization is one where there is little slack to provide room for that sort of work, you need to plan in some time for the people involved to do this. I’ve had situations where this is difficult, and raises questions. Oddly enough, extending the time reserved for the story mapping session, to the point where the preparation can be done in the session’s time-slot, is hardly ever questioned, so take that work-around into consideration.
A normal mapping session for a team that is experienced in doing them should not take more than sixty to ninety minutes. In general, for these types of sessions, it’s better to assume from the start that you’ll have multiple sessions on the same feature and to include that in your planning. You still give each session a clear and usable outcome, but those outcomes can be delivered iteratively.
When you are setting up your first story mapping session, take some extra time. I’d suggest planning two hours, and taking a half hour at the start to go through the goals, format and expected outcomes.
After that kick-off, the next step is generating the backbone of the map. Use the prepared materials, and generate the tasks quickly. I’d not expect that first attempt to take more than 20 minutes. As you’ve read in this chapter, you will then re-organize those tasks, moving them to activities or stories, with further discussions in the same session. There’s no need to be perfect in the first draft.
Generating different variants for each task can be a long process, and is one where I sometimes split the team up to generate options for different tasks before coming back together to discuss and extend that set. I’d not do that in the first few sessions you do with any given group, though, as it is much more important to get a common understanding of what a task, activity or story looks like.
The risk here is staying with one task for too long, so if you are facilitating the session, put a time-limit on each task, or simply a limit on the number of stories/variants generated for the task. It is more important to get the initial ideas and knowledge out than to get every detail correct.
Leave about 20 minutes at the end of the session to re-order the stories and generate different slices. That should be enough to get at least a few slices out. If there are many stories left under the line of the last slice you’ve been able to define, simply leave those for a future session.
2.3.4 The kick-off
The kick-off for the session is simply to re-state the goals and format of the session. If most of the participants are new to story mapping, this can take a little longer; if everyone has gone through the process before, it can be a really short reminder.
In the kick-off, describe:
The goal of the session: To agree on the main user journey for a feature, and discover the most important variations of it, with a focus on risk and testing
The format: User story mapping, with an overview of the story map and its characteristics, and the steps we will take to fill the map
The expected outcome: The backbone of the story map, and a set of prioritized and named slices/variants
The goal can be described briefly by introducing the feature you will be working on. It is really helpful to explain the priority decisions that caused this feature in particular to be picked up next. It’s important to emphasize the iterative nature of the process again (and again), and to emphasize that the aim is to get a highest-level understanding of the different variations of the path through the user journey’s flow.
Then go through the format and explain the different parts of a story map, starting with the tasks, then activities and stories, and how these are different levels of granularity of looking at the same thing: steps users go through in using the feature. Explain how the first step through the process is creating the backbone of tasks, before creating structure with activities and then generating variations for each task with stories.
The outcome is a natural result of the process described, but it is important to emphasize that the result will not be complete, that you are trying to find the most important (based on value and risk) variations, or slices, and that you will prioritize them during the session. Emphasize that part of the result of the session will be a list of open questions, including an agreement on how those questions will be answered before we have another mapping session together.
2.3.5 The conversation
The conversation during a mapping session is the most important part. The goal is achieving common agreement on what the functionality of the feature is and how it impacts risk and testing. The way that we achieve that agreement is through the mapping exercise.
Avoid too much detail.
During the conversation, there are a few areas to keep an eye on as the facilitator. The most important is the tendency participants will have to try to be complete at every part of the process. Long discussions can ensue over specific details. That can be about whether a task should be an activity, or a story, or simply going into too much detail about specific business rules. Often, it is enough to simply note that there is a business rule, and leave getting into the details of it for a later, example mapping, session, as I will show in Chapter 3 - “Example Mapping”. It is important to keep reminding the group that the process is iterative and that they will be able to come back to this feature when they see the need to do so.
Get everyone’s viewpoint.
Another issue to keep an eye on is the conversation becoming too one-sided. This is a general issue when facilitating group sessions, of course, but with our focus on getting multifaceted input from different points of view, it is extra important to pay attention to anyone who dominates the conversation. Sometimes the main input is from one person, and you do often start with a description from a product or UX person, but if there does not seem to be much input from the rest, it helps to direct explicit questions to specific people to check whether the current view of the map matches their knowledge of the system, the process and the needed data, and whether there’s anything missing. Again, once everyone is used to the format, it can be useful to split the group up to parallelize some of the work, and that can have the useful side effect of limiting the impact of that single voice.
2.3.6 The room
The advice for the room used to be very simple: ensure you have a big wall to put the cards or post-its on, and room for the group to stand around. Take pictures when you’re done, and if possible keep the map up to continue on in the next session. In the post-pandemic age, we need to include the possibility of arranging the session on-line, of course, and that set-up can have its own challenges.
In a physical space, the most important thing is to arrange things so everyone can see the map easily, and read every card on it. My preference is to have a large map on the wall, so people can walk along it, pick up cards and put new ones on the wall without getting in each other’s way. I have, however, also run sessions quite successfully with the map being on normal-sized cards laid out on a table, where people were standing in front of the table. There’s a little more jockeying for space to reach the different parts of the map, but it can work if you don’t have too big a group.
Of course, having the right supplies is part of preparing for the room. Make sure that there are differently colored cards or post-its for each type of map element: activity, task and story. I try and put the skeleton of the map up before the session starts, so that the form of it is clear from the start and I can point to the different elements while explaining them.
In the case of a remote session, the advice is the same: ensure the room is well-prepared, and the board is already set up. I’ve used tools such as Miro and Mural for this purpose, but the complexity of what you need is not very high, so most on-line white-boarding tools should work. Even though some tools, like Miro, do offer fancy support for story mapping, going as far as to automatically sync the map with your backlog management tool (like Jira), I would recommend not using that sort of advanced setup. The limitations of the backlog management tool will make manipulating the map more complex and difficult. With the many changes made throughout creating the story map, you will end up with confusing new stories and epics, with many changes, in your backlog management system.
Remotely, you will have fewer problems getting everyone able to see all the cards, and even fewer keeping enough post-its in stock. The remote circumstances do bring some other idiosyncrasies, though, and not just for story mapping. While it can be much easier to distribute the work of getting new information onto the board when we run such a collaborative session remotely, this can get in the way of the main purpose of the session: the conversation and common understanding. There are different ways you can deal with this. In your first session, or sessions, it is a good idea to facilitate in such a way that you centralize the generation of the story map. I do this by being, or assigning, a single ‘driver’ in the group: only the driver has control over the pen, or the virtual board, and can write the cards. This way, while you and the group are still learning the process, you will learn it together and build a common understanding of what constitutes a good task, activity or story.
Ensure you keep the conversation central.
In later sessions, once you’ve established those standards, it can be quite useful to have the whole team work collaboratively on the virtual board, and build the first version of it by all writing cards together. This feels a little less controlled, but can work really well to get through the first stage. As facilitator of the session, be very clear about whether you are doing a centralized or a parallelized session, and set a clear time-box for the generation of the first stage of the map. Then, bring the whole team back onto the same page by going through the just-created board, having the tasks explained centrally, and moving on to re-organizing the stories and tasks and defining slices/variants.
2.3.7 The record
With the story mapping session successfully run, you now have a story map! Now what? The map itself is, as we’ve mentioned, made to be updated as we learn more. But what do we do with the knowledge we’ve gathered, and how do we put it in a format that we can use effectively as we refine and build the feature?
First, the map itself. If it is possible for you, it helps to keep the map up. For a map in a physical location, keeping the map up on a wall, even if you need to move it to a different wall, will help when you need to refine it. If wall-space is at a premium, transferring a map to a couple of flip-over sheets to make it more portable will help in a pinch. Of course, for these purposes the digital version wins: no space restrictions! At the same time, that means you have significantly less visibility for your map.
2.4 Using the map going forward
With a story map, you have a good first step to understanding what a new feature might be. By involving a broad audience and putting all the ideas on the map, you’ve ensured everyone has had a chance to provide input, and by creating slices you have both created small and separately deliverable increments that you can use for release planning, and a clear basis on which you can make and defend scope decisions. I talk more about using the story map as a basis for planning in Chapter 6 - Planning.
At the same time, the individual stories we’ve generated are still just names on a post-it, with no additional detail added. To be able to actually do any development work on those stories, they still need to be refined. In the next chapter, Chapter 3 - Example Mapping, I go into detail on how to take those stories, dive into the business rules we need them to implement, and make those rules explicit using Example Mapping. The results of example mapping can then be used to capture the functionality formally, as explained in Chapter 4 - Formulating Scenarios, by creating acceptance scenarios for each example that can later be turned into automated tests.
3 Example Mapping
Now that you know how to generate different variations of a flow for a new feature using a story map, it’s time to dive into the business rules and logic of your user stories. To achieve that, I will explain the Example Mapping technique from BDD. Example Mapping is very effective at capturing business rules and helps you prepare to test those rules explicitly.
As I go into the specifics of applying this technique, I’ll discuss some of the history of BDD, the reasons behind the focus on concrete examples and how to arrange the sessions to get the right result.
- Business rules are where the detailed logic of the system can be found.
- Business rules are sometimes documented as use cases, acceptance criteria, test cases, user stories or examples and scenarios.
- Example mapping is used as a lightweight way to structure discovery of business rules, using concrete examples as the primary description.
- Good examples show the working of business rules by being:
- Concrete
- Specific
- Illustrative
- The most important aspect of example mapping is getting all the right people involved and focusing on the conversation.
- The results of the example map can be directly used to write scenarios that are the source of high-resolution, low-level tests.
At the end of the chapter, you’ll be all set to use example mapping with your own teams, and zoom in on the more complicated logic of your application.
3.1 What is a ‘business rule’ anyway?
Classically, business rules have often been communicated as ‘acceptance criteria’, or functional requirements. Over time, there have been different ways in which requirements have been formulated. One of the most common would look something like this:
- The application should be manually checked if the license age is less than 5 years.
- The application should be manually checked if the driver is not incident free.
A business rule is simply some decision, based on expected input, that is relevant for the user of the software and that the program is supposed to adhere to. Business rules can be big or small, in importance as well as complexity. There are many different ways to write down business rules, from informal notes, to use cases, to flow diagrams and of course: in code. Let’s have a look at some variations and examples.
3.1.1 Business rules can be big or small
A business rule can have a big impact, or a smaller one. For instance, the type of rules we were talking about above can have a big impact: accepting an insurance application, or not. That is an example of a rule that is truly relevant at the business level.
A rule like that doesn’t just have a big impact, it is a big rule itself: many different variables go into making a decision like that, separately and together. Figure - “As more variables are involved business rules become more complex” is a simplified selection of the large number of variables that can be involved with such complicated rules.
A different rule, for instance one a bank applies when processing a payment, might be smaller in complexity, but still significant in importance.
But, like turtles, it is business rules all the way down. When you get to more fine-grained rules, they are often not as specific to an individual business. An example of rules like that are the rules around phone number validation: a few rules on length and format, with limited impact.
3.1.2 Business rules can be composed of many rules
When the rules are more complicated, such as the insurance application, and there are many different variables involved, there is a high likelihood that we are not talking about only one rule. That sort of complexity is best handled by deconstructing it into a set of separate, related rules. There might be a rule around the age of the driver’s license, another rule around the age of the driver, and a few others around the age of the car!
When you’re building a new feature and are defining these business rules as you go along, it’s usually quite clear that you are talking about separate rules, and that all or some of them together contribute to the final decision that you need. You might even add them separately as the product gets expanded over time.
3.1.3 What does a business rule look like?
If you’re dealing with a classical waterfall organisation, you might be handed requirements in the form of documents and use cases. When you’re in charge of the process of discovery and refinement yourself, and take the agile approach, the focus will be on User Stories, with the details being documented as acceptance scenarios.
User Story
User Stories came out of Extreme Programming, but have taken on many forms since. In the section about behavior driven development, later in this chapter, we’ll go more into the history of the user story and discuss what makes a good one. For now, we’ll just look at the shape of user stories as we find them in practice.
As a <Role>
I want <Goal/Desire>
So that <Benefit>
Even though there’s a template (the Connextra template, to be precise) that everyone knows of, user stories are not very exactly defined. That is, as they say, not a bug, but a feature. Some people will just write the template, often incompletely. Some will have a large amount of information. Some will have test cases included. The story in many teams accumulates all the information relevant to the building of the software. Sometimes in a very structured way, oftentimes very much ad hoc.
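A hypothetical story in that shape, using the car insurance domain that returns throughout this book, might read:
As a prospective customer
I want to apply for car insurance online
So that I can get an offer without visiting an office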
Examples and Scenarios
Examples, or (Acceptance) Scenarios, are a way of describing functionality in terms of concrete use of it. Creating them is the main topic of this chapter, which will give some indication of how useful I think they are. When created as part of a BDD (Behavior Driven Development) process, this should result in very concrete examples that detail the expected behaviors of one specific business rule.
Rule: Only numerals and plus are counted towards the length of a phone number
Example: Brackets and spaces are ignored
Given a phone number entered as: '(0)6 235 534 04'
When the length of the number is calculated
Then the length will be given as 10

Examples like this are very much meant to illustrate the business rules that a story needs to implement. As such, they give a very detailed view of the individual rules, their possible inputs, and expected results.
The usual caveats are true here as well, though: there’s a good chance that the scenarios you encounter were not created very carefully, that they do not have the sort of illustrative quality you need, or that they get stuck in describing user interaction instead of business rules. When done right, they are the exact type of documentation of business rules that you need. As such, they are also the form of requirements that we are going to be constructing for our functionality. I’ll give many more examples of both good and bad scenarios in the next chapter, Chapter 4 - “Formulating Scenarios”.
Before I go into detail about how we get to useful examples, let’s have a quick look at where this shape of requirements comes from.
3.2 Behavior Driven Development
Behavior driven development has a lively history. Many different people contributed to the ideas that formed the practice as it is currently used. The term itself, like most good things in life, came into being out of discontent. At the time Test Driven Development had become more popular and as is often the case when ideas move into the mainstream, the ideas behind it became diluted. As that happened, Dan North figured that avoiding the word ‘Test’ might avoid much of the confusion and coined the term ‘Behavior Driven Development’ instead.
Looking at the background, the way we normally look at BDD has its roots in extreme programming’s ‘Customer Tests’: the idea that the best way to capture details about a customer’s (or user’s) detailed expectation of how a system will behave is to formulate those in terms of their way of verifying that expectation.
In XP, that type of test was initially just written in any shape that made sense to the customer and developer together. The developer would use those notes as the basis of the first (unit) tests they wrote when starting development. As time went on, there were some common ways of writing those customer tests, usually in terms of tables of input and expected output. Some tooling was created around that (Fit, and later FitNesse), and the term ‘Acceptance Test Driven Development’ came into being.
As part of the incredibly valuable work people like Dan North and Chris Matts did to refine these ideas, they recognized that this way of dealing with requirements was centered around communication and built a very effective process for the refinement of requirements from that idea. With that, more interest came in building tools that allow a (non-technical) human to understand and even write those scenarios.
This brought us to the creation of tools like Cucumber, and the now ubiquitous Gherkin scenario format. We’ll go into more detail about that format in the next chapter. In this chapter we’re focusing on the process with which we can generate the examples which can then later be written down in that more formal language. When we’re discussing examples, going into the detailed format can slow us down too much, so we use an in-between step that allows us to go through examples quickly.
Once I’ve gone through some examples of example mapping, and have given you a good idea of what that looks like, I’ll go further into the process of generating examples with example mapping sessions.
It’s important to understand that when we do example mapping to explore new functionality, the process and discussion around the examples is core. We do that in a structured way, and get results in a form that we can use later to check everything has been understood correctly, but at its core this is a communication process.
3.3 Example Mapping
3.3.1 History
A team that did refinement of user stories using a BDD process would usually start with a description of a story by a product owner, and then in their refinement session start writing the Gherkin style scenarios directly. I often did that with the whole team in front of a whiteboard. If the scenarios were many or particularly complex, it was often easier to simply draw a table on the board to get all the cases clear. An advantage of using a table was also that it was easy to spot cases (combinations of inputs) that you might have missed.
This works really well, and the process of formulating the scenarios has interesting effects in and of itself: the words used can have a different meaning for different people in the team, and going into this sort of detail with the whole team can surface that sort of misunderstanding very quickly:
We were all set to refine the story about adding filters to the main search. The PO described it in plain terms: we need to add a distance-from-location and a category filter to the main search. The development team nodded, and we got to business. I started writing a ‘Given’ on the board, and asked for them to give me the preconditions. The product owner started to describe that there were four possible variations: both filters not set, just location, just category and both in use. One of the front-end developers shook their head: “No, there’s also the other filters. We need to discuss how distance-from-location works if the region filter is also set, for instance. And if we combine category with some of the others, we’re bound to get many situations with no results, so…”
The PO looked confused. “But we don’t have any other filters on the main search!” “Sure we do!”, the developer responded. Then another, back-end, developer jumped in: “Of course we do, all search goes through the main search!” This caused the first developer to look confused too. This discussion went in circles a few times, until I interjected, and asked the PO to show us the ‘main search’ on the screen. “On the main page, of course!” After he shared his screen and showed us on the big screen, it turned out he was talking about the small search box on the site’s home page. “Oh, no!”, the front-end developer exclaimed, “for us the main search is the search page that has all the possible search options!” “Ah! No! That’s not what I meant!” Then the back-end developer piped up: “And for us, we actually have a main search API end-point, that also has all the options, and is used by both of those pages.”
After a short interval in which we defined new terms for each of these interpretations of ‘main search’, and agreed on explicit naming changes in the code for them, we could continue on with writing the scenarios. If we had not discussed the terms, we simply would have built the functionality elsewhere, and made it much more complicated than necessary.
Though very effective in bridging the communication gap (see Gojko Adzic’s “Bridging the Communication Gap”), these sessions were often seen as ‘inefficient’. Admittedly, they were sometimes quite time-consuming. They saved enough time later in the process to easily be worth the investment, but for many teams this was a reason not to use BDD.
Matt Wynne came up with a variation of the discussion around scenarios that was more focused on quickly finding the different scenarios for a story, leaving the discovery of a common language and the formulation of the examples into Gherkin to another part of the process. This way of ‘Example Mapping’ (see “Introducing Example Mapping”) streamlined the refinement work, and it has become the preferred way of working with BDD.
3.3.2 Structured communication
Example mapping provides a clearer structure for the breakdown of a user story. We still start with the story, and then we recognize that there are a number of business rules for that story. Each of the rules can be illustrated with one or more concrete examples, showing when the rule leads to one outcome or another.
What do those different parts look like? Remember that this technique is used to guide and facilitate the discovery of the details of requirements: the structure of the map is very clear, while there is a lot of leeway in the contents of the individual parts.
The ‘Story’ is a story card in the classical sense of the idea: a simple placeholder around which we have a conversation. That means it is perfectly fine for that placeholder to be very short and to the point. ‘Phone number validation’, for instance, or ‘automatic acceptance’ for our car insurance work.
For every ‘Story’ there are one or more ‘Rules’. Business rules. Acceptance criteria. A business rule represents some aspect of the logic that is part of the story. That can be something very small and clear-cut, such as ‘a phone number is never more than 10 digits long’, or a higher level and more complex rule like ‘if a driver has had their license less than 2 years, they can’t be automatically accepted for insurance’.
Very often, we know some of the rules, but only discover the full extent when we enumerate and discuss them. For instance, if a phone number can only be 10 digits long, what does that mean for characters that are not digits? Are there separate rules for those? And if we don’t use the international ‘00’ dialing code, but a ‘+’ sign, does that count for zero, one, or two digits? We can have a lot of fun getting into the nitty-gritty for even those simple rules, so you can imagine how the set of rules might grow for anything more complex.
In fact, it happens very often that you don’t even realize that a rule might spawn new ones, until you start trying to think of a few concrete examples for the rule. Sometimes, you only need one example, but often you need two or three. If there are a lot of different examples, you can probably split up the rule into more specific ones.
Each example is there to illustrate the effect of a rule on an outcome. If you start with a rule like ‘a phone number is never more than 10 digits long’, you might start with an example that shows a phone number that is exactly 10 digits long, ‘1234567890’, and say that is a valid number. The counter-example would be something that is 11 digits long, ‘12345678901’, with an ‘invalid’ outcome. Of course, when you write down those numbers, the first thing you notice is that they do not look like phone numbers. As soon as you make them look like a phone number, such as the Dutch (made up) number ‘0231234567’, someone might say: “People usually put a dash after the region prefix”, making it ‘023-1234567’. That still fits the rule, but only if there is another rule that says ‘dashes are ignored’, or something similar. Step by step, you generate the different rules, including examples that are as real as possible, to try and get a complete picture.
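To make that concrete, here is a sketch of how those rules and examples might end up on the cards; the dash rule is one possible outcome of that conversation, not a given:
rule: a phone number is never more than 10 digits long
example: ‘0231234567’ ✓
example: ‘02312345678’ ✗
rule: dashes after the region prefix are ignored
example: ‘023-1234567’ ✓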
As part of Example Mapping, then, we tend to discover new rules. We might also raise questions that we don’t know the answer to, and which have to be investigated outside the session. As you can see in the illustration, that is a normal part of the process. We also might think of new rules, but decide that those are too far outside the scope of this story, and create a new story for those. Sometimes we just create the new story, and plan to do an example mapping session for that one later, or sometimes we’ve already generated the rules and examples and simply split them off for a separate story. We are flexible in that.
In Figure - “An example of business rules with examples” you can see I didn’t write down the examples in any detail. Just the phone number input is given, and a checkmark or cross to indicate whether it would be accepted as a valid phone number. This is indicative of how you deal with examples: they should be short and to the point, and be written down in such a way that the people who are in the mapping session understand and can explain each example. Sometimes you do need more, and include the steps to take, responses to expect or other detailed data. Sometimes the example is just a quick note or even a drawing. As long as it’s specific and clear, that is all acceptable. I do get more formal, but only later when I formulate the scenarios based on these examples.
3.4 What do good business rules and examples look like?
To get a better feeling for where you need to take your rules, let’s examine in more detail what makes for good examples.
3.4.1 Good rules
Rules are normally the easier part of this. In a regular example mapping session, the product owner tends to come to the table with a story, and some of the requirements, or acceptance criteria, that they know will be needed. During the session, we’ll likely find some more, but we have a place to start from.
When you are dealing with existing functionality, it pays to do the same: do not expect to be complete, but bring what you already know or expect and take it from there.
Come prepared, but don’t expect to be complete.
The form that rules take is not terribly important. They should be fairly short and as descriptive as possible. In many cases, rules take the classical form of requirements, and are formulated starting with the word ‘should’:
- Should not be longer than 10 digits
- Should accept automatically if the driver’s license is older than 2 years
- Should show the longest words first
There’s no set form for this, and anything you use that is clear to the whole team is fine. It’s easy enough to just leave out ‘should’, and keep the same level of expressiveness. The important thing is that rules are named, and that the name describes the specific business rule.
3.4.2 Good examples
Which brings me to examples. Examples can also take many different forms. After all, they are meant to capture a conversation. But there are definitely some standards a good example should adhere to.
Let’s start with the first: a good example has a name that describes the variation of the rule that the example illustrates.
One way Matt Wynne recommended for this was to use the ‘Friends episode title’ structure. “The one where…”
- The one where the phone number is too long
- The one where the phone number is too short
- The one where the driver’s license is not old enough
- The one where a shorter word is shown before a longer one
The titles are not the example, they just give us a sense of context for the example, so we know which case the example is illustrating. There can be multiple examples for any rule, and with a descriptive title we can easily understand why each example is there.
As with the rules, the form of ‘The one where…’ is just to help you get to a title easier, but it’s not required. Use what you need to make the title clear and understandable, and you’ll be fine.
The example itself can be written in different forms. It can be in the form of bullets showing different inputs, actions and expected outcomes. It can be a row in a table drawn up for the rule, with different rows per example. It can be a drawing, even, if that makes things clearer.
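For instance, the phone number length rule could be captured as a small table, one row per example (a sketch, not a prescribed format):
| phone number  | valid? |
| ‘0231234567’  | yes    |
| ‘02312345678’ | no     |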
Concrete
When you discuss and write an example, you want it to be as concrete as it can be. You should use actual examples of data, not just the mention of them. All the data that is specific for a particular case should be in there too.
For a phone number validation, for example, that means you use an actual number:
title: the one with area prefix, which is always ‘0’
example: ‘0612345678’ is valid
title: the one with no prefix
example: ‘6123456789’ is not valid
title: the one with international prefix ‘00’
example: ‘0031612345678’ is valid
title: the one with international prefix ‘+’
example: ‘+31612345678’ is valid
When we show this in the way it would likely be written on post-its during an example mapping session, something like the following figure would be the result. As you can see, the notation is shorter, but it’s clear what each example is about and how it is related to the rule.
Another thing you notice more easily in this form is that there already seem to be two sets of examples for the rule: two dealing with an area prefix and two dealing with an international prefix. An indication that maybe we are actually talking about two separate rules instead of one single rule.
Specific
At the same time, you want the examples to be specific for the rule. For instance, the example titled ‘the one with no prefix’ above does still have the expected number of digits for a non-international number. The rule around the number of digits is another one, and we don’t want to have one example showing the results of both rules!
If we refer once more to the case from our friends at InsAny, where they found the table depicted in Figure - “Different cases for applying for car insurance”, repeated below, we have a number of different examples there that are not specific.
If you’d translate that directly into examples, you would get:
rule: should not automatically accept if the license age is less than 2 years
title: license is less than two years old
example:
driver age: 18
license age: 0
co-driver age: 20
car model: Ford F-150
car age: 9
car current value: 7500
incident free: yes
needs manual check: yes
rule: should not automatically accept if the driver’s age is under 25
title: driver is less than 25 years old
example:
driver age: 18
license age: 0
co-driver age: 20
car model: Ford F-150
car age: 9
car current value: 7500
incident free: yes
needs manual check: yes
rule: should not automatically accept if the car’s age is more than 3 years
title: car is more than three years old
example:
driver age: 18
license age: 0
co-driver age: 20
car model: Ford F-150
car age: 9
car current value: 7500
incident free: yes
needs manual check: yes
These very obviously do not make for very good examples: they use the same data, the same example, to illustrate three different rules! The name of the example is the only way we can see what its intention is, and which data that is included might be relevant. The example itself is just too broad.
If we pare down the data to only what is relevant for the specific rule and example, we get a much clearer view of what is happening.
rule: should not automatically accept if the license age is less than 2 years
title: license is less than two years old needs manual check
example:
license age: 0
needs manual check: yes
rule: should not automatically accept if the driver’s age is under 25
title: driver is less than 25 years old needs manual check
example:
driver age: 18
needs manual check: yes
rule: should not automatically accept if the car’s age is more than 3 years
title: car is more than three years old needs manual check
example:
car age: 9
needs manual check: yes
Illustrative
Though the last examples are concrete and specific, using real data and only the part of the data necessary for the specific example, they are not yet illustrative. By illustrative we mean that the example needs to show under which conditions exercising the rule results in one outcome or another.
title: car is more than three years old
example:
car age: 9
needs manual check: yes
If we take the rule around the age of the car, it is not apparent from the example at exactly which car age the rule starts to require a manual check. If we change the example to be around that condition, and pair it with one that results in the opposite outcome, we know exactly how the rule will play its part.
title: car is less than three years old does not need manual check
example:
car age: 2
needs manual check: no
title: car is more than three years old does need manual check
example:
car age: 3
needs manual check: yes
In this case we see only the concrete data that is relevant for the rule, and see examples of the different outcomes that illustrate when the result will be either one or the other. For instance, it becomes very clear that ‘more than three years old’ can be interpreted as the car age in the system being ‘3 years or more’. A programmer not very familiar with the domain might otherwise interpret ‘more than 3 years old’ as meaning the value of ‘car age’ in years should be higher than 3.
3.5 Organizing Example Mapping Sessions
With a clear picture of what you need to look for in terms of output, let’s go through the form an example mapping session takes and what to pay attention to when facilitating a session like that. Some of the considerations are the same as I’ve discussed in the context of a story mapping session, in Chapter 2 - “Story Mapping” - “Organizing a Mapping Session”, and in those cases I’ll summarize and refer back to that.
3.5.1 The people
As with story mapping, for an example mapping session you need the ‘Three Amigos’, with all the different viewpoints on the system represented. The input of testers and developers cannot be overstated.
The conversation is the thing. Get everyone together.
As mentioned in Chapter 2 - “Story Mapping” - “The people”, the conversation is the thing. As you’ve seen in this chapter, context is very important for understanding the business rules as they are intended. If you don’t connect the dots between rules at a low level and the broader functionality that they serve, you will not have the understanding of the functionality that is needed to confidently implement it.
3.5.2 The schedule
Again, much of the story here is similar to that for story mapping. Especially in the beginning, you will be left with many open questions in your example map. You will have to do additional research, talk to users, stakeholders and subject matter experts, or dive into documentation, the requirements of dependent systems, and your own tests and code. This is normal. As you go on, the process will get faster and more predictable.
Because of that, it’s good to do a kick-off session with the team(s) involved, discuss what you’ll do, what you want to achieve, and how you can do research together. That first session should probably be ninety minutes, to allow for discussion and alignment. It’s best to do a first example map as the closing of the session, so everyone gets a practical introduction.
When planning any example mapping session, make sure the team knows the story or area of functionality that you will be looking at. This should normally come from the context of an existing story map. Knowing what’s coming will make it possible for people to do some research, making it more likely you’ll get useful results from your session. Once you get in an established rhythm of example mapping, with the types of research becoming familiar, you’ll get to a point where your team knows what they have to do to get to the bottom of things.
Those regular example mapping sessions themselves can be very targeted and streamlined. For new functionality it is often enough to spend about 25 minutes on the example map for a new story. Teams often do a few in a one-hour session. Some stories go very quickly, and others take more time.
Each session starts with a quick exploration of the story, and its context. This usually means a quick look at the story map, to show how the story you are looking at is part of the whole of the feature. At the same time it shows what you will not be looking at: the other stories on the map will show what is definitely not in scope for this one.
As soon as you have the context clear, the next step is writing any (seemingly) obvious rules and putting them on the board! This does not have to be exhaustive, though. It can’t be! You then go through each rule and create examples for it, adding new rules and questions as you go along. Newly split-off stories are also possible, and it is a decision in the moment whether to do example mapping for the new stories immediately after the current one, or to schedule them for another time. Sometimes you have the necessary context already, and the time to go into it. Sometimes you don’t.
At the end of the session, take another look at the rules, and decide whether some of them should be re-organized into separate stories, if their contexts are different enough.
3.5.3 The kick-off - a first example mapping session
The first kick-off session for example mapping with a team that is not used to the practice is meant to make it clear to the team why you are doing this, what the format of the sessions will be, how you’ll be using the results, and how that is going to be helpful to everyone involved.
This means that in the kick-off session we discuss:
- The goal of the practice: To find the details of the functionality of the system and create a base understanding that we can build tests on to allow us to confidently implement the user story.
- The format: Example Mapping, with an overview of an Example Map and its different elements, and the steps you will go through to generate rules and examples.
- The expected outcome: A set of rules, each illustrated by one or more examples, along with concrete open questions and potentially new stories.
- The follow-up: Formulation of the examples, into concrete test cases suitable for implementation (see Chapter 4 - “Formulating Scenarios”).
As we already have a story map, it is then useful to walk through it to show where in that context the initial session(s) will play a role.
Use the previous sections to help describe the challenges of building an example map, explain that you expect some research to happen before each session, and make clear that some questions will still come up for stories and will require further effort and clarification.
3.5.4 The conversation
When facilitating the conversation during example mapping, it is important to be clear about the way you expect the team to work through the map. One risk that you will always run into when discussing a system is that people get stuck in disagreement and discussion. Make sure everyone understands that you will be cutting discussions short and capturing the disagreement as a question on the map: look for data, not opinions, and take the time that is needed for that.
One step at a time.
The flow through the map is meant to be progressive, in that you want to go rule by rule, and move forward through the map. You might start writing a few rules first, sometimes brought by you, or sometimes so obvious that they get brought up in quick succession. But then quickly move to the very first rule and try to generate all the necessary examples for that rule. As soon as you all think those are complete, go to the next rule, and find the examples for that.
Focus.
At no point do you split up the team to work on multiple rules at the same time. That would prevent the shared conversation with multiple viewpoints that you need. It will also invariably result in duplicate work, different styles of examples, and going back and forth between rules as people get distracted. To ensure focus is kept throughout the session, it is very helpful to simply designate one person as the ‘driver’, who writes all the notes. That way, you can make sure everything is visible for the whole team, and everyone agrees and understands the resulting examples.
Move forward: reach consensus when you can, but accept consent when you can’t.
For each rule, and each example, go for consensus in the team that this is a real and proper example for the system. If someone disagrees, explore the disagreement, find its core, and either decide, or raise a question. If necessary, you can accept consent if consensus is not possible: if some of the team does not completely agree, but it’s not possible to get to the core, and they are not against the example, just not for it, you can still go forward with it in place.
Add examples, rules and stories as you go along.
Quite frequently, as you are thinking up examples for one rule, you realize there’s another rule that is part of this story, that you don’t have on the map yet. When that happens, add it. Sometimes you will get some discussion at that point about whether that rule is part of this story, or should be in another one. If that ‘other story’ is not yet on the story map, you simply create it on the fly and put it on the example map. The story map is updated to include the story after the mapping session.
Smaller is better: split where you can.
At the end of the session, we have a number of rules, some open questions, and perhaps some stories to add. At that point I always look at the rules we’ve found and see if there are obvious clusters of rules that naturally belong together. When there are clusters, you can use those to split the current story into a number of smaller ones, one for each cluster. This is very important for new functionality, as it gives you smaller pieces of work and a more iterative approach to building it. In other words, the more, smaller steps you can define, the more options you create for iterative and incremental delivery, and for limiting scope.
3.5.5 The room
For the room, both physical and remote, the advice for an example mapping session is the same as for a story map, and the contents of Chapter 2 - “Story Mapping” - “The Room” is just as applicable here: Prepare the board, have a single driver that writes the notes, and make sure to record the results.
3.5.6 The record
When we are done with the example mapping session, we have an example map. We may have open questions, in which case we will want to return to the story in some future session. If the map is reasonably complete we take the map and prepare to use the results.
Until you’ve completed the map and processed the results, keep a picture of the map, or store a copy of the digital board, to refer back to. The examples will be processed into a more formal form, as we will discuss in the next chapter Chapter 4 - “Formulating Scenarios”. Once that is done, the example map no longer needs to be kept. Unlike a story map, it is not something we come back to and update, or use to keep track of our progress. The resulting scenarios will end up as part of our system, in the codebase, so we won’t lose the results.
3.6 Priorities
With this relatively lightweight way to discover detailed requirements, in the form of rules and examples, you now have the information necessary for defining and validating new functionality using high-resolution testing.
How do we proceed from there? As discussed, we’ll be using the examples we’ve found to formulate scenarios illustrating the functionality, and the development team can use those to base their tests on. That way we can validate the functionality, and ensure it works as expected when we deliver it. Even if you do not use this form of test-first development, having this very concrete description of the expected behavior of the system will make it much easier to know what should be built.
In other words, you could decide you don’t need the scenarios formalized. This is not something I recommend, though. Formalizing the scenarios gives you a much-needed feedback loop, focuses you on the domain language, and makes the move to automating these examples as low-level, high-resolution tests much easier. So let’s go on to the next chapter and find out how to go about that.
4 Formulating Scenarios
The examples you discover using example mapping are useful in and of themselves. They give insight into the details of your business logic. As with other documentation of requirements, though, that is not enough. To go from that documentation to verification of your assumptions about that logic, you formalize the examples into structured acceptance scenarios. That formal representation is valuable for different reasons, but most importantly it allows you to implement the scenarios as tests that can be executed against your system to verify that what you implement is indeed the intended functionality.
In this chapter you will learn how to write those scenarios, and what different versions of them you might find in the wild. I will show a clear path from example to scenario, how to organize scenarios and use them to generate system documentation.
As I’ve been doing throughout, I highlight how to prioritize the formulation and automation of these scenarios against those for other stories, as well as against further expansion of the described user journeys. This chapter is also the last element of the refinement cycle, after which you are ready to start implementation. I’ll discuss a bit of how these scenarios can be used to implement tests. Not because you as a product owner need all the detail around that, but because a basic understanding will help in communication with the engineering team, and set realistic expectations for these tests.
- When we take examples and formulate them into scenarios, they take on the shape of a test.
- Journey scenarios are good descriptions of a user journey, but not good descriptions of a business rule.
- Illustrative scenarios clearly demonstrate the purpose and functioning of a business rule.
- The ‘Gherkin’, Given/When/Then format is the most popular way to write scenarios.
- Good scenarios are BRIEF:
- Use Business language so everyone can understand them
- Use Real data to make it easy to link to real use
- Are Intention revealing: show what, not how
- Are Essential, in that they leave out anything not relevant to the specific business rule
- Are Focused, in that they only show the functioning of one particular rule
- Are BRIEF, no longer than absolutely necessary
- Automated low-level, high-resolution tests are much easier to write and maintain than high-level, low-resolution ones.
4.1 Confirmation (why)
In the previous chapter I described how to use example mapping to structure the conversation around business rules. As you’ve read, it can give you the insight and documentation that is otherwise so often missing.
If you think back to the three components of a user story in terms of “CCC: Card, Conversation and Confirmation”, we are quite a long way towards having that complete. It can still be cumbersome and time-consuming to use those examples as a basis for manual testing. And not just that. The form the examples take is enough to know the shape of the tests that must be performed, but only if you were part of the example mapping session, and understand the context and form in which the examples were written. We still have not fixed the issue of documentation being of only limited use!
4.1.1 Examples become scenarios
This is where we move on to the next step, where we write down those same examples in a structured and well-defined way. So structured and well-defined that it becomes possible to read those examples from code and use them as the basis of an automated test. The example becomes an acceptance scenario, and can actually be run as a test to verify that any changes made in the code do not change any of the existing business rules.
Over time, there have been different ways in which examples have been formalized. I described some of them when I discussed all the different forms of requirements one can encounter. Basically, there is the Fit/FitNesse table-based structure, and the Cucumber/Gherkin formalized language form. Both are formally structured so that, even though they are easy to read, they are also easily used to control the execution of a program. A test.
In that way, scenarios and examples are always in the shape of a test. And that shape is also the archetypal form of any computer program: input, processing, output. Programmers have used simple mnemonics to keep that form in mind in the case of tests: AAA, for Arrange, Act, Assert. GWT, for Given, When, Then. The structure is always the same, though:
- Arrange / Given / Input: the test depends on specific inputs to be able to run, and we need to prepare that input before we can run the rule we want to exercise
- Act / When / Processing: the test needs to call the actual logic we are trying to test, otherwise what’s the point?
- Assert / Then / Output: once the logic has run, it gives us a certain result, and we need to verify that we get the result that we expect
This simple structure is the shape of any test. Of any computer system, really. That means it is also the structure of the scenarios we build from our examples.
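As a sketch, the phone number example from the previous chapter maps onto that structure like this (the comments are only there to label the parts):
# Arrange / Given / Input: the data the rule needs
Given a phone number entered as '+31612345678'
# Act / When / Processing: the one action that exercises the logic
When the number is validated
# Assert / Then / Output: the result we verify
Then that number is seen as valid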
In the previous chapter I said that the form of the examples you create during example mapping is very much free, and it is. For some purposes people have even used drawings to make the examples quick to write and understand. Now, though, we need to translate that freeform back into a structured textual form. That can be difficult, but is necessary. It also brings advantages. One advantage is that we need to consider language, and build our domain language. Another is that in textual form, we can store our scenarios with our code. We can read them and run them as tests. And in that way we can make sure that this form of detailed documentation is always up-to-date.
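Stored with the code, such scenarios typically live together in a ‘feature’ file, grouped per feature and rule. A minimal sketch, reusing the phone number rule from earlier:
Feature: Phone number validation
Rule: Only numerals and plus are counted towards the length of a phone number
Example: Brackets and spaces are ignored
Given a phone number entered as '(0)6 235 534 04'
When the length of the number is calculated
Then the length will be given as 10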
4.1.2 Journey Scenarios vs Illustrative Scenarios
You may have had experiences with ‘BDD’ that have left you with unfavorable opinions. The most common reason for that is that very often the form of our scenarios (Given …, When…, Then…) is used to write things that look like the following:
Scenario: Apply for car insurance
Given a logged in user
When the user clicks on apply for insurance
And the user clicks on car insurance
Then the personal details screen is shown
When the user enters their name and date of birth
And the date of birth older than 2006
Then the driver's license screen is shown
When the user enters their license number and date
And the date is more than 5 years ago
Then the car details page is shown
When the user enters the car license plate
Then the car's make, model and age is shown
When the car's age is less than 3 years
Then the insurance package offer is shown
When the user clicks accept to the offer
Then the payment details page is shown
When the user selects credit-card payment
Then the credit card payment flow is run
When the payment authorisation is successful
Then the insurance package legal documents are shown
And the application is confirmed
And the insurance packages overview page is shown

If you’ve made it through the book this far, you probably recognise the above as an example of a high-level, low-resolution test. It’s very much the shape of what we have been discussing as ‘user journeys’. Every time I have discussed those user journeys, I’ve made a point of emphasizing that there can be good reasons not to automate them. Or at least not to create too many of them.
These sort of Journey Scenarios tend to be implemented through the user interface, and expect a full, running, system to be there. They are expensive to create, due to the complexities of controlling a user interface from code. They are also expensive to run, due to the cost in time and money of creating a full running environment with just the specific data that we need, a client, and the slow response times from such a full system.
Mostly, though, they are unreliable and imprecise: if a test like that fails, there can be many different reasons, because so much code is executed as part of the test! When we have a test failure, there’s a good chance the actual logic we were trying to test was never actually run!
In contrast, our examples from the previous chapter were all very lean and targeted: there’s really not much information in them that is not particular to the very rule that we are interested in. We call those examples, or at least the resulting scenarios, Illustrative Scenarios, to emphasize that they illustrate some specific business logic.
A possible illustrative scenario that is touched upon in the journey above is:
Example: A driver's license older than 5 years is accepted
Given a driver's license that is 6 years old
When the license is used to apply for car insurance
Then that application can be accepted

It’s immediately clear what this scenario is supposed to illustrate, and test. This is the type of scenario we are interested in to document business rules. Importantly, this is also the type of scenario that can be implemented (automated) in a way that just exercises the code that we are interested in, the code that implements the specific business rule, while leaving all the other code untouched. That means it can be easier to implement, and much easier, faster and cheaper to run.
Example: A car less than 3 years old is accepted
Given a car that is 2 years old
When a user applies for car insurance for that car
Then that application can be accepted

4.2 The formal shape of a scenario
To show the form of a scenario, let’s see what the formal representation of our examples needs to look like. And what variations of it we have that can make our lives easier.
The ‘Gherkin’ format uses the shape of a scenario as described in the previous section, but there are different parts of the format that we can use to stay clear and concise. This is especially important when there is a larger number of different cases that we need to represent. But let’s start with the basics.
4.2.1 Given, When, Then
The basic shape of a Gherkin scenario is as follows:
Example: The one with the international prefix '+'
Given a phone number entered as '+31612345678'
When the number is validated
Then that number is seen as valid

The importance of the title, as I emphasized in the previous chapter, returns with the ‘Example’ heading. Gherkin allows that heading to be called ‘Scenario’ as well, which was its original name, but nowadays it is recommended to stick with ‘Example’. As before, the title is important because it makes the intent of the scenario clear. Without it, even short and concrete scenarios can be difficult to interpret.
In the ‘Given’ section we give the setup for the example. That means that all the input, or preconditions, for the scenario, are in the ‘Given’ section. In the case of the phone number, there is only one input, so the ‘Given’ is very simple. A more complicated rule could require more different inputs, though, and you can either include those in the same line, like this:
Given the country code is '+31' and the phone number is '612345678'

or spread the different inputs over multiple lines, like this:
Given the country code is '+31'
And the phone number is '612345678'

We can have as many ‘And’ lines as we want. Those lines can also start with a repeat of the ‘Given’ keyword, but ‘And’ simply reads more naturally.
The execution of the business rule’s logic is described in the ‘When’ line. Contrary to ‘Given’, there can only be one of these. There is only ever one action in a scenario. To make sure we are very clear about the difference between ‘input’ and ‘action’, we do not provide any input data with the ‘When’ action.
When the number is validated

Most of the time, we can link the ‘When’ action to a trigger by the user. We could, for instance, rephrase the above to the slightly more generic:
When the user submits their phone number

There is a risk of this becoming cumbersome when the action the user performs is very far removed from the specific business rule we are exercising. My preference is for specificity over strict adherence to a specific user action. After all, the phone validation could be called from different screens, both automatically while the user is typing in a number, and in the back-end when validating a complete set of provided data.
The advantage of centering the action around the user is that it can be formulated in an active voice. ‘When the user submits’, not ‘When the number is validated’. That makes the step easier to read and understand, so I prefer to use that whenever possible. In this case the phone number validation is quite far from a user action, and I opted to omit the user from the scenario.
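For comparison, a sketch of the same scenario with the user-centred, active-voice phrasing would look like this:
Example: The one with the international prefix '+'
Given a phone number entered as '+31612345678'
When the user submits their phone number
Then that number is seen as valid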
The validation, checking for expected results, happens in the ‘Then’ section. Like ‘Given’, there can be multiple of these, but we do prefer to keep this as short as possible. The ‘Then’ often does include specific data to check, even if the results we check for can simply be a ‘yes or no’ response.
Then that number is seen as valid

A frequently seen example of multiple results is when a negative response is accompanied by a specific message.
Example: the one with no prefix
Given a phone number entered as '6123456789'
When we validate that phone number
Then the number is not seen as valid
And the user is given a message telling them to 'add a prefix'

For both the ‘Given’ and the ‘Then’ it is important to keep in mind that while you can have multiple lines, we do not want multiple lines if we can possibly do without. Short and simple is better, but more importantly, relevance to the specific example is crucial. We’ll go into more detail around that in the section “A good scenario is BRIEF”.
4.2.2 Tables
The data can be complex simply because there are a lot of different values that together make up the input of an example. There’s another case that can make things more difficult, though: when multiple values of the same type are needed as input. For instance, when applying for car insurance, a user might need to search for car brands, and there is a scenario for the search function’s rules.
Example: the one where just models of the wanted brand are found
Given the following available car models
| Brand | Model |
| Toyota | Prius |
| BMW | X3 |
| Toyota | Camry |
When the user selects the brand 'Toyota'
Then 2 models are found

Note that while we do provide a list of car models, we do not provide an exhaustive list of brands and models. There’s just enough data to show that a different brand is not returned.
Tables like these can also be used in the ‘Then’, but as with ‘And’ lines, it is not recommended to have multiple outputs. There are exceptions to that rule, though. If you wanted to show that the specific models are found, you could write this as follows:
Then the following models are found
| Prius |
| Camry |

4.2.3 Outlines
When you are writing many different examples for a rule, the scenarios can look very similar with the same variables provided as input, albeit with different values. In those situations, it can be a very attractive idea to combine different examples.
Let’s take the following list of similar scenarios.
Example: the one where the driver is less than 25 years old
Given the driver has Driver Age of '24'
And the driver has a Licence Age of '5'
When the driver applies for car insurance
Then the application does need manual approval
Example: the one where the driver is 25 years old
Given the driver has Driver Age of '25'
And the driver has a Licence Age of '5'
When the driver applies for car insurance
Then the application does not need manual approval
Example: the one where the driver is older than 60
Given the driver has Driver Age of '60'
And the driver has a Licence Age of '5'
When the driver applies for car insurance
Then the application does need manual approval
Example: the one where the driver has not had a license for 5 years
Given the driver has Driver Age of '25'
And the driver has a Licence Age of '4'
When the driver applies for car insurance
Then the application does need manual approval
Example: the one where the driver has had a license for 5 years or longer
Given the driver has Driver Age of '25'
And the driver has a Licence Age of '5'
When the driver applies for car insurance
Then the application does not need manual approval
It would be possible to write this in a shorter form.
Scenario Outline: driver and license age rules
Given the driver has Driver Age of <driver age>
And the driver has a Licence Age of <license age>
When the driver applies for car insurance
Then the application <approval> manual approval
Examples:
| driver age | license age | approval |
| 24 | 5 | does need |
| 25 | 5 | does not need |
| 60 | 5 | does need |
| 25 | 4 | does need |
| 25 | 5 | does not need |
That saves a lot of typing, doesn’t it? But if you look at the second version, it should be clear that some information is lost in this form. Where the separate examples each had useful and descriptive titles, you do not have that additional context in the outline. In fact, though the shape of the data is the same, there’s really two different rules being exercised.
It becomes much better if we split those two.
Scenario Outline: driver age between 25 and 60 can be automatically approved
Given the driver has Driver Age of <driver age>
When the driver applies for car insurance
Then the application <approval> manual approval
Examples:
| driver age | approval |
| 24 | does need |
| 25 | does not need |
| 60 | does need |
Scenario Outline: license less than 5 years needs manual approval
Given the driver has License Age of <license age>
When the driver applies for car insurance
Then the application <approval> manual approval
Examples:
| license age | approval |
| 4 | does need |
| 5 | does not need |
In this way, the title of the ‘Outline’ gives enough context to understand the meaning of the different examples in the table. And the outline only has the data needed to illustrate the examples that are attached to it. In the combined version, even though both ‘Driver Age’ and ‘License Age’ were provided, each was only relevant for a subset of the attached examples, and never in combination.
When you create tests for existing functionality, you will often run into this sort of situation. There’s already many different rules implemented, and the input to a part of the system’s logic will often have many variables. Some only relevant in isolation, others in combination (what if approval were automatic if either one of the driver or co-driver’s licenses was 5 years old?). We are trying to determine not just all possible different combinations of inputs and outputs, but we want to know the reason for specific results.
Make the link between the data and the rule being illustrated clear.
That is a different goal than just making sure that all known inputs for an old system result in the same results in a new one. That is also a useful check, and very necessary if you just want to do a one-to-one replacement. But it won’t help you get in real control over a legacy system. Even in a completely new system you won’t be able to change its functionality freely, because you won’t know why it gives certain results.
Not breaking things is not enough, we want to know what we can break when we want to.
So, even if you have an existing and complex system with many inputs, you should only use scenario/example outlines to combine different related examples. And even then, only do that if the table view actually makes it easier to understand how this rule works. In the above examples, the ones around the driver’s age actually become easier to read, as the ranges are immediately clear in the table. That means this was a good use of an outline. Even though the same is somewhat true for the license age examples, the fact that there’s only one point of change means that perhaps the version as separate scenarios was just as clear or clearer. There’s no hard rules, here, but ease of understanding for a human reader is always the top priority.
Readability always trumps ease of writing, or automation.
As you may expect, if you implement these examples as tests, it can be simpler to implement the test code when we just have a single ‘Outline’. Resist that temptation. The ease of looking up specifics for any rule as a human is much more valuable than avoiding setting some default values in test code.
Background
Another way to deal with repetition in examples is by using the ‘Background’ keyword. This can be used if you find that it’s hard to keep the ‘Given’ section focused, because there’s always some additional setup that is necessary for each example, and does not vary between examples. That usually means it’s just not illustrative for the individual examples. You can move that sort of data out of the documentation and into the implementing test code. But if it does give useful context for the people reading the examples, you want to keep it in the Gherkin, but avoid repeating it.
Rule: Co-driver age and license age can force manual approval process
Background:
Given the primary driver would be automatically approved
Example: co-driver needs to be over 25 years of age
Given a co-driver has Driver Age of 23
When the primary driver applies for car insurance
Then the application does need manual approval
Background can be used at the level of a ‘Feature’, as well as that of a ‘Rule’.
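As a sketch of what a ‘Feature’-level Background could look like (the identity check step here is purely illustrative, not taken from the earlier examples):
Feature: Acceptance of car insurance applications
Background:
Given the applicant has passed the identity check
Rule: driver age between 25 and 60 can be automatically approved
Example: ...
Rule: license less than 5 years needs manual approval
Example: ...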
4.2.4 Structure
With that, I have described all the most important parts of the format you write scenarios in. But there’s a number of other parts of the Gherkin ‘language’ that help organise the examples and build them up into useful documentation of your system. The term ‘living documentation’ is often used to describe this, referring back to the fact that if you do match these scenarios with executable tests, there’s always a check on whether the documentation is still an accurate description of the system.
For documentation to be useful, though, it needs to be structured so that you can find what you need in it. As you can imagine, the number of examples for a moderately complex system will quickly become very large. Finding anything in there can become an issue, so there are helpful structural elements to create that structure.
Feature
The top-level element for structure is the feature. Since Gherkin is stored in ‘feature files’ (files with the extension .feature), it probably comes as no surprise that there’s only one ‘Feature’ heading allowed per file.
The ‘Feature’ keyword allows you to specify a name for a set of functionality that belongs together, and room to have a more extensive description as free text under it.
Feature: Acceptance of car insurance applications
Car insurance applications get accepted automatically in
some circumstances, but require a manual verification in
other cases. They also get declined automatically in
some situations.
[...]
The description can be as long as you want. When creating examples for new functionality, it’s seductive to see ‘feature’ as synonymous with ‘story’, and you’ll find a lot of systems that have feature files with the rules and examples belonging to one story. This is unlikely to give you a useful structure for your documentation. Stories should be fine-grained, and as such you’ll need many stories for one feature.
Another way to approach this is to use the user journey as the organizing principle, and use that as the ‘Feature’. That can work, especially in the beginning, but you will find that these feature files will become unwieldy, and more importantly that the rules and examples that end up in them don’t have much in common anymore. Split the feature as you progress and find more examples. The simplest way to go about this is to notice which rules are centered around a common theme, and separate those out.
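As an illustration of such a split (the file names and grouping here are hypothetical), the rules of a journey-sized feature could be regrouped by theme, with each ‘Feature’ living in its own file:
# before: one journey-sized file, apply-for-car-insurance.feature
Feature: Apply for car insurance

# after: rules regrouped by theme, each in its own feature file
# phone-number-validation.feature
Feature: Phone number validation

# driver-and-license-age.feature
Feature: Driver and license age approval rules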
Rule
Since we’ve been heavily focused on business rules it may come as a surprise that the ‘Rule’ keyword was only fairly recently added to the Gherkin language. It serves only as a structural element, allowing us to group sets of examples that belong to one rule.
Feature: ...
Rule: driver age between 25 and 60 can be automatically approved
Example: driver under 25 years of age is not automatically approved
Given: ...
Example: driver over 60 years of age is not automatically approved
Given: ...
Example: driver between 25 and 60 years of age is automatically approved
Given: ...
Rule: license less than 5 years needs manual approval
Example: ...
This maps back directly to our example map, so it should be easy to use.
Example / Scenario
I’ve gone into plenty of detail about the examples already. These also come directly from the example map. Though newer versions of Cucumber prefer the ‘Example’ keyword, ‘Scenario’ is considered a synonym, and was the original name for the section, so it can frequently be found in existing feature files. There’s no need to change that, but I do like to be consistent.
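For instance, the same phone number example could equally be written with the older keyword:
Scenario: the one with the international prefix '+'
Given a phone number entered as '+31612345678'
When we validate that number
Then that number is seen as a valid number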
Tags
All the structural elements so far have been strictly hierarchical. The system has features, features have rules, and rules have examples. There’s often reasons to have more than one way to organize features, though. For instance, the phone number rules we encountered are used within different user journeys, wherever a phone number needs to be entered. We might have them together in a separate feature file, and annotate them with a tag to link them to all places they are used, or simply to categorize them as input validation rules.
@input-validation-rules @apply-for-car-insurance @apply-for-home-insurance
Feature: Phone number validation
Rule: phone numbers are prefixed with an area selector or international prefix
@legacy
Example: the one with the area prefix, which is always '0'
...
This will allow us to easily find different related rules, indexed by whatever criteria make sense.
4.3 A good scenario is BRIEF
To help keep scenarios useful for the purposes for which you want to use them (documentation and verification of the business logic), there’s some useful rules to keep in mind. Seb Rose coined the acronym BRIEF (Rose and Nagy, Effective Behavior-Driven Development) to help remember those rules.
BRIEF stands for:
- Business language
- Real data
- Intention revealing
- Essential
- Focused
- Brief
Let’s go through those in detail.
4.3.1 Business language
For the scenario to work as documentation, and as a basis for communication between engineering and product, the language of the scenario needs to be in the domain language of the business. That may seem obvious, but can be difficult in practice. The scenario should be readable by someone in the business, and all the terms should be clear and unambiguous for them.
That means you do not want to see any terms in there that are specific to the existing implementation of the scenario, unless those are direct representatives of the business language. One of the difficulties you might run into is that some of the concepts the scenario deals with may be in common use, but have not been given a distinct name yet. When developing new functionality, that is a very frequent occurrence, as the new functionality may be introducing new concepts to be named. When it happens for existing functionality, the names may have been lost in time, or simply never been given, and you will have to define them.
Naming things is hard. Define your domain language.
It is quite normal to run into such issues, and starting a dictionary of the domain language is a good practice at this point.
Another way in which the language might become an issue is that terms are not defined specifically enough. In the example, I talk about ‘License Age’ as well as ‘Driver Age’. Using just ‘Age’ somewhere would be confusing. The same can be true for terms like ‘address’ or ‘language’: if we need to rely on context to understand what the term means, perhaps the term is ambiguous and needs to be defined in more detail, as ‘work address’, ‘invoice address’, or ‘delivery address’, for instance. Your developers might be very familiar with that concept, and use terms like ‘ubiquitous language’ or ‘Domain Driven Design’ when these discussions come up.
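A minimal sketch of what that extra specificity looks like in a step (the customer and address steps here are hypothetical, not part of the insurance examples):
# ambiguous: which address is meant here?
Given the customer has provided an address
# unambiguous, using a defined term from the domain language
Given the customer has provided a delivery address of '10 Downing Street, London'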
4.3.2 Real data
For the scenario to be able to resonate as an example, it needs to show what is actually being processed. That means using real data, not just references to data. This means that the scenario will not, for instance, say that a ‘phone number with more than 10 digits’ is passed in but instead use an example of such a number, ‘01234567890’.
Use relevant data, not available data.
In these cases, it’s important to note that while we use real data, we don’t necessarily use all the data. Just what is necessary to show what is relevant for this particular case. We also don’t use ‘real data’ in the sense that we expect data that is not defined in the scenario to already exist somewhere in the system and just reference it.
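As a small sketch of the difference, using the phone number rule from before:
# a reference to data, not an example
Given a phone number with more than 10 digits
# real data, showing exactly what is being processed
Given a phone number entered as '01234567890'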
4.3.3 Intention revealing
A scenario that acts as documentation is absolutely useless if it doesn’t clearly show how the business logic it’s documenting is supposed to work. Note the use of ‘business logic’, here. The business logic, and the language in which it is expressed, talk about what goals the system is supposed to achieve for the user. The intent. And talking about the intent, those goals, means you do not talk in terms of incidental details, such as UI elements (“click a button”). Our earlier example in “Journey Scenarios vs Illustrative Scenarios” showed how much clearer the scenarios became when they talked explicitly about the logic, and not the place in the flow the logic occurs.
Intent is all. Leave out incidental details, like describing UI interactions.
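A sketch of the difference (the button and screen names are made up for this illustration):
# incidental UI detail, not intent
When the user clicks the 'Next' button on the contact details screen
# the intent of the action
When we validate that phone number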
4.3.4 Essential
In the same way, scenarios are only clear documentation if they don’t confuse the reader with things that are not relevant to the specific business rule that is being described. Only that which is essential to understanding that rule should be part of the scenario. We’ve seen examples around phone numbers in this book. When the user enters that phone number into the app, they also add their name and address on the same screen. That accidental combination in the user interface has no impact on the validity of the phone number itself, so we should not see that name or address in the same scenario that only deals with rules around the phone number.
In legacy, filtering out the non-essential is difficult but absolutely necessary.
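As a sketch (the name is made up), the non-essential detail would look like the first line below, and it should simply be left out of a scenario that only deals with the phone number rule:
Example: the one with the international prefix '+'
Given the user has entered the name 'Jamie Jones' and their address
And a phone number entered as '+31612345678'
When we validate that phone number
Then that number is seen as a valid number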
4.3.5 Focused
Each scenario should always exemplify exactly one business rule. We need to be able to discuss it without getting confused by things that are part of another rule. If we have a scenario dealing with the number of digits in a phone number, changing which non-digit characters are allowed in a phone number should not have any impact on it.
4.3.6 BRIEF
Self-referential, but important: scenarios should indeed be brief, short. The elements above will contribute to that, but a helpful rule of thumb is keeping them to a short 5-6 lines. If we need to spend too much time reading scenarios to understand them, we will most certainly misunderstand them. And most likely not even begin reading them.
4.4 The way to write a test
The way a scenario is created has a direct impact on how we can automate its execution. A focused, limited-scope scenario using real data will be easily readable as a test. Using tools such as Cucumber it is easy to turn those types of scenarios into tests, with the description, input, action and expected result specified explicitly in the scenario.
Many development teams struggle with these tests, and fall into patterns of implementation that are too expensive to make and run. I discuss the pitfalls of those implementation patterns in my book on dealing with legacy systems, “The Product Owner’s Guide To Escaping Legacy” (Lagerweij). The type of tests that we prefer are low-level, high-resolution tests.
There’s plenty more to discuss around testing in general, but the purpose of the tests we are talking about here is to function as documentation and validation of business rules. The many other types of testing, that are much more linked to the architecture and implementation of the system, we leave for other books, such as “Agile Technical Practices Distilled” (Moreira Santos, Consolaro, and Di Gioia) and “Developer Testing: Building Quality into Software” (Tarlinder).
4.4.1 Implementation patterns and concerns
Now that you know what a scenario looks like and what is important for writing good scenarios, it is helpful to know a little about how they can be implemented as tests and what sort of issues your teams might run into when doing that. As a product owner, you won’t be the one doing that implementation of scenarios, of course, but it is important to have an understanding of how that works to be able to guide your teams.
A clean implementation of a scenario
When you are creating new functionality in a fairly clean system, scenarios can be implemented in a very light-weight fashion. In fact, as teams start working with scenarios, a fairly common complaint you hear is that they see too much overlap between implementing the scenarios and writing unit tests. That is not an accident. As you could read in Chapter 3 - “Example Mapping” - “Behaviour Driven Development”, these sorts of scenarios, or customer tests, were originally the starting point for the first unit tests developers would write. That is not to say they are the same. A developer will likely write many more unit tests than scenarios for a particular piece of functionality, and not all unit tests will directly relate to specific business rules. There’s some overlap, but a different focus. For instance, the unit tests would include negative tests, such as when the input is completely missing. That is not a situation that you would see as part of the definition of the functionality, but is much more focused on the internal correctness and robustness of the code.
There is very little effort in using scenarios if the corresponding unit tests are already there. After all, you can re-use what is there.
A high-level view of implementation
As a product owner, there’s no need to know all the details of how these human-readable scenarios are also used as automated tests against the code implementing the business rules. It does help to have a high level view of how that works, however, so you can understand some of the particular difficulties of doing this in a legacy system. Feel free to skip this if you haven’t run into any of those, and come back when you need more background.
When we write a scenario, there are specific words that have meaning for the system interpreting the scenario.
Example:
Given
When
Then
And
In the above, the words Given, When and Then, as well as And, are signals to tell the system that these are lines that need to be executed as steps in the scenario. The Example only signals that the following lines belong together, until the next Example line.
For each step, each executable line, the text that follows the keyword determines which commands are executed.
Example: The one with the international prefix '+'
Given a phone number entered as '+31612345678'
When we validate that number
Then that number is seen as a valid number
Each line in a scenario links directly to test code.
The code that uses this scenario to drive a test, usually called a ‘step definition’, uses the text in the scenario to trigger some piece of logic to make the right step happen in the system. It does that line by line. That means that a line such as "Given a phone number entered as '+31612345678'" is linked to some specific test code that actually passes that phone number to the system. If that test code is called `setPhoneNumberToValidate()`, that can look like this:
@Given("a phone number entered as '+31612345678'")
setPhoneNumberToValidate() {
...
}This gives us a simple way to ‘read’ the scenarios and couple them to code. It’s an intentionally simple system that does not require much to run quickly and efficiently.
Now that we have linked the scenario to code that can perform what is described in the scenario, we still need to actually write the code that does that. This is where the effects of a legacy system can be felt. Creating the test can become highly complex and time-consuming.
A unit by any other name
When we implement the example scenario for a new system, each step in the scenario can usually be implemented using only a few lines of code.
If the feature we’re building is created within the context of an existing system, and if in that system it is perhaps not very easy to separate the different parts of functionality, we often say we are dealing with legacy code. In legacy code, that same step in the scenario might take many pages of code to set up enough of the system to be able to validate a phone number. Don’t worry, I will not put an example of that here. That would mean including many pages of code in a book not aimed at programmers, and I’ve been told that is bad for sales. Instead, let me try to convey where some of that complexity comes from in a legacy system.
Imagine that the way the program works doesn’t allow us to validate a phone number in isolation, but only as part of an address. And that an address can’t be created unless it is attached to a person. And a person can’t be created unless it is linked to a business, which also needs its own address as well as an account manager, who is also a person, but an internal employee, and can’t be created without at least an address as well as a manager, which is also a person, and… You can see how setting up the system for the test involves a lot of data, and each link necessitates a bunch of code and data being set up in the test.
For want of a nail… very often the test is not written.
Getting the result of the business rule can be just as challenging. The validation of the whole address fails, but we can’t get the reason it fails, which might not be due to the phone number being incorrect but instead because an external service checking the street name against the postal code is not responding. There could be obscure database errors that need to be interpreted, because in this system, these validations are only performed in the database. To work around that, the system may actually implement the validation rules in multiple locations, one nearer the front-end to give useful errors to the users, and one in the database. Those rules might not be entirely the same, and you need to know what the effective business rules are for the combination.
At the same time, due to the way validations are implemented, to be able to run all that code a database needs to be running next to our code, and the only way to access the database from the code means the code needs to be deployed on an application server, which needs to be configured for the database access, but also needs to have access to an email server, a couple of other services that are not related to this functionality, and needs a license that has a high cost per CPU so that it can’t just be run by the developers on their own workstation.
It’s surprising, perhaps, that even with all those hurdles to overcome, writing a test at this lower level is still, eventually, cheaper and faster than writing more end-to-end, low-resolution tests. It is, though.
More difficult, still better.
There are good books out there about how one can start this sort of testing and use it to carefully insert new entry points into the system and extract pieces of functionality, such as our phone number validation, in such a way that they can be accessed in isolation. That sort of change is exactly what is needed to improve the system, making it easier to both test and change the code that implements specific business rules.
Doing this sort of improvement is initially difficult and expensive, and will get easier and quicker as the development team gathers experience in it. And of course, as the code improves. That initial cost is another reason it is so important to be careful when prioritizing which functionality to improve. The payback can be significant if we pick the right parts, but an investment, such as writing that test, must be made.
Unit-tests test that a piece of code does what the programmer expects it to do.
A side effect of the code becoming more structured, and elements becoming accessible and thus testable in isolation, is that the unit tests for that part of the system also become simpler and easier to write. A common comment from developers is that the scenario can also be easily implemented as a unit test. That is true. It won’t give the same type of documentation and communication that you will get from the examples as we’ve discussed. But it’s still useful. Mostly, though, unit tests should be more focused on a different goal. The scenarios are the behavior we expect from the system. They test that it does what we think it should do. Unit tests should test for ways in which we might have made mistakes writing the code. They test that the code does what the programmer expects it to do. One can be seen as a subset of the other, but they have very different purposes, and should end up with a different set of tests.
Implementation is translation
The implementation of scenarios is quite literally a translation process from the natural, business domain language used in the scenarios into the way those steps of a business rule are executed within your system’s codebase. As discussed above in “A high-level view of implementation”, there’s a big difference between the effort needed to implement a scenario in a clean and well-structured codebase, compared to a legacy system. It’s good to take a step back and think about that. There’s a lot to be said for the view that this difference, this greater effort needed to translate, is an actual indication of having a legacy system. The further away the structure and language of the code is from the language and actions of the business domain, the more difficult it is to understand how to take changes in that business domain and make the corresponding changes in the code.
The inverse is automatically true: the easier we make it to implement scenarios, the better suited the code in the system becomes to accommodate changes. And the better we understand how to translate from the domain language to the system’s code, the better we are able to add to that code when the domain evolves. The implementation of the scenarios is documentation for how the system is linked to the domain. And can be a blueprint for simplifying the system.
That also works in the other direction: starting with scenarios, and writing them first, automatically encourages the engineers to keep that translation simple, and by extension to keep the system as close to the reality of the business as possible. In that way, by being involved in the process of example mapping and validation of the formulated scenarios, product people play an important role in keeping legacy at bay.
As fast as we can
That translation can, as we saw above, include compromises in how focused we can stay in the implementation of the scenario. And those compromises have consequences in how much data and infrastructure we need available and how much unrelated code is exercised when running a scenario. That means there are consequences for the complexity of creating the tests, but also consequences in how fast those tests can run. Every additional component needed, database or filesystem accessed, means a considerable slowdown in the speed with which the test can run.
The simpler, the faster, the better.
This is a balance to keep working on. The simpler the tests are and the faster they run, the better it is. But the improvements to make that possible need to be created iteratively, just like everything else we have been doing as part of this process. Each step adds value, and each lets us learn how to approach the next step.
4.5 From discovery to planning
This process will allow you to structurally break down work into smaller, iterative and incremental deliveries, and ensure that what is delivered is indeed what you expected. Because of that, it also allows for much more flexibility in deciding what you deliver when, and in managing the scope of that delivery in much more detail. The last chapter goes into more detail on how you use that extra flexibility while still communicating in clear terms of planning and risk to a perhaps not fully agile organisation.
5 Planning
Planning. The one aspect of the work of a product owner that is discussed in detail everywhere, perhaps. I’m still going to give you my take on it, here. One that is a little different from the normal story. And, of course, a version that talks about how to deal with the uncertainty that comes with working in a legacy system. You already know how to get the parts of the system that you’ll be working on under control. Here, I’ll explain how to mix that work with new development work so that you can limit the risks and uncertainty.
There’s an interesting tendency in our industry to confuse planning with setting deadlines. I’ll take particular care to keep those two separate. A plan is needed to find the possible options for achieving a goal, delivering some functionality. Any sort of target dates are just input that help you decide which of those options to choose. And of course, plans are not predictions. Circumstances will change, needs will evolve, and Murphy will chime in. That’s why you think through all those options, so you can easily react to that changing environment.
- Planning is anticipating the unknown, and deciding on the safest path.
- A plan is only useful if it helps to make scope decisions during its execution.
- You can vary scope by leaving out features and by delivering features at different levels of fidelity.
- Estimation is necessary but you need much less detail and accuracy than you think.
- You can build a plan using the same tools you have already used: a story map and its slices.
- Mixing slices for getting control over legacy systems and new functionality is necessary to limit uncertainty.
- For both new functionality and for control improvements you need to make the steps as small as possible.
- Dependencies can derail any project: you need to eliminate them as much as you can and then build the plan around them.
- Combining the three elements of control improvements, new functionality slices and dependencies together gives you your plan.
5.1 Planning is charting a path through the unknown
If you know everything that is going to happen, exactly which steps to take, all the illnesses and unexpected changes that will occur, all the ways the outside world will change our situation and in minute detail how you need to build whatever you want to build, planning is easy.
Planning is easy, if everything is known in advance.
This is a fairly uncommon situation, though. I’ve certainly never encountered it. For me, it can break down even when I’m just trying to make a toasted ham and cheese sandwich. And I do that fairly often! But then I find out I’m out of bread; luckily there’s some in the freezer which I can thaw. That only takes a minute extra. But the ham might be off? The date is yesterday. It seems to smell OK. Oh, and I need to get some cheese from the pantry, because someone finished the last one yesterday and didn’t replace it. Then, I can’t find the cheese slicer. Ah, it’s in the dishwasher. Let’s wash it by hand. Then, the table grill is not where I expect it to be, likely because one of the kids put it away in the wrong cupboard. One of the hotplates is missing. I revert to just using a frying pan. After I wash it, I have one, but now I need some butter. And a knife for the butter. Once I have all this settled, I can finally start making the sandwich.
I slice the cheese. I smell once more, but the ham really does seem fine. I put the pan on the heat, and let the butter melt. I assemble the bread, cheese and ham while the butter melts, and put the sandwich in the pan. I can’t walk away from this process, as I could’ve done if I’d been able to use the grill, so I spend a few minutes moving the toastie around in the pan, and turning it over until it’s nice and ready. The cheese is not quite as well melted as it would have been if I’d used the grill, but all-in-all it’s a pretty good lunch. It took me 20 minutes to make it, instead of 3 minutes preparing it and doing something else for the 6 minutes it would’ve needed to be in the grill. On the other hand, even though it’s a little more fatty, the butter toasted bread does taste a little better.
Then again, if I’d been in a hurry I would have probably just made a peanut butter sandwich. Three minutes to make and eat.
Everything is so much better if you’re not in a hurry.
Things are hard to predict. Especially if you also have to deal with other people doing things in the same area, and maybe using the same resources. If you feel like you need a plan with a lot of detail and that then everything should be done exactly according to that plan, you will probably be disappointed.
5.1.1 A plan is not the road, it’s the whole map
That does not mean that planning is not a useful activity. In fact, it is very important that people know how to react when something unexpected happens. When we say we are making a plan, what we are doing, what we should be doing, is charting the different paths to our goals. That includes exploring different ways of reaching that goal, the reasons for going in one direction over another, and most certainly the options for navigating around some of the more traffic-jam sensitive areas. If all we do is print out a linear list of directions, with no consideration of changing circumstances and the corresponding contingency plans, then what we are doing is not planning, but simply hoping everything will be easy. And it never is.
Again, what is important when reading the above is that there is very little said about time in the discussion so far. There’s no deadlines, no milestone dates and no estimations. We will get to some of that later, and I’m not trying to tell you that time is not a factor, but time is only one input to the process of planning. If you walk around some organizations you might get the impression that mapping estimations of work to a timeline is all there is to planning. In fact, it should only happen for some specific dependencies and critical work. For the rest, making a ‘planning’, as it is often seen, as this timeline, or Gantt chart, or burn-up chart, is only a temporary visualization of the current expected path towards reaching our goals. Since that is a very linear view, it does not reflect the complexity of our thinking. It can’t reflect insights into risk and uncertainty, or our contingencies in the face of those. That makes it probably the least useful planning artifact you can think of! Moreover, you could say it is actively harmful if used in communication around your project, as people not familiar with the project will get the impression that it is simple and quite literally straightforward.
First, make a plan. Then, maybe, produce a planning.
A plan, then, is the careful consideration of how you are going to reach your goals. It includes the things you know, including different variants of the functionality you want to build, a rough sense of the cost in time and effort, knowledge of other things going on that might help or interfere, and real, externally determined relevant dates.
Your plan should allow you to look into the future and make some decisions. There will also be decisions you can’t take yet, but can see coming. For those you could already think through what the options will be, and even which you would choose based on what the conditions are when the time comes. There might be some fixed points, but also plenty of things you are not able to predict up-front. You have to review your plan as you go along. Making a plan is great, but in the end you need a SatNav for your project where new conditions allow you to change the direction.
“Plans are worthless, but planning is everything.”
– Dwight D. Eisenhower
5.1.2 What are we planning?
I’ve been talking about ‘the goal’, but what is that goal? Though there are likely business goals and user results underneath, in practice that is usually translated into delivery of a new product or new feature. In most situations once we know what feature we’re building, we split that up into smaller parts. There can be different names for those parts, but many now use ‘epic’ for larger parts of a feature that need to be built. But we keep breaking things down in smaller parts, usually ‘stories’.
I know that stepping over the business goal, or user value, so carelessly is risky. And that goal should of course play an important part in your decisions on priorities. But I know you already know that, so let’s take it as a given and focus on the more mechanical parts that give you the opportunity to make those priority calls.
Using a story map as the base of our plan
Keeping in mind that a good plan needs to give you options for different paths to take, the way you split the larger piece of work up into smaller steps is important. In fact, it’s probably the most important part of creating the plan. If you’ve read through the rest of the book, you will not be surprised that my preferred way of decomposing the work is by using a story map. The slices of a story map are a natural way to split up the work, and can be used to plan releasable increments as well as individual sprints/iterations. That is not the plan, of course. The slices are just the building blocks that allow you to look at the whole and the parts, and to see the possible steps towards a whole; that is what we need to be able to build a plan.
For the purposes of this chapter, I’ll use the name ‘feature’ as the thing the story map is about. If you recall from Chapter 2 - “Story Mapping”, the story map is usually based on a major flow through an application that allows the user to reach a particular goal. A story map for a feature contains many different stories, organised based on both the step in the flow they relate to and the slice they belong to. Each slice is a significantly different variation of that flow. I will use slices as the basis for epics. Each epic delivers either new possibilities to the user, or new fidelity to existing functionality.
Fidelity
This is quite an important topic when we talk about planning, so let me stop for a moment and go into the idea of fidelity. When you build a part of a system you usually think about that as making it possible for the user to reach a goal. Say, applying for an insurance package. But it can also be that you make the process of reaching that goal easier, faster, simpler, better in some way. As an example, let’s say that the user can select an insurance package. One way could be that you simply present the user with a list of all possible packages. That may be a long list, and maybe not terribly descriptive, but it will allow the user to get through to the next step of the process. It would be much better if we can guide the user through the selection of the right package. Maybe by asking a few questions, or maybe by using information we already have about the user. That would have the same result: the user selects a package. But it has a higher fidelity: it’s easier, quicker and less error-prone.
– Karl Scotland, “Fidelity – The Lost Dimension of the Iron Triangle”
Thinking about fidelity for new functionality is the most important practice there is when it comes to planning. We tend to automatically think about reducing scope in terms of those goals a user can or cannot achieve with the product. Adding fidelity to that mix allows for much more flexibility in planning, giving more options to chart that path to a successful project.
Stories? Where we’re going we don’t need stories
Though the story map has, by definition, stories in it, and we use stories to determine the slices of both increments (reaching goals) and iterations (fidelity), we do not really need much detail for those stories. They are just reminders of what we think the scope for a slice is. As I go through the process of building a plan based on our story map, you will find that we do not really need the stories as part of that plan. You’ve already learned that you can use slices to represent a time-box, and that each slice has a name that represents the value it delivers. That level of detail is enough to forge our plan. Stories come back into play when we actually execute on the plan and refine the details we need to implement them. Later.
Slices are the building blocks of delivery, and the building blocks of your plan.
5.1.3 Estimation
That brings me neatly to the topic of estimation. A hotly contested topic in the world of projects and software development. Estimation is considered important. It’s important because we want to be able to make decisions about scope and risk based on what building that scope is actually going to cost. It’s important because we do sometimes have deadlines that are determined by outside factors such as seasonal sales cycles or events. It’s important because sometimes we are the dependency and we need to deliver something to allow others to continue their own work. It’s important because, frankly, uncertainty makes us humans very uncomfortable.
Estimation helps you make decisions. Very rarely do you need a high granularity for that.
At the same time, as the saying goes, it is difficult to make predictions, especially about the future. We need an indication of what could be feasible to create in a certain amount of time, against a certain amount of cost, but there’s only so much certainty we can ensure. I am not going to spend time in this book on the discussion about what level of accuracy of estimation is enough. Different circumstances have different requirements. Different techniques can deliver different levels of certainty. If you have a large company, many teams and a consistent way of working, there’s a lot you can do by measuring output and using statistics to predict possible futures (for more on that, see for instance George Dinwiddie’s ‘Software Estimation Without Guessing’). In most cases, my experience is that this is simply not very necessary. I prefer to keep it simple.
Legacy means uncertainty
When we work with a legacy system, uncertainty is guaranteed. That is one of the main reasons we want to get control over legacy systems: so that we can reliably release new functionality. In my book, “The Product Owner’s Guide To Escaping Legacy” (Lagerweij), I explain how to deal with that uncertainty in a structured way. As I discussed the process of escaping legacy, I have repeatedly said it is important to prioritise that work informed by the plans that you have for building new functionality. This is where those two activities come together.
When you estimate work in an area of the system that is not ‘in control’, it’s hard to know how much of an impact the legacy is going to have on the new work. On the other hand, if you have been bringing parts of your system under control in the way described in Part 2, then you will have developed a sense of how much time it takes to get into the safe zone. What I’ll be doing in this chapter is to split those two activities, so that we have a reasonably reliable gauge of the effort needed for both. Then we can plan the work accordingly. When the process of escaping legacy is still new to you there will still be a significant amount of uncertainty, but you can isolate that risk and deal with it as early in the project as you can. That way, you build knowledge quickly, and become more reliable as the project moves past the legacy improvement work.
Categorizing the size of work is still estimation
As you know, dealing with estimates in software engineering is difficult. Even in greenfield projects the complexity of the work is such that trying for a very high level of granularity of estimation is not really possible. As with the toastie earlier, we simply can’t foresee the details of the work well enough to have much use for that. Fortunately for us, if we work on a larger project, at some point the laws of statistics will start working in our favour and work will tend to average out in size. We can use that and measure it, but doing it the other way around is much easier: just define some useful chunk sizes and break the work down to those. We will still be ‘wrong’, in that not all the work will actually fit within that bucket size, but statistics will take care of it, again, and we’ll be right enough on average. And if we pick the chunk sizes small enough, the amounts that we will be wrong won’t matter too much in terms of the entire project.
Statistics works. In larger projects, the size of pieces of work even out on average.
Slices, epics, stories: we already have our estimation buckets
In effect, we will simply be putting each chunk of work into a ‘bucket’. There’s different ways that people have been doing that, including the well known ‘t-shirt sizes’ (Small, Medium, Large), and even the whimsical ‘animal sizes’ (Cat, Horse, Elephant). You should of course do what you feel comfortable with. I usually try to keep things relatively simple and call these things stories and epics. If an epic is a slice in our story map, and is something we can finish in one sprint, then the bucket is defined already. And with a set of stories in our epic, stories get their own implied bucket size, usually broken down to one or two days per story. Anything larger than a sprint is too big to use for planning anyway, so let’s forget about those.
Epic: A slice of the story map, fits within a single sprint.
Story: Breakdown of the slice into smaller parts, fits within a day or two.
5.2 Building the plan
To recap:
- We plan to deal with uncertainty, to make decisions of scope and priority along the way.
- We plan to build features, epics and stories.
- We split the work using story map slices.
- A slice can represent new functionality or increased fidelity.
- A slice can also represent improved control of a legacy system.
- Estimation can be replaced by breaking up work into individual and composable parts.
- Parts (epics, or slices) should be sized so that they can be built by a team in a sprint.
Let’s put all that together. I’m taking you back to Jamie’s conundrum of building a home insurance product. We want to get to the point where we have the elements of our plan, our building blocks, created so we can see how we would like to fit them together. To do that, let’s begin by looking at that functional breakdown. Jamie has already done some work on it. Previously, there were a number of flows that touched directly on the different insurance products. When a new insurance product is added, that will have an impact on all of those flows:
- Search Insurance
- Apply for Insurance
- Post a Claim
- Customer support
To get started, I’ll just copy the car insurance flows already on the heat map, and make home insurance specific versions. That is not the only way to do this, but it’s a good start for now. For each of those, Jamie will have to build a story map. Since I’ve already dealt in detail with ‘Apply’ for car insurance, I’ll stick with that one for this example, too.
It might be interesting to note that the ‘Activities’ for the ‘Apply for Insurance’ flow are the same as for the car insurance product. When we go into more detail for the user tasks we see some obvious differences, but much will be familiar.
As you can see, the first steps of creating this plan are not around estimation, or time, but are simply about figuring out what it is we need to do, and making sure we can have a discussion around that. Which is what happens next.
5.2.1 Finding the work
Having the backbone of a story map in place gives you the scaffolding you need to have a constructive conversation about what you want to build. You’ll remember from Chapter 2 - “Story Mapping”, that the user journey that is reflected in the backbone of a story map is built around the goal the user is trying to reach. Beginning the discussion with the backbone of the story map means you stay focused on that goal.
Whether you are contemplating completely new functionality, iterative improvement on existing functionality or work to improve your control of a legacy system, the best way to come up with ideas is to bring people with knowledge of the business, customer and system together to generate and discuss those ideas. You’ve already learned how to organise sessions for this. The difference with the situation that was discussed in earlier chapters is that in this case you will be thinking of new functionality as well as re-discovering existing functionality.
Slices of functionality
New functionality is the most obvious of the things you can add to our user journey. Sometimes the whole journey is going to be new functionality, and sometimes we are just extending it, and creating a new variation of the flow. In this example, Jamie and friends started talking about a new flow, but very soon found out that it borrows a lot from the ‘Apply for car insurance’ flow. It’s a different product, though, and parts of the flow will be entirely separate.
Whether you treat that as an existing flow, and note that the parts talking about car information and driver’s licenses are for another persona (‘car insurance applicant’), or just use a very similar flow and start anew with that is really not that important. In both cases, though, it’s important to mark clearly on the map what is already available in your system, and what is new. I tend to make it easy for myself and simply put a basic description of the existing functionality at the top, and draw a thick line underneath to mark where the new things will come. That’s easy to start with. When we worry about legacy, and whether there’s work we need to do to improve control over certain parts of that existing functionality, we do need to pay more attention and go into more detail. I’ll talk more about that a little later in the section “Slices of control”. In this case, some of the functionality from the car insurance flow can be used as-is, and can be moved up. One of the slices disappears completely! That’s a good start. Note that there will undoubtedly be more re-use of existing functionality, but in those cases the stories are still there because changes will be needed. In some cases, the re-use can make things easier, in other cases the opposite might be true.
What I’m presenting here is just a view of the shape of the map for new functionality. Of course, every system is different, and you will need to come up with your own work. The shape of the map is important, though. You see there’s a collection of stories. What might happen in your story mapping session is that you generate a bunch of stories first. Those stories are inspired by the steps in the user journey, and you just put as many as you can think of on the map. You will probably find, in the same way as this was true for existing functionality, that sometimes a story has to be promoted to a user task, and sometimes a user task is actually a story. That’s all OK.
Again, as described in Chapter 2 - “Story Mapping”, each of the stories is part of a bigger whole. A variation of the flow. Sometimes you already know these different variations, and you can put them on the board early on. Sometimes you discover them while grouping stories together. I’d like to say that I always do this in the same way, but in reality this depends on the situation: how much do you already know, how clearly structured is the work, how much of the functionality needs to be discovered? And, truthfully, what mode of thinking suits the people involved better? If you have a group of very analytical minds, there’s a good chance they come to you with specific variations and stories in mind. If you have a more creative or reactive set of people, most of the stories and structure will be first thought of during the session. Likely, you have a mix, and it pays to ensure both get enough of what they need to get the best outcomes. Perhaps start with a few known variations and then do some individual exploration of possible stories and slices. Or start with small group discussions and merge those to ensure that creativity is not quenched too early. It’s up to you to find what works for your group.
The result, though, should be the familiar slices with descriptive names around goals the user can reach, filled with stories described just by their title on a card/post-it.
After this first step, you should have a story map that describes the functionality you envision for the new feature. The slices may still be too big, but your ideas about what this feature could be, and how it could help your users (and your business) should be on there.
First, define the functionality in terms of outcomes (goals) for the user.
The slices and stories on the map are not estimated, and there’s been no real check on reality about whether that functionality is valuable enough to actually build. But the goals the user needs to reach to make use of the new feature need to be on there. In other words, and once again: the slices here, including their names, are the most important part of the map. The next step is taking those slices and bringing them down to size.
For the example, I’ve kept the names of the stories the same or very similar to the ones that you saw for the car insurance story map. This makes it easier to compare the two, and has nothing to do with saving me the work of renaming them. Well, maybe a little. When working with a team that knows such functionality intimately, often the stories become more of a continuation of an earlier discussion. ‘Select insurance package’ could be ‘add home insurance package to package selection’, or ‘Home insurance package selection’ if the packages will not be mixed with car insurance. For the purpose of the example, though, that would only make things harder to understand, so I didn’t do it.
Slices of fidelity
There’s two ways to split an existing slice of functionality: only deliver part of the functionality by working on stories of only some of the user steps in the story map’s spine, or deliver a full flow but create simpler versions of (some of) the stories. Well, you can also combine the two. The first option is something we’d call incremental, the second iterative. Let’s see how these two options work for the home insurance case.
Incremental slices
The first option is where we deliver an incremental part of functionality. In terms of the home insurance flow, Jamie and friends can decide that their first slice, ‘successful basic application’, is still too big. Though the slice is well-defined in that it offers some value to a user, they expect it to take much more than a sprint to implement and want to pare it down to some coherent steps. One of the reasons they expect it to take longer is that this will be the first time they integrate with the new BigInsured APIs, and they expect that to be a little more involved to get to work right. They decide that they are going to split the slice up into one that is specific for the integration with BigInsured, and another to focus on the flow around an accepted home insurance application.
The story map’s backbone doesn’t really have a suitable place for the very technically focused validation of that integration. If you want to make that explicit, it’s perfectly valid to add a user task to reflect a system action like that. If you do, it makes sense to use the ‘Persona’ marking to show that this is indeed an action initiated by the system, and not by the user. In this case, I’d also be comfortable slotting these stories under the ‘Send Application’ activity, and still seeing this as part of a user-triggered action.
Iterative slices
Incremental slices are generally easy to accept and use. Everyone is used to the idea that work needs to be split into tasks to be able to deliver it, and that those parts need to later be combined to form a whole. Iterative slices are less intuitively clear. The idea of iteration often seems almost insulting to the type of busy professional that is working hard to try and build the software their customer wants: why would you build a simpler version first, and then be fine with throwing (part of) it away? It sounds like madness and a huge waste of time! It is, however, the single most effective way to build flexibility into your plan and reduce risk.
Iteration on fidelity is the single most effective way to reduce planning risk.
When we looked at story maps in Chapter 2 - “Story Mapping”, I mentioned that when I mark slices on the map, there are two different lines: the one that is normal in ‘standard’ story mapping is a thick line representing a (potential) release to end-users, while I use a thinner line to mark slices that represent the result of a single sprint. As we just saw, it is obviously not possible to release an incremental slice to customers. You might push the code out into the world, but the feature that you’re building will not be usable or visible to the end user. That means that when you work incrementally, you can’t draw that thick line on the map and see a possible place to release. When you work iteratively, that does become an option. You could release the result of an iterative slice, and some version of the feature could be in the hands of users, delivering value. Whether you think this particular iteration is easy enough to use, or complete enough, becomes a decision. We’ve just given ourselves an option.
Let’s look at some options on this story map. We just split our first slice into two incremental ones. We’ll leave the first slice as is, to deliver confidence that the integration with the BigInsured API works. The second slice then contains the stories that were left over from the original ‘successful basic application’ slice, which deal mostly with providing information about the home and the homeowner, as well as accepting the offer. Jamie’s team will tell them that the ‘Select Insurance’ and ‘Accept Offer’ activities are very similar to the ones they have from the car insurance flow, and will not take too much time to adjust. The ‘Provide data’ activity, however, deals with completely different data than before and needs completely new sources of information to make the experience just as smooth and quick as for car insurance.
Focusing on the ‘Provide home details’ task, you can see that there are a few stories there. One is about providing the address, one about the house replacement price, and one about ‘basic home information’, which includes the square footage, the number of bed- and bathrooms, the age of the house, whether it’s a flat, terraced, or (semi-)detached structure, and the type of roof. Those three stories can be built in different ways, and with different fidelity.
I’ll start with the address, which is a simple and familiar one. Almost every e-commerce website has a way for you to provide an address. It’s usually two lines for the street address, a postcode or zipcode, a city, and maybe a state or country. The simplest version of this is a form with those six fields that simply sends the information to a back-end, where it is stored. But some of the fields could have business rules attached. There’s a predefined list of countries, so we can replace that entry field with a selection pull-down. For some countries, we can do the same for states; for others, we can remove or disable that field. We could verify that a zipcode is appropriate for the state, or for the city. This will help the user enter correct information, so that there’s a better chance their application can be processed and accepted.
Even the simplest functionality can be split in iterations of fidelity.
That is two steps of fidelity already. We can also let the user provide the zipcode, and automatically fill in the city and state. In some countries, like The Netherlands, the zipcode plus house number are enough to identify the full address. This also makes it more likely the information is correct, and it is much easier for the user! Of course, to do that, we’ll likely use an external service to provide the information, or load that data ourselves. That is quite a bit of additional work, but it provides additional value. For some applications, getting that address data right can mean that this last option is really what you need to have in place before you allow a larger number of users into the application.
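To make those fidelity steps a bit more tangible, here is a minimal sketch in Python of the levels of the address story. Everything in it is hypothetical and for illustration only: the field names, the country list and the ‘lookup_address’ service stand in for whatever your real form, rules and data source turn out to be.

    # Fidelity level 1: a plain form, six fields, stored as-is.
    def submit_address_v1(store, street1, street2, postcode, city, state, country):
        store.save({"street1": street1, "street2": street2, "postcode": postcode,
                    "city": city, "state": state, "country": country})

    # Fidelity level 2: the same form, with a few business rules attached.
    KNOWN_COUNTRIES = {"NL", "GB", "US"}  # illustrative list, would really be configuration

    def submit_address_v2(store, street1, street2, postcode, city, state, country):
        if country not in KNOWN_COUNTRIES:
            raise ValueError("unknown country")
        if country == "US" and not state:
            raise ValueError("state is required for US addresses")
        submit_address_v1(store, street1, street2, postcode, city, state, country)

    # Fidelity level 3: derive the address from postcode plus house number,
    # using an external lookup service (passed in as `lookup_address`).
    def submit_address_v3(store, postcode, house_number, lookup_address):
        store.save(lookup_address(postcode, house_number))

Each level is a version you could stop at; which one you actually build, and when, is exactly the scope decision the slice gives you.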
The same type of slicing can be done for the ‘Basic home information’ story. Home information can perhaps be, at least partially, retrieved through a service. Price information might be available elsewhere. But a good start is to allow the user to provide the information. When it’s retrieved from a trusted source, fewer checks might be necessary in other parts of the flow, perhaps even some of the manual ones. Sticking to the simple version is always a tradeoff. But that is the point: being able to make a tradeoff, having different options available, gives you the possibility to craft a plan specific to your circumstances and to have those different options available when the situation changes. You might have wanted to go for the fully automatic version, but the dependency on that external service caused problems that you didn’t foresee. Unfortunate, but with a temporary investment in manual checks of the home information you can still release in time for that peak period you were aiming for, and the additional customers you gain because of that more than offset the temporary cost. Creating a plan means creating options.
Creating a plan means creating options.
Slices of control
Of course, you’ve just read a whole book about how to get parts of a legacy system under control by making sure the functionality is both well-defined and appropriately tested. How does that work come into this? It comes first. Let’s see how.
Control comes first.
When you are changing an existing flow, this works in the most straightforward way. The steps of the process of dealing with legacy come into play: you make sure the story map is there, identify the slices by looking at the variants of the flow, and look at which parts, which stories, have the sort of business rules that we need to look into in more detail. When you’re going to be extending the existing flow, you look at what parts of the flow you’re going to extend or change, and what types of business rules might be getting added. That helps you identify which slice, and which stories, really need to get tested. And the process of getting those stories under test becomes its own slice. Or slices.
To see an example of that, let’s step back from home insurance and pretend that the change we are making is adding an insurance package for classic cars to the car insurance flow. The flow remains mostly the same, but we need to add some classic-car-specific data to the car profile, with a different set of possible manufacturers, fuel types and years of build. We also need to change the offer calculations based on different tax and risk profiles for these cars. A new slice, or maybe two, to add to our story map.
When we add this slice to the story map, we see it’s going to be a variant of the existing flow, with changes in the ‘Provide car details’ and ‘Receive offer’ parts. As we identify the places where the changes will occur, we can step back and consider whether the knowledge we have of those areas is sufficient. We can check whether the testing is enough to give us confidence. Let’s say that we have sufficient high-level, low-resolution tests for this flow. We also know that we have very little testing of the business rules around the offer calculation. Additionally, the developers know that any changes in the ‘Provide car details’ screens and backend services tend to be error-prone. That means there’s work to be done to get this part of the system under control.
You already know how to do this work. First, you ensure that the current functionality is investigated by doing example mapping for the relevant areas. In this case, for ‘provide car details’ and ‘calculate offer’. Once that is done, you may, based on your findings, decide to do some refactoring of the ‘provide car details’ screens or backend to make it easier to add new data or screens. That means your slice of control could look as follows.
We simply insert that slice before the one where we build our new functionality. By making the work of preparing to change the legacy system explicit, we remove a lot of the risk and uncertainty from it. Not, to be sure, all the uncertainty. You can’t know exactly how much time these things will take, especially if you haven’t gone through this process very often yet. Once you have, it will become familiar, and reasonably predictable. Most importantly, though, this way it will be explicit and isolated. And the work to build new functionality on top of this will become much more predictable.
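To make the ‘getting those stories under test’ part of such a slice concrete, here is a minimal sketch in Python of a characterization test for the offer calculation. The ‘calculate_offer’ function below is a stand-in I made up for the real legacy code, and the numbers are invented; in practice the expected values are exactly the examples you collected during example mapping.

    # Stand-in for the legacy offer calculation; in reality you would call
    # the existing production code instead of this invented function.
    def calculate_offer(car_value, fuel_type):
        base = car_value * 0.002
        if fuel_type == "electric":
            base *= 0.9
        return round(base, 2)

    # A characterization test pins down what the system does *today*, so that
    # refactoring, or adding the classic-car rules, cannot silently change it.
    def test_standard_petrol_car_keeps_its_current_premium():
        assert calculate_offer(car_value=15_000, fuel_type="petrol") == 30.0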
You will have to spend this time. You can either plan for it, or be unpleasantly surprised by delays later.
If we go back to the case of the home insurance project, it is a little less obvious how to include the control work in the plan. After all, this user flow, and its story map, are new, so how can we identify where it interacts with existing legacy? This is a little more indirect, but not too far removed from the situation above. You will, as with the extension of the car insurance flow, need your people involved. If you look at the very same situation as outlined for car insurance, as soon as you start talking about the first few slices of work for home insurance you will identify the same risks. In fact, with the specific initial slice for the integration with the BigInsured APIs, you are already taking a similar action. In the same vein, if the calculation of the estimated cost for car insurance is in a part of the system that is not well understood, you very likely will need to ensure you don’t break any of the existing functionality there when adding the home insurance estimate calculations. So the story in the map above, ‘Create scenarios for existing package calculation’, will still show up, but in the home insurance map, inserted as a slice of control work before the work on the ‘Provide home details’ flow.
The important distinction is that here we lean much more on the developers’ understanding of the structure of the system to identify which areas are going to be impacted by new functionality, and whether those changes can be safely made. But do take the time to identify those areas, and plan to get those areas under control by targeted testing and refactoring.
The work needs to fit into the bucket
As I touched on before, in the section “Categorizing the size of work is still estimation”, I want to split the work in such a way that it fits in well-defined categories of work, and in the size associated with those. That means, quite simply, that I still need each slice to be of a size that it can easily fit into a sprint, and each story to be only a day or two of work.
The primary means of making that possible is playing with fidelity, and splitting off either increments or iterations of the functionality so that each sprint can deliver a coherent step in the right direction.
If you only think a slice will fit in a sprint, split it further until you know.
When considering slices of control, it can be harder to predict whether the work fits in a sprint or not. When you’re building a plan, the rule is very simple: err on the side of caution. You will most certainly be too optimistic in some areas of your plan, and encounter many surprises along the way. Unless you are completely confident, based on experience and initial investigations, that the work is easily contained in one sprint, and one slice, find a way to split it over more than one.
5.2.2 Dependencies
Believe it or not, what we’ve gone through so far has been the simple part of planning. After all, we’ve just been deciding what the work is that we want to plan! In the area of dependencies, things tend to get a little more complicated.
In “The Product Owner’s Guide To Escaping Legacy” (ibid.), I talked about the way dependencies are usually treated within prescriptive and less prescriptive methods of work. I said there were two main types of dependencies:
Dependency on people: The person or people needed to do the work are also working on another piece of work.
Dependency on other work: The work can only continue once another piece of work, often owned by another team or another organisation, has been delivered.
There were important differences between the way a prescriptive method and a less prescriptive agile way of working deal with dependencies. In a prescriptive situation, the main way of dealing with dependencies is to ‘manage’ them: keep track of dependencies and plan around them. In the agile situation, the main way is to change the system, meaning the team structure and way of working, in such a way as to minimize the occurrence of dependencies. In practice, you need a bit of both to handle any situation.
‘Dependency’ is another word for ‘waiting’.
The best dependency is a…
Of course, it is true that the best dependency is one that never occurs. And by ‘never occurs’ I mean that we do not have to explicitly take it into account in our plan. The way to ensure that happens is by creating a ‘whole team’, or ‘multi-functional team’. The goal there is to have all the skills that are needed to do any part of the work available within the team. The team will then internally make sure that the work is completed, without anyone outside the team needing to manage things so that the right people are available and involved.
The bare-minimum version of this is the team that includes software engineering and testing skills. It could also mean you have people skilled in front-end development, back-end development, database administration, design, testing, infrastructure and operations, and perhaps other skills in your team. There is always some friction between having all the skills and the size of the team. There can also be complications with the amount of work there is for a specific role, in comparison to other work that regularly happens in the team. You could write a whole book about those sorts of complications, and luckily for both you and me, Manuel Pais and Matthew Skelton have indeed written the excellent book Team Topologies (Pais and Skelton, Team Topologies: Organizing Business and Technology Teams for Fast Flow). We will see a few of the patterns they describe in the next paragraphs.
Between teams
The first way the ‘whole team’ approach to avoiding dependencies breaks down is when we find we have dependencies between teams. There are a few different ways in which that can happen, depending on the types of teams you have in your organisation. Since this chapter is all about planning, I can’t go too deep into each and every way to avoid or deal with dependencies, but it is important to have an idea of what you need to do, so that you can incorporate any necessary work into your plan. It’s also good to be aware of some of the shortcomings in the way your teams are structured that make the work of planning more difficult. Always try to make your work easier.
Always try to make your work easier. Even if that is hard work.
One way you can have a dependency between two teams is when both teams are building functionality, but one depends on the other to do some work before it can continue. In Team Topologies parlance, we’re talking about two ‘stream-aligned’ teams that need something from each other.
A common pattern is that both teams have different functional areas of the system that they are most familiar with, and a new feature one team is working on needs changes in the other team’s area. In the most extreme versions of this, teams might ‘own’ different systems (services) altogether, and the other team doesn’t have access to the code. In less extreme versions, there are still written or unwritten rules about making changes in another team’s area, which can mean a wait until that team has time to work on the other team’s feature.
The best way of dealing with this sort of change is some level of shared code ownership, so that the team that needs the change can do it, perhaps getting some help from the team that has expertise in the area, to avoid making any mistakes. Remember that building in space, in the sense of time available to any team, for that sort of mutual aid is crucial for a smoothly running organisation. If all your teams are crammed full of work and do not have time to help anyone else, that is a great way of ensuring your whole organisation can’t deliver.
If there is no slack for people to help each other, pretty soon everyone will be waiting for everyone. Take a step back.
The easier it is for teams to work in each other’s areas of expertise, the more smoothly you can expect these types of dependencies to be handled: less work to hand over means it is easier to get it done quickly.
Another type of dependency between teams is that of the specialist. If the work requires input from someone with a special skill set, such as security, design, or machine learning, then you might need to arrange for someone from a specialised team to aid in the work temporarily. The teams that are specialised in this way usually exist separately because their help is not needed too frequently, and they often teach and coach the other teams to be able to do some of the work independently, even if they still contribute when a more extensive or complicated change is needed. In Team Topologies, such teams are called ‘enabling teams’. It is important to identify that you will need some help, and to involve such specialists in your planning and design activities so that they can add their insights on when and where they might be needed.
This is a good example of where you actually have a dependency on another team, or person, and need to consider it in your plan. A slice, or story, where such help is needed has to be clearly marked and communicated, so that the enabling team can ensure they have capacity.
The need for (technical or other) expertise is a dependency to note in your plan.
Again, it is very important to recognise that if this sort of dependency happens frequently you need a change to the (organizational) system, restructuring teams or acquiring expertise. That expertise needs to be added to the team(s), either by coaching or hiring, to avoid delays caused by simple bottlenecks. Ensuring that every team can do basic work in security or machine learning will mean you have to ask for help in execution less often, and only in complex cases.
For the cases where you really do need that help, it can very well be that you have to adjust your planning and priorities to accommodate the scarce availability of your experts. So of course you try to avoid that as much as possible!
If you need a certain kind of expertise frequently, change your team composition, or spread the knowledge.
There is one other type of dependency between internal teams that I should mention: the one between teams created around classical siloed types of work. Examples of such teams are ‘Business Analysts’, ‘Testers’ (or ‘QA’), ‘Design’, ‘Development’ and ‘Operations’. In those situations the dependency is purely artificial, but it is still one that I encounter fairly frequently, especially in organisations that have legacy systems. When you do run into that situation, it is extremely serious. The reason it is so serious is that in this case the dependency will be there for every piece of work that you do! That means it has a stranglehold on your whole development process.
If you do find yourself in that situation, the best advice I can give you, apart from restructuring the teams, is to break the silos anyway. Even if you do not have the means to force that organisational change, it may very well be possible to build relationships between the people in those silos to reduce the impact. That means getting close to the leaders of those silos to get people (even just temporarily) assigned to your teams, in such a way that you can get them to work closely together. Emulating a real ‘whole team’ in this way is not ideal, but it is the only way to avoid the high tax of continuous handovers you will be stuck with otherwise.
On other organisations
What happens, though, when the dependency is not within your own organisation, but is instead external? Our friends at InsAny do indeed have a situation like that with BigInsured, and have already identified that as a big enough risk that they’ve pulled the work of integrating with their partner’s APIs forward as much as possible.
This is a much more difficult situation. You have much more limited influence on the planning of the other organisation. You have even less influence on the availability of their people to support your teams. When a delay happens, you have few options. When a change in the way to integrate with the external party happens, it can be even worse for your own plans.
Some of this is outside your control, and goes into the territory of contracts. I’ll not pull a discussion of that into this book. But when crafting your plan, there are some things you can do to deal with these sorts of dependencies.
Decide up-front what you will do when timelines of external dependencies slip. In the moment, the temptation to push on is too strong.
The first thing to do is to be very clear and explicit about the expected deliveries of the external party. If BigInsured does not have a test environment with the appropriate test data available by the time the sprint that deals with that integration starts, that means a significant delay for the whole project. Those types of crucial moments should be documented, and the decisions you will make based on that situation should probably be made up-front: a delay of one or two sprints might be acceptable, but if the delivery comes after the start of the third sprint, the project can’t shoulder that burden, and we will need to signal significant delays or stop the project. Making those calls up-front can seem uncomfortable, but if you don’t, the sunk cost of the work in progress can make doing the right thing much more difficult.
There are a number of technical practices that can help make it easier to deal with shifting releases from external dependencies. Those can also be useful for some types of internal dependencies.
Managing dependencies with technical skills
You do not have to know all the details about the ways in which development teams can handle dependencies at a technical level, but it helps to be aware that these methods exist. More importantly, you need to be aware that it is almost always helpful to employ them to manage the risk of depending on systems that you do not develop yourself.
The best way to describe these techniques is to call them decoupling: limiting the interdependency of two parts of a system. The term coupling comes from computer science, where it is seen as one of the most important aspects of software design. A simple way to describe it is that two parts of a system are more tightly coupled if, when you change one part, the other also needs changes to keep working correctly. To avoid that happening, one of the things we do is try to separate the two parts very strictly, by putting a ‘contract’ between them. You probably hear that referred to mostly as an ‘API’, but such contracts exist at all levels throughout any system.
One way to technically manage a dependency is, then, to very clearly specify the contract between the two systems or parts of a system. Doing that in detail means that the two teams or organisations can work independently, but still be as prepared as possible for when both systems get connected (‘integration’). The best way to do that is to actually create the integration in the form of code, and write tests for that integration. This is where the term ‘integration test’ comes into play, but since that term is misused in many places, we will use the term for a more specific form of integration test: ‘contract tests’.
If you write tests that capture how you will be calling an API, then once the API is delivered you can run those tests against it; if they pass, it’s likely everything will work as expected. If you also share those tests in a way that lets the team you depend on run them, they can be sure, while developing their part, that they won’t break the contract. All of this is so that there’s some way to know as early as possible when expectations are not going to be met.
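As a sketch of the idea, without using a dedicated contract-testing tool (Pact is a well-known one), a contract test can be as small as calling the API the way your code will call it and asserting only on the parts of the response you depend on. The URL, endpoint and field names below are made up for the example.

    import requests

    # Hypothetical test environment for the partner API.
    BASE_URL = "https://test.biginsured.example/api"

    def test_quote_endpoint_returns_the_fields_we_depend_on():
        response = requests.post(
            f"{BASE_URL}/home-insurance/quotes",
            json={"postcode": "1234AB", "house_number": 12},
            timeout=5,
        )
        assert response.status_code == 200
        body = response.json()
        # Only the parts of the contract that our flow actually uses are
        # checked; anything else the provider returns is free to change.
        assert "quote_id" in body
        assert isinstance(body["yearly_premium"], (int, float))

Run against whichever test environment the partner provides, a test like this is also the earliest possible warning that the delivery is not going to be what you expected.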
Now, that is fine if you are in communication with the other team, and you can share those contract tests. That is often not the case, and teams then build a more extensive integration layer towards such a third-party system, just to be able to at least write contract tests and to have a place where they can deal with smaller changes in that system. A ‘gateway’ towards the external system, one that can transform responses into what is expected by the internal system, can isolate any problems that occur when your dependency delivers a surprise late in the project.
There are different terms for these types of practices. Consumer Driven Contracts, which specify only the part of the interaction with the third-party system that you actually need, no matter how complicated their API might be, can be combined with a ‘gateway system’, or a more local ‘adapter’, and protected by contract tests to give you the maximum security against surprises from an external dependency.
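A sketch of what such an adapter can look like: one small class that is the only place in the codebase that knows about the external response format, and that translates it into the shape the rest of our system expects. The class and field names are, again, made up for illustration.

    from dataclasses import dataclass

    @dataclass
    class Quote:
        """The shape *our* system works with, independent of BigInsured."""
        quote_id: str
        yearly_premium: float

    class BigInsuredGateway:
        """All knowledge of the external API format lives here; if that API
        changes shape, this is the only class that needs to change."""

        def __init__(self, client):
            self.client = client  # a thin HTTP client, injected so tests can fake it

        def get_home_quote(self, postcode: str, house_number: int) -> Quote:
            raw = self.client.post(
                "/home-insurance/quotes",
                json={"postcode": postcode, "house_number": house_number},
            )
            # Translate the external format into our internal one.
            return Quote(quote_id=raw["quote_id"],
                         yearly_premium=float(raw["yearly_premium"]))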
Depending on an external system is risky, and always warrants doing extra technical work to minimize that risk.
Again, you can forget all the details, but know that when there is a risk to your plan because of integration between different systems, especially external systems, these techniques are there to minimize that risk and should be used.
Managing dependencies with communication skills
Whenever possible, a dependency is best managed by closing the distance between the two parties. Especially when you are dealing with different teams in your own organisation, there’s never a downside to actually working together.
Working together can take many different shapes and sizes. Some advanced agile teams will simply invite an expert (whether that is expertise in an area of the codebase, or in an aspect of the work such as security) to work with them. Not handing off the work to them, but working together with the team as an ensemble (‘mob programming’) or in a pair with one of the team members. That way they can bring their knowledge into the team, making the team better prepared for the next occasion when there’s a dependency.
There’s never a downside to actually working together.
A more limited approach can also deliver considerable value, however. Ensuring that there are common design sessions, where the external expert works with the team to look at how a change can be technically implemented, gives a lot of value, and still allows the team to do most of the work themselves instead of being dependent on another team or person. Say a change is needed in an area of the system that is part of another team’s domain: getting a senior from that other team to discuss how the change should be made will remove most of the uncertainty and doubt, and allow the team to move forward with confidence and speed. If the involvement of the other team comes only when the change has already been created, in the form of a review or ‘pull request’, then there’s a significant chance there will be issues with it and that it will require rework. Avoid that sort of late communication.
Of course, the contract definition, or API definition, I talked about earlier is also one of those areas where close coordination will do wonders and ensure everyone knows what to expect. Wrapping that sort of common understanding in a contract test will help cement the agreements, and work to document decisions in the same way our examples and scenarios do for functional changes.
Managing dependencies with management
When all of these approaches fail, and there’s no way you can limit the risks for your dependencies, then your project is in real trouble. How much trouble depends on, well, your dependencies. It depends on how likely it is that one will not deliver in time. Or not at all. It depends on how much of a delay they will have. And it depends on how likely it is that they will deliver what you need and expect them to deliver.
Here be dragons.
This is where you indeed have to ‘manage’ dependencies by checking on progress, and adjusting to changing circumstances. Hopefully, you’ve given yourself as many options as possible to either still have a lower-fidelity working feature without that dependency, or at least options to delay work on it while still delivering usable and valuable versions of your feature as you wait.
There are some who would like to make these sorts of dependencies visible, and I’ve seen people go wild with red yarn across planning boards to show them. You do need to keep paying attention to them, but if you ever get into a project where you need such visual aids to keep track of that many dependencies, the project has already failed. Avoid getting into that situation.
5.3 Planning
All the work we’ve been going through in this chapter has been to build a plan: thinking through different configurations of functionality, splitting it up into parts that can be combined in different ways and orders, ensuring we know about any real dependencies and deal with them as best we can. That is the real work of planning: creating options.
You may have noticed that in this chapter, I have not talked about time much. Though the word planning tends to be associated with timelines, Gantt charts and deadlines, those are actually the easy part, once you know what the actual work looks like.
The real work of planning is creating options, timelines are just an outcome.
5.3.1 We have the building blocks of the plan
Let’s recap what we’ve accomplished so far, and place everything in the right context to start looking at those planning timelines.
- The type of things we plan are slices and stories
- Each slice or story fits in a ‘bucket’, or time-box
- We use story mapping to split any new functionality into slices:
- Slices of functionality: help the user achieve a (variation of) a goal
- Slices of fidelity: split functional slices into incremental or iterative parts
- Slices of control: deal with legacy: document, test, improve
- We deal with dependencies:
- By removing them as much as possible by putting the right skills together in the team
- By capturing them in detail using contracts and tests
- By isolating their impact
- By close cooperation with other teams when necessary
- By keeping track and adjusting expectations if there is no other way
All of that gives us different options, different paths towards building the functionality that we need. The way we went about this also gives us priority, either through the order of control and fidelity slices, or because of the dependencies we have identified. Let’s see how we can create a planning timeline from the options that we have.
5.3.2 We can slide the building blocks into place on a planning timeline
Creating a planning timeline is much simpler than the work we’ve done up until now, and it’s simpler because of that work. Time is linear, and progresses in fixed length increments. Even if you are not using sprints or iterations, we have defined the unit of planning as being the slice, and the slice is a time-box. My favourite time-box size for slices is a week, because everyone already knows it and has a good feel for its ‘size’. But you can use two weeks if you have to. I’d not recommend going any larger, because it will make your planning significantly more uncertain.
So here we are, with a fixed set of time-boxes, and a set of slices that can be placed into them.
As you put your slices on the timeline, you can make some initial choices. Obviously, it makes sense to put slices of control before any slices of functionality that build on those areas. Since you’ve made both fine-grained, that should be easy to do. If there are any slices with a particular risk, such as a dependency or a particularly difficult piece of work, try to pull those forward as early as possible. The first home insurance slice of the InsAny team is a good example of that.
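If it helps to see those placement rules written down, here is a toy sketch in Python: control slices go before the functionality they support, risky slices are pulled forward, and each slice gets its own week. The slice names echo the running example, but the code itself is purely illustrative and deliberately simplified, not a planning tool.

    from dataclasses import dataclass

    @dataclass
    class Slice:
        name: str
        kind: str          # "control", "functionality" or "fidelity"
        risky: bool = False

    slices = [
        Slice("BigInsured API integration", "functionality", risky=True),
        Slice("Scenarios for existing package calculation", "control"),
        Slice("Successful basic application (rest)", "functionality"),
        Slice("Address lookup via external service", "fidelity"),
    ]

    # Simplified ordering rules: control work first, then risky slices, then the rest.
    def placement_key(s: Slice):
        return (s.kind != "control", not s.risky)

    for week, s in enumerate(sorted(slices, key=placement_key), start=1):
        print(f"week {week}: {s.name}")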
The more interesting choices are around our slices of fidelity, and especially those where you were able to create iterative slices. Those are the places where we can actually make decisions on scope! I cannot overemphasize the value of creating those iterative slices and being creative in finding configurations that enable a possible, even if not desirable, early release of different slices of functionality. Contrary to what you might be used to, the best approach to getting things done is to take the first iterative slices of the different ‘slices of functionality’, and get to a point where that functionality could be released. Delaying the work on the later iterative slices, the ones that get those different pieces of functionality to the point where you want to release them, is how you manage risk in your planning.
If you stick to those rules, you will end up with a plan, and a planning timeline, that gives you the most options, and the best chance to deliver an acceptable version of your functionality as early as possible. Once you get things to that point where you could release, releasing becomes a choice. If you try to get to the point where you want to release for each part of the whole, you are denying yourself that choice.
Get to “could possibly release, but I don’t want to” as soon as you can. After that, releasing becomes a choice.
5.4 Dealing with time constraints
Once you have a planning timeline, you have knowledge about when you could conceivably have something that you could (even if you don’t want to) release. That is valuable information. It’s also wrong. Or at least, not reliable. Remember that we are still dealing with estimates, also known as guesses, about slices fitting into their time-boxes. On top of that, there tend to be surprises along the way. So be careful about relying on the planning timeline.
Still, with the timeline in hand, we can make some sort of prediction about the future, no matter how flawed. If you know when you will be starting the work, you can even attach dates to possible places you could release. If you do, it is wise to still have a considerable margin in place if the dates you come up with are communicated to the outside world, or to anyone that might base their own planning on them. It’s always embarrassing if the marketing department has bought advertising time, and it turns out you’ll not be ready after all.
‘Deadlines’ are useful input to our planning, but can’t be taken at face-value.
Sometimes, though, there are important dates that are not directly driven by your planning timeline. The information shared by Roger, InsAny’s CEO, that there’s a significant peak in home insurance sales in September, is an example of external forces driving us to deliver at a certain time. Some people call those dates ‘deadlines’. I don’t, because that term implies they are much firmer than they generally are in reality. Not being in time for the September peak might cost the company money, and every week of delay would add to the amount lost. But there are worse ways of losing money than not being able to sell, such as losing customers of other products and losing future customers, which might happen if an incomplete or quality-impaired product were rushed to market.
Still, if you know of such dates, you can put them next to your planning timeline and use that to make choices. And if you see you can’t make the date, sometimes there are options outside your normal way of working that can still be employed to help, by removing or simplifying parts of the needed functionality. In the case of InsAny, maybe the initial home insurance package could be offered, but the upsell to theft insurance could be done manually after the initial sale. Or perhaps, if the integration with BigInsured does not work as expected, part of the approval process could be done off-line to still capture some of that sales peak. These sorts of measures are certainly not what we want, but they can be a way to skip or simplify whole steps of the user flow to accommodate externally defined dates.
None of this is very specific for legacy systems, except that these types of situations tend to happen more often in the case of legacy, simply because moving fast is difficult. The more parts of the system you get under control, the more predictable you become. The more parts you then manage to refactor, the faster you become. And no matter how fast you are, you will still run into these types of situations, because the world always moves even faster.
I’m sure that some of you are reading this and wondering what to do when you have more than one team to deal with. I have gone on about the importance of having teams that are able to work independently, and of stream-aligned teams. And ideally, the timeline view of the work of multiple teams has each team’s work as separate and independent. I would like to add that it is still very much possible to have multiple teams working on this type of project in a way where they actually help to deliver the intended functionality earlier. As long as there is enough shared knowledge of the system and the domain, and you encourage and enable the teams to work closely together.
This is not simply a matter of dividing the slices you’ve come up with across two teams. That would just mean they get in each other’s way. It is also not, you will notice, a matter of creating dependencies between the teams, with one delivering a piece of work so that the other can use it in a following sprint. This is teamwork across teams, where, for example, team 1 is working on configurable packages, creating an API that team 2 is using, in the same sprint, to offer the user the choice of those packages in the flow. This requires close coordination and real, continuous communication between the teams. Though delivery is earlier, it is not twice as fast. That sort of coordination still has an overhead. But it can be worth it if the market demands are there.
5.4.1 Uncertainty means slack
I mentioned above that you will run into situations where the work you expected to fit into a time-box doesn’t. When that starts happening, your whole planning timeline shifts back. It can be tempting to say that taking one day more for a slice is not that big a deal. After all, your timeline only shifts back one day, right? That seems logical, but unfortunately it does not work like that. This is the other side of statistics, and just as inescapable.
If your slices are conservative, meaning that you are fairly certain that you can comfortably complete the work within the time-box, with time to spare, an occasional surprise that shifts some work to the next iteration can easily be absorbed. If your slices are less conservative, such surprises can mean that all the work starts shifting. The sprints will already be under some pressure, just to get the planned slice completed, and any additional work simply increases the pressure to the point where mistakes get made. In such situations, very quickly, quality is impacted, necessary improvement work gets skipped, teams can’t help each other anymore (see “Dependencies”), the planning timeline becomes completely unreliable and, congratulations, you find yourself building more legacy.
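If you want to see why this is statistics rather than pessimism, here is a toy simulation with invented numbers: each slice is planned at five days, the actual effort varies by up to two days either way, and anything that does not fit spills into the next slice. On average, the plan with no slack ends up several days late after twenty slices, while a single day of slack per slice absorbs most of it.

    import random

    def final_delay(slack_days, n_slices=20):
        """Toy model: each slice is planned at 5 days, actually takes 5 +/- 2
        days, and any overrun spills over into the next slice."""
        carry = 0
        for _ in range(n_slices):
            overrun = carry + random.choice([-2, -1, 0, 1, 2])
            carry = max(0, overrun - slack_days)
        return carry

    def average_delay(slack_days, runs=10_000):
        return sum(final_delay(slack_days) for _ in range(runs)) / runs

    random.seed(42)
    print("average delay with no slack   :", round(average_delay(0), 1), "days")
    print("average delay with 1 day slack:", round(average_delay(1), 1), "days")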
Make your slices small. Ensure each iteration has plenty of slack in it, so the team can do their work to a high standard and has time to help each other, as well as other teams that depend on them. It is counterintuitive, but the best way to go fast is to not be in a hurry. Any time you start hearing terms such as ‘sense of urgency’ or ‘commitment of the team’, you are already far down that slippery slope to legacy and a stalled engineering organisation.
Even with all that, with enough slack in your sprints, you will still run into surprises! Especially if you’ve not been going through the cycle of getting your legacy under control for very long yet, these surprises will often be in the slices of control. If you do have externally significant, or even just internally ‘committed’, dates, make sure you still apply a buffer of a number of iterations before them. And do everything in your power to at least get a ‘could release’ moment as early as possible in your timeline, so that the ‘want to release’ one will not cause stress.
5.4.2 Iterate on your planning
It should go without saying, but here I am. If you do forge a plan, and create a planning timeline, you should revisit that plan regularly. Every week, or every sprint, is a good rhythm. Check on progress. Check that you are indeed still easily delivering each slice. Check that those few dependencies you have to manage are still on track to be available when you will need them. And adjust. By adjust I mean reduce scope, remove some of those fidelity slices, or communicate the shifting timeline. But keep reacting, and adjust your plan frequently.
5.5 Keep it simple
The specifics of what I’ve presented in this chapter are how I deal with planning. They are really simple. That doesn’t mean they’re easy, and it certainly doesn’t mean it’s not a lot of work to do them well. You can use different techniques for getting to the requirements, and for how you deal with dependencies. That’s fine, and I do that all the time, depending on what the teams and organisations I work with are used to.
There are only a few things that are important to me. The first is that to plan, you need to know what the work is! At least at a high level. You can’t create a plan without actually thinking about what you are going to make. The next is that you need to really define the building blocks, the units that you use, the Lego blocks that you build your plan with. I’ve used slices and stories, and created clear buckets for them, defining their time-boxes. It doesn’t matter too much what you use, as long as you are specific in your definitions. Don’t try to be much more careful about estimation; it won’t help, and it will make everything more complicated.
Another is that the only way to have a real chance of delivering a successful project is by becoming very good at delivering slices of fidelity. Iterative development of functionality is counterintuitive for most, but is the most important tool in the tool chest of product owners and project managers. You cannot do this without it.
The last part is that all of this should be kept as simple as possible. The description of how to deal with dependencies doesn’t advise you to set up your teams and organisation to avoid dependencies for nothing. Complexity kills. If it’s difficult for someone to understand your planning in 5 minutes, without an explanation, you’re in trouble.
Know the work. Take small steps. Get in control first. Don’t rush. That’s it. Keep it simple.
Acknowledgements
This book is partially based on my earlier book “The Product Owner’s Guide To Escaping Legacy” (Lagerweij, The Product Owner’s Guide To Escaping Legacy), in which I discuss in detail how to use the practices described in this book to regain control over a legacy system. In that book, I thank all the proof readers and the people who inspired me while writing it. In this one, I’ll be shorter and just thank my wife and constant partner in crime, Suzanne, who helped me refine the material into workshops and trainings, and make it clearer and easier to understand.
Appendix A - Workshops
As part of the work of teaching and coaching the practices in this book, we have developed a range of workshops over the years. Each workshop teaches specific practices and skills, and has supporting materials that can be used to deliver it. In this appendix I list those materials (slides, handouts, print material), as well as a short description of each workshop, so that readers can pick up the materials to run their own workshops. All this material can be used, but I expect any of my branding to be kept on the materials.
Since we also deliver these workshops at conferences, the descriptions here are often from the proposals sent in for those conferences. Others are from the leaflets that we use to promote the training programs we deliver to clients.
Skinning Cats: Dynamic Discovery and Planning
No cats have been harmed in the making of this workshop. The saying goes: “There’s more than one way to skin a cat”, and there is also always more than one way to deliver that feature.
The workshop uses a simple example of an e-commerce site that sells music (CDs and vinyl) and posters. In our workshop, the shop has decided to go into the market of ticket sales, taking on Ticketmaster and selling concert tickets. We provide a ready-made story map, with the stories already created and the releases and planning slices marked. During the activities we change the circumstances by introducing deadlines, and by marking specific stories as complex or as having external dependencies.
Abstract
Iterative delivery has always had friction with the need to present a plan that is easy to digest by those that stand a little further from the work: stakeholders and managers. In this workshop we present a way to seamlessly integrate the dynamic process of iterative discovery with the need to present A Plan.
We guide you through the process of planning and dealing with risk, using the familiar tool of Story Mapping, extended to create a natural fit between our way of working iteratively and incrementally and presenting a plan. You will learn how to easily create the type of roadmap view of a plan that shows expected timelines while keeping different options and variations on the plan open and transparent. Since we often need to plan in difficult circumstances, we show you how to deal with deadlines, risk and technical debt in your planning.
Shorter version of the abstract
Iterative delivery seems at odds with the need to present a plan that is easy to digest by those that stand a little further from the work: stakeholders and managers. In this workshop we present a way to seamlessly integrate the dynamic process of iterative discovery with the need to present A Plan.
We guide you through the process of planning and dealing with risk, using Story Mapping, extended to create a natural fit between working iteratively and presenting a plan. You will learn how to easily create the type of roadmap view of a plan that shows expected timelines while keeping different options and variations on the plan open and transparent.
Notes for the program board
Intent
This is a new workshop, based on work with many clients and on Wouter’s book “The Product Owner’s Guide To Escaping Legacy”. In the workshop we discuss the challenges of aligning the dynamic nature of iterative discovery with the need to present a simple and linear plan to management and stakeholders.
In the process we also touch upon some misconceptions about planning that have been known to cause issues. We also deal (depending on the time available) with the complication of deadlines, and with the difficulties encountered when planning in a legacy system.
Outline
00:00 - Introduction
00:05 - Planning vs having a plan
Activity:
- What is the goal of a plan? What was a successful plan for you? What was not?
- Group discussion and sharing results
- Planning is creating options
- Planning is dealing with risk
- Planning is understanding when to make decisions
- “A Plan” is usually a flattened version of the above, expected to represent the timeline only
00:15 - The iterative approach: overview
- Presenting a fully worked out Story Map that is the base of further activities
- Short intro to Story Mapping
- Story Map as a visual model of iteration and incrementalism
- Story Mapping extended: slicing it finer and the importance of naming
- The link to sprints and sprint goals
- Iteration and Fidelity
- Marking the release points
00:25 - First iteration of making a plan
Activity:
- Map the slices on the Story Map to a timeline (pre-printed materials)
- Share: when will you be done? When will you release?
- The portfolio / high-level view of a plan
- Sprint goal, or delegated decision making
- The power of abstraction (communicating at the right level of detail)
- Which decisions can you take, and when, as part of this plan? How could that be represented?
00:45 - Deadlines
- Deadlines: real or not?
- Iteration, buffers and communication: transparency for our margins of error
Activity:
- Deadlines added to the plan, what decisions can be made?
- Share: Which deadlines do you accept? How did your plan change? What could make this process easier?
- Iteration size and the value of smaller steps, smaller sprints (“We’re going to need a smaller bucket”)
01:00 - Complexity and legacy
Activity:
- Some stories get marked as being complex and having unknowns. Some as needing changes in messy parts of the code. How do you adjust the plan?
- Share: options for changing the plan
- The value of the whole team: engineering input, ux input
- Marking risk
- Isolating risk
- Slices and focus: name/goal/flexibility in scope even for small slices
- Adding slices for legacy control
- Show the full picture of a changed Story Map and plan
01:15 - Share and document the plans created
01:25 - Q&A and closing
Learning Outcomes
- How to use iterative development and Story Mapping as the base of solid and reliable planning
- How to use iterations and increments to build in safeguards to hit any deadline. Or at least the real ones.
- How to identify risk and technical debt and seamlessly integrate dealing with those into your planning
Materials
- Slides: Skinning Cats - Dynamic Discovery and Planning
- First version of the story map with just the initial stories: Story Map Story Generation
- The story map with the release slices marked: Story Map Release Slices
- The story map with planning slices marked: Story Map Planning Slices
- A page with the slices to put on the timeline: Slices in black and white
- A page with the slices to put on the timeline: Slices with background color
- Empty timeline without any deadlines marked: Timeline without deadlines
- Empty timeline with deadlines marked: Timeline with deadlines
- Empty timeline with deadlines and dependency dates marked: Timeline with deadlines and dependency dates
References
Todo:
- Add appendix with examples from workshop?
- Add appendix with LLM use?
- Publish ebook/print book




