Anyone who has worked in an agile organization has found that certain projects don’t quite fit the agile mold. Nowhere is this more apparent than with research-oriented projects. After all, if there is complete uncertainty in the scope and outcome of a project, as is the case in research, how do you create user stories and estimate story points? And if you can’t create stories and estimate the associated costs, how can you hold your team accountable, communicate status to the rest of the organization, and make cost/benefit tradeoffs? Simple! You can’t.
I’ve personally dealt with this issue after hiring several researchers to work on an agile software product team. Initially, I struggled to interleave our research projects with our other production work, so I started looking for a solution. The answer came after reviewing the agile literature and the scientific method and concluding that research projects simply represent an extreme case of what the agile process is ultimately trying to solve. Below I will walk you through how I arrived at this solution and how you can apply similar tactics in your own research organization.
Early in my career at Microsoft, someone handed me a copy of Steve McConnell’s book Code Complete.
At the time, my greatest takeaway from that book was the concept of the “Cone of Uncertainty,” which states that the uncertainty of a given project decreases as time progresses and more details are fleshed out.
Historically, the “Cone of Uncertainty” was dealt with by creating detailed upfront plans and using waterfall project management approaches. The trouble with those methodologies is that they’re extremely resistant to scope change, largely because scope change reintroduces uncertainty.
The Agile Manifesto attempts to eliminate the “Cone of Uncertainty” problem by following the principle of “Responding to change over following a plan.” Most agile methodologies use some form of iterative development to reduce uncertainty, the idea being that if you’re working on smaller, well-defined chunks of a larger project, uncertainty is removed and the project can slowly adapt to changing requirements. As Mike Cohn wrote in an article titled “The Certainty of Uncertainty”:
“The best way to deal with uncertainty is to iterate. To reduce uncertainty about what the product should be, work in short iterations and show (or, ideally give) working software to users every few weeks. Uncertainty about how to develop the product is similarly reduced by iterating. For example, missing tasks can be added to plans, inadequate designs can be corrected sooner rather than later, bad estimates can be amended, and so on.”
Taking the above together, I can conclude two things. First, the agile method attempts to reduce or eliminate uncertainty by making every project a function of smaller work items iterated over time. Or, framed in mathematical notation:

Project = Σ (t = 1 to T) Σ (n = 1 to N) Story_n

Where: T = max iterations, M = backlog, N = user stories belonging to M
Second, if a research project is really just a project with maximum uncertainty, then the same framework should apply; there would simply be an unbounded number of work items over an unbounded amount of time. Or, framed in mathematical notation:

Project = Σ (t = 1 to ∞) Σ (n = 1 to ∞) Story_n
By this logic, a research project should actually work within an agile framework. We just need to figure out how to construct M (the backlog) and how to bound M and T (the number of iterations).
So what are reasonable user stories for a research project, and why are they potentially infinite? It occurred to me that research generally follows the scientific method, and that the scientific method might be a good framework for story generation.
In essence, the scientific method can be boiled down to three phases: a research phase, an iterative hypothesis testing phase, and a communicate-or-productize phase. The unbounded component of research is that many hypotheses end in failure, leading to another hypothesis that must be tested, and this can potentially go on ad nauseam. This provided me with a compelling framework for breaking research into user stories.
The first story in any research project corresponds to the first phase of the scientific method. This story should be a time-boxed spike that frames the initial question, covers any background research, and has as its acceptance criteria the generation of the stories required for the next phase of the project: hypothesis testing.
The next set of stories all belong to the hypothesis testing phase. These include any development work required to test the hypothesis, any data collection, running the tests, and analyzing the results. If the hypothesis proves false, the team should circle back to the background research phase and continue the process.
The final phase in this framework is only relevant when a hypothesis is proven true. It contains multiple stories, including communicating or publishing the results, protecting any IP, and handing off to whoever might be building the final product (which might be the same team). The final handoff story should also be a spike, and its acceptance criteria should include the user stories required for the production deployment.
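To make the three phases concrete, here is a minimal sketch of how a research backlog might be generated from the scientific method. All names here (`Story`, `Phase`, the step lists) are hypothetical illustrations, not part of any real tool or the original process description:

```python
from dataclasses import dataclass
from enum import Enum


class Phase(Enum):
    BACKGROUND_RESEARCH = "background research"
    HYPOTHESIS_TESTING = "hypothesis testing"
    COMMUNICATE_OR_PRODUCTIZE = "communicate / productize"


@dataclass
class Story:
    phase: Phase
    description: str


def spike_story(question: str) -> Story:
    # Time-boxed spike: frame the question and do background research.
    # Its acceptance criteria produce the hypothesis-testing stories below.
    return Story(Phase.BACKGROUND_RESEARCH, f"Spike: frame and research '{question}'")


def hypothesis_stories(hypothesis: str) -> list[Story]:
    # One iteration of the hypothesis testing phase: development work,
    # data collection, running the tests, and analyzing the results.
    steps = ["Build test harness", "Collect data", "Run tests", "Analyze results"]
    return [Story(Phase.HYPOTHESIS_TESTING, f"{s} for '{hypothesis}'") for s in steps]


def handoff_stories(hypothesis: str) -> list[Story]:
    # Only generated once a hypothesis is proven true.
    return [
        Story(Phase.COMMUNICATE_OR_PRODUCTIZE, f"Publish results for '{hypothesis}'"),
        Story(Phase.COMMUNICATE_OR_PRODUCTIZE, f"File IP protection for '{hypothesis}'"),
        Story(Phase.COMMUNICATE_OR_PRODUCTIZE, f"Spike: hand off '{hypothesis}' to product team"),
    ]
```

The useful property is that a failed hypothesis simply loops back to `spike_story`, generating another batch of `hypothesis_stories` — which is exactly what makes the backlog potentially unbounded.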
BOUNDING AN UNBOUNDED PROJECT
Now, how do you make sure research stories don’t go on forever? How do you bound T and M? And how do you communicate the cost/value trade-offs to management?
I have found that the framework described above only works if you apply the following guidelines in conjunction with it. Specifically:
- For any research project to be considered, we must have enough information for it to pass the “sniff test” (i.e., is it possible in a reasonable amount of time, and does it make business sense?).
- The initial estimate for a research project is based on the expected number of hypothesis iterations, and the cost must be in line with the expected project value (i.e., if the research is perceived to have large value, it may be worth iterating for a long time).
- If the number of hypothesis iterations exceeds the original estimate, the cost/benefit analysis must be revisited, and the project should be canceled if its cost has exceeded the expected value.
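These guidelines amount to a simple stopping rule. The function below is one way to sketch it; the inputs (estimated iterations, per-iteration cost, expected value) are hypothetical parameters I am assuming for illustration, and in practice they would come from your normal estimation process:

```python
def should_continue(iterations_done: int,
                    estimated_iterations: int,
                    cost_per_iteration: float,
                    expected_value: float) -> bool:
    """Decide whether another hypothesis iteration is justified."""
    cost_so_far = iterations_done * cost_per_iteration
    if iterations_done < estimated_iterations:
        # Still within the original estimate: keep iterating.
        return True
    # Past the original estimate: revisit the cost/benefit analysis and
    # cancel once cumulative cost exceeds the expected value of the research.
    return cost_so_far < expected_value
```

The point is not the arithmetic but that the cancellation condition is explicit, which makes the cost/value trade-off easy to communicate to management before and during the project.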
What I have presented here is a process by which you can take an unbounded research project and place a structure around it that works in companies using an agile development methodology. Besides allowing research projects to function in an agile organization, this framework also provides a method for bounding research problems and communicating the cost/benefit trade-offs to management and other relevant parties. For those who have faced similar issues integrating research-oriented projects into an agile culture, I hope this methodology provides some ideas on how you can better integrate research into your processes.