In my previous blog, I talked about when to estimate user stories so that a Product Owner can do release planning based on velocity and relative estimates. This time, I will discuss another topic that I see many Scrum teams struggle with: how to implement improvements based on what is discussed in retrospectives.
Many Scrum teams have a hard time continuously improving. In retrospectives, problems and possible improvements are discussed. Then nothing happens. In later retrospectives, the same problems are discussed without noticeable changes. Retrospectives like this are a waste of time. Even worse, missing out on the opportunity to continuously improve is a big waste in itself. The velocity of such teams and the quality of their deliverables will almost certainly improve if they find ways to act on the improvements identified in retrospectives.
This is the advice that I have found useful for implementing improvements from retrospectives:
1. Collaborate with the Product Owner to add improvement items to the Product Backlog.
In a Scrum project, the Product Owner has to balance all stakeholders that want to use the team's capacity to achieve their goals. Stakeholders can be end-users, an operations department, a marketing department and so on, but the team itself is also a stakeholder. The work that a team identifies to improve itself can be prioritized by the Product Owner on the Product Backlog just like all other planned work for the team. It should be possible for the team to explain the advantages for the other stakeholders, or else it may not be such a good idea after all.
I have seen teams that identify significant amounts of work in retrospectives that would really pay off, but plan to do it next to the 'normal' work from the Product Backlog. If the team has an established velocity that the Product Owner expects to be continued, the team will just not find the time to do the 'extra' work. This typically happens to things like big refactorings, improvements to the continuous integration build and the introduction of automated functional tests. If such work is added to the Product Backlog, it will be estimated and prioritized together with the other planned work that the Product Owner wants the team to do. Once it is then picked up in a Sprint, the team will focus on it and get it done.
2. Maintain working agreements and update them as the outcome of retrospectives.
It's a good idea to be explicit about the things that the team has agreed upon. In his blog Team norming and chartering, Martin van Vliet describes how to establish such agreements. The resulting agreements should be prominently posted (on the wall or a wiki) and be visible to everyone. These agreements provide the basis for effective teamwork and can be referred to in case of disagreements. This is an important step in the evolution from 'just a group of people' to a team.
If you maintain working agreements, you can hold each other accountable, and you can inspect and adapt to improve them. A typical subject of working agreements is how to deal with bugs that are found in a sprint. For example: Are they entered in a bug tracking system or only put on the Scrum Board? How much context information is added to bug reports? Do bugs always have priority over other tasks on the Sprint Backlog? Another common subject of working agreements is which engineering practices the team will follow and to what extent: Will team members always do pair programming or only when they choose to do so? Will they always do test-driven development? The outcome of a retrospective can be to change one or more working agreements.
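As an illustration, a team's posted working agreements might look like the sketch below. The team name and the specific rules are hypothetical examples, not a prescribed set:

```
Working agreements - Team Phoenix (hypothetical example)
- Bugs found during the sprint go on the Scrum Board as red sticky
  notes; only bugs that survive the sprint are entered in the bug
  tracking system.
- Every bug note includes steps to reproduce and the build number.
- Open bugs on the Sprint Backlog are picked up before new tasks
  are started.
- Pair programming is mandatory for changes to critical modules;
  elsewhere it is at the pair's discretion.
- New production code is written test-first.
```

Keeping the list short and concrete makes it easy to refer to a specific agreement during a disagreement, and easy to change a single line as the outcome of a retrospective.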
3. Maintain a Definition of Done and reflect on it in retrospectives.
Many teams are not explicit about what their Definition of Done is. In a Definition of Done, a team specifies the general acceptance criteria for features that are delivered in a Sprint. For example: Should each feature be documented? Within which intervals should requests be handled? Should features be formally accepted by the business?
There are trade-offs in how ambitious a team can be in its Definition of Done. A retrospective is a good time to reflect on this. For example, part of the Definition of Done of a team that I coached was that all features should be tested on several versions of several web browsers. When the set of browsers for which testing should be done was defined, the Product Owner had no idea how much work this would bring about. In a retrospective, it was identified that testing for all these browsers was the main bottleneck for getting features Done. The Product Owner then decided that tests only needed to be done for a smaller set of browsers, based on usage statistics. The Definition of Done was changed accordingly, and the team's velocity increased significantly.
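To make this concrete, a Definition of Done along these lines could read as follows. This is an illustrative sketch; the individual items are assumptions of mine, not the actual list of the team in the example:

```
Definition of Done (illustrative sketch)
A feature is Done when:
- All acceptance criteria of the user story are met.
- Unit tests are written and pass in the continuous integration build.
- The feature is tested on the agreed browser set, chosen from
  usage statistics rather than an exhaustive matrix.
- User documentation is updated.
- The Product Owner has accepted the feature.
```

Note how the browser line encodes the trade-off from the example: the ambition level is an explicit, revisable choice instead of an implicit assumption.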
4. Maintain reports of retrospective outcomes and verify whether anticipated improvements took place.
My final advice is to inspect and adapt on the outcomes of retrospectives. You can never be sure whether a change actually results in the desired effect unless you verify it after the change has taken place. To be able to do this, keep track of the decisions that have been made in retrospectives, for example on a wiki. Then regularly look back on them to discuss the results of changes on the velocity and quality of work. For some teams, a fixed item on the agenda of every retrospective is to reflect on the results of decisions from the previous retrospective.
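A wiki entry tracking such a decision might look like the sketch below. The sprint numbers and outcomes here are made up purely for illustration:

```
Retrospective log (illustrative example)

Sprint 12 - Decision: reduce the browser test set to the most-used
browsers, based on usage statistics.
  Expected effect: fewer features blocked in testing; higher velocity.
  To verify in Sprint 14: is testing still the bottleneck? Did
  velocity change? Any increase in browser-specific defects reported?
```

Recording the expected effect and a verification date at decision time is what makes the later inspect-and-adapt step possible.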
I hope that this advice will help ScrumMasters to facilitate retrospectives that lead to higher velocity and higher-quality deliverables for their teams.