Scrum, almost by the Book

Not long ago, I led a couple of geographically distributed software engineering teams, with a considerable number of junior developers, as technical director and product owner. Together with my Scrum Master, we wanted to do Scrum right!

Disclaimer

Scrum works best when you have multiple people working on the same product towards one common goal with interdependent tasks. If this is not the case, you should not use Scrum. (I've seen it really backfire in art teams where people worked independently on separate assets).

Our Goals

We both felt strongly about value creation. Businesses don't care how many lines of code (LoC) you write or how many tickets you close - they care about the tangible value you create: time saved, quality, staffing required, and so on. On top of that, we also wanted to foster a productive, open and innovative environment for our developers to grow in.

Team Contract

We gave a short demo - what Scrum is, and what it means to us - to get buy-in from the team. For this, we made only one rule: let's try something first, and if we don't like it, we change it! We would keep this rule for everything and then review in our retrospectives.

To get the most out of adopting Scrum, it had to work for everyone. Team input, right from the start, is crucial.

Why not Kanban?

The most important thing for us is rhythm, because...

A problem with Kanban is that it makes no effort to keep things predictable other than the flow of work. There is the danger that the team can lose goals, focus and commitment when there are no checkpoints, such as reviews and retrospectives.

We wanted a tight process that we were the masters of, one that helps ensure we're always on top of our game.

Scrum Roles

Our teams were around 7-12 people strong, and multiple members shared responsibilities. As a game studio, we also had a technical director (i.e. myself) involved, who would formally be in charge of development. The teams were geographically distributed - we did almost all our work over Zoom and MS Teams.

Roles:

Together the Technical Director, Product Owner and Scrum Master would ensure the team always has everything they need to work, be it software, hardware or know-how about tech/product/process. They would also coach the team towards independence.

Usually, the roles of TD, PM and PO would be shared.

Our Weekly Schedule

We settled on two-week sprints for smaller projects with more junior teams, and three-week sprints for longer projects with more senior teams.

Week | Day       | Event
1    | Monday    | Sprint Planning
1    | Tuesday   | Daily, Release
1    | Wednesday | Daily, Stakeholder Meetings
1    | Thursday  | Daily
1    | Friday    | Daily, Weekly Recap
2    | Monday    | Daily
2    | Tuesday   | Daily, Playtest
2    | Wednesday | Daily, Backlog Grooming, Risk List Update
2    | Thursday  | Daily, Sprint Planning Prep
2    | Friday    | Daily, Sprint Review, Sprint Retrospective

Events

With the exception of Sprint Planning, the Review, the Retrospective and the Dailies, most events are flexible. But ideally, events should happen on the same days to create a predictable rhythm.

Sprint Planning

Based on the previous Stakeholder Meetings, Backlog Grooming, Risk List Update and Sprint Planning Prep, the PO and Technical Director quickly summarize the state of the project and the current priorities for the sprint.

The team discusses which items from the backlog to take into this sprint. The role of the PO and Technical Director is crucial here: they brief the team on the requirements and technical challenges for each backlog item and answer the team's questions. Only when the team is clear on the approach and outcomes will it accept a backlog item. If it is not clear, the PO and TD will find the answers until the next sprint.

In addition to features, the team will also pick a certain number of bugs.

The team then further breaks down the items, with the TD's and PO's help. It answers questions such as:

Once outcomes and the steps to reach them are clear, the team estimates the time for each task. The SM helps the team with estimation to ensure that each sprint has enough buffer based on known risks, so there is enough time for testing, evaluation and bug fixing.
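
To make the buffer idea concrete, here is a minimal sketch in Python; the team size, sprint length, estimates and the 20% buffer are made-up numbers for illustration, not our actual figures.

    # Rough capacity check during Sprint Planning: do the estimated tasks
    # fit once a buffer for known risks, testing and bug-fixing is reserved?
    # All numbers are illustrative assumptions.

    team_days = 4 * 9              # e.g. 4 developers, 9 working days per two-week sprint
    risk_buffer = 0.20             # e.g. reserve ~20% for known risks

    task_estimates_days = [3, 5, 2, 4, 6, 3]   # the team's per-task estimates

    capacity = team_days * (1 - risk_buffer)   # 28.8 plannable days
    committed = sum(task_estimates_days)       # 23 days

    if committed > capacity:
        print("Over capacity: drop or split an item before committing.")
    else:
        print(f"Committed {committed} of {capacity:.1f} plannable days.")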

We always tried to formulate sprint goals. A sprint goal is the common thread running through the backlog items the team picked; it unites the team's efforts and increases ownership.

Release

Optional: Builds that have been approved in the previous Sprint Review are pushed to production.

Not releasing immediately after an item is finished, and instead sticking to a schedule, has advantages:

Only critical bug fixes would trigger releases outside the schedule.

Stakeholder Meetings

A regular check-in with Stakeholders to discuss:

Weekly Recap

A quick health check - no more than 15 minutes - where the team summarizes their week. This allows us to spot issues early and course-correct in the following week.

Playtest (aka Dogfooding)

Many game studios have regular play-tests, where the entire project team evaluates the state of their work and aligns on expectations and priorities.

We decided to do the same for our own work. We would trial key workflows ourselves (albeit much less skilled than our artists!) to see how robust and efficient they were, and whether we'd run into problems that would require follow-up tasks or research.

We included everyone in the team. This creates alignment and lets us make connections between seemingly disconnected issues (e.g. we once found that our back-end network code introduced lag in the GUI due to data not being ready in time!).

The playtest doesn't just aim to find bugs - even though we had a lot of fun trying to break our own product! It also serves as a readiness check for the Sprint Review. The following days are used to polish and finish the work the team deems ready.

Backlog Grooming

Here the PO and TD re-order the backlog items as informed by our stakeholders. We rank items by value in two lists:

Tip: treating bugs as negative value is useful so bugs don't get included in project velocity. Velocity should measure how much value is newly created. Fixing something broken doesn't create new value - you're just catching up because you fell behind.
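
A minimal sketch of that accounting in Python; the item names and point values are invented for illustration.

    # Velocity only counts newly created value: features add points,
    # bug fixes subtract them (fixing something broken is catching up,
    # not new value). Items below are illustrative.

    sprint_items = [
        {"name": "inventory UI",      "points": 5, "type": "feature"},
        {"name": "save-game export",  "points": 3, "type": "feature"},
        {"name": "fix crash on load", "points": 2, "type": "bug"},
    ]

    velocity = sum(
        item["points"] if item["type"] == "feature" else -item["points"]
        for item in sprint_items
    )
    print(velocity)  # 5 + 3 - 2 = 6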

Backlog Grooming considers the following to determine an item's value to the project:

We would also go through the pre-backlog (or "in-box"). This is a backlog where feature requests and ideas can be dumped without a lengthy process. We want to make it easy for everyone to pitch ideas for innovation.

We would then evaluate the pre-backlog items against the product vision and our technical capabilities. For each item, we would either reject it, move it to a different project, put it on hold (until we have the capabilities), or accept it. On acceptance, the item moves to the backlog proper.

Finally, we would make sure there is a theme to the upcoming sprint. Ideally the backlog tasks are related and feed into each other for a result that's bigger than the sum of its parts. We can then craft the sprint goal around this.

Risk List Update

Here we would update our known risks to time, personnel, budget, quality and technical execution. We think in terms of likelihood and impact. Then we update our response (accept, mitigate, insure, avoid).
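
As a sketch, such a risk list can be as simple as a scored table; the entries, scales and scores below are assumptions for illustration, not a prescription.

    # A simple risk register: score = likelihood x impact (each 1-5),
    # sorted so the biggest exposures are reviewed first.
    # Entries are illustrative, not real project risks.

    risks = [
        # (description,                           likelihood, impact, response)
        ("Key developer unavailable next sprint",  3,          4,      "mitigate"),
        ("Third-party SDK licence not renewed",    2,          5,      "insure"),
        ("Build server hardware failure",          1,          3,      "accept"),
    ]

    for desc, likelihood, impact, response in sorted(
            risks, key=lambda r: r[1] * r[2], reverse=True):
        print(f"{likelihood * impact:>2}  {response:<8}  {desc}")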

Sprint Planning Prep

The PO and TD ensure they have all necessary knowledge about the top items in the backlog so they can prepare to brief the team in the next Sprint Planning, answer all necessary questions, and ensure the team is confident about their work.

This step may include technical research, presenting findings from stakeholders, researching useful background info for the team (e.g. research papers, references, training, etc.).

Sprint Review

Here we present the product to our stakeholders. This is the ultimate acceptance test. Whenever possible, we would demo the actual product and give stakeholders a chance to try it themselves. The goal is to build trust and confidence with our stakeholders, so we only showed completed features that we were confident to release.

After the Sprint Review, the PO can choose to release the product if it was received well. We never release for the sake of it. Releases must always build trust in our product and in the development team. We draw on this goodwill if we ever have to delay a product or need support (e.g. for testing and R&D) from our users.

Sprint Retrospective

This event is all about the team. How do we work, how do we work together, how do we work with stakeholders? Are our tools and processes adequate? What can we improve? The goal is to make us work better together and improve how we deliver value.

The key is to keep the retrospective blame free and forward looking. The past only concerns us in how it can guide us to do better in the future!

Dailies

Next to the typical stand-up questions, we would also address the following:

Definition of Done (DoD)

We had a simple method of growing our definition of done: whenever a developer said "it's done, but...", we would add the "but" to the DoD, such as:

When it's time to close tasks, we want to be assured, given the information we have at the time, that all work is finished and the task is unlikely to come back and haunt us later.
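
A sketch of how that growing checklist might be kept alongside the workflow; the entries and the helper function are hypothetical examples, not our actual DoD.

    # Definition of Done as a living checklist: every "it's done, but..."
    # becomes a new entry that future tasks must satisfy before closing.
    # Entries are illustrative examples.

    definition_of_done = [
        "code reviewed and merged",
        "unit tests written and passing",
        "documentation updated",
    ]

    def its_done_but(missing_step: str) -> None:
        # "Done, but <missing_step>" -> the missing step joins the DoD.
        if missing_step not in definition_of_done:
            definition_of_done.append(missing_step)

    its_done_but("tested on minimum-spec hardware")
    print(definition_of_done)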

Feedback

A tricky part is knowing whom to listen to. To get a balanced picture, we had to make sure we heard from users (people who use the product), customers (people who pay for our dev time) and stakeholders (people who have an interest but may be neither users nor customers).

Our methods:

Lessons

When I quit my job at Virtus and moved to Singapore, the team members I had in Singapore invited me to dinner to celebrate our time as a team. I think we did Scrum right - in a way that actually worked for developers and stakeholders alike!

Here are the most important takeaways:

Tags: #management #scrum #agile #software engineering