Scrum, almost by the Book
Not long ago, I led a couple of geographically distributed software engineering teams, with a considerable number of junior developers, as technical director and product owner. Together with my Scrum Master, we wanted to do Scrum right!
Disclaimer
Scrum works best when you have multiple people working on the same product towards one common goal with interdependent tasks. If this is not the case, you should not use Scrum. (I've seen it really backfire in art teams where people worked independently on separate assets).
Our Goals
We both felt strongly about value creation. Businesses don't care how many lines of code (LoC) you write or how many tickets you close - they care about the tangible value you create: time saved, quality, staff required, etc. On top of that, we also wanted to foster a productive, open and innovative environment for our developers to grow.
- Measure value creation rather than tickets closed or other potentially gameable metrics.
- Care strongly about users - their experience, productivity and happiness. Motivated users create strong art and go the extra mile!
- Care about sustainable development - i.e. a robust code base that lasts a long time, makes onboarding easy, and promotes developer efficiency and happiness.
- Foster continuous improvement and level up our juniors and everyone else.
- Deliver results regularly by having a good process and rhythm.
- Create an open, innovative and safe environment that promotes collaboration and sharing.
Team Contract
We gave a short presentation - what Scrum is, and what it means to us - to get buy-in from the team. For this, we made only one rule: let's try something first, and if we don't like it, we change it! We would keep this rule for everything and review it in our retrospectives.
To get the most out of adopting Scrum, Scrum had to work for everyone. Team input, right from the start, is crucial.
Why not Kanban?
The most important thing for us is rhythm, because...
- We want to make sure we can release frequently, if we want to. This means we have to regularly stabilize our code, write documentation, and do everything in the definition of done (DoD). i.e. it forced us to tackle any kind of debt regularly. Debt should never overwhelm us and impede our velocity.
- We want to make sure the team is heard regularly: to bring ideas and improvements to our work and our process.
- We want to make sure we speak to stakeholders regularly, get feedback regularly, etc.
A problem with Kanban is that, beyond the flow of work, it makes no effort to keep things predictable. There is a danger that the team loses goals, focus and commitment when there are no checkpoints, such as reviews and retrospectives.
We wanted a tight process that we're the masters of - one that helps us ensure we're always on top of our game.
Scrum Roles
Our teams were around 7 - 12 people strong and multiple members shared responsibilities. As a game studio, we also had a technical director (i.e. myself) involved, who would formally be in charge of development. The teams were geographically distributed - we did almost all our work over Zoom and MS Teams.
Roles:
- Product Owner (PO) - responsible for vision, features and value delivery.
- Scrum Master - a moderator and someone who ensures processes are smooth.
- Technical Director (TD) - responsible for technical execution.
- Development team, including members with different specializations, such as rendering, UX research, GUI programming, generalist programming, QA engineering, build engineering, technical writing, technical art and machine learning (ML).
Together the Technical Director, Product Owner and Scrum Master would ensure the team always has everything they need to work, be it software, hardware or know-how about tech/product/process. They would also coach the team towards independence.
Usually, the roles of TD and PO would be shared.
Our Weekly Schedule
We settled on two-week sprints for smaller projects with more junior teams, and three-week sprints for longer projects with more senior teams.
Week | Day | Event |
---|---|---|
1 | Monday | Sprint Planning |
1 | Tuesday | Daily, Release |
1 | Wednesday | Daily, Stakeholder Meetings |
1 | Thursday | Daily |
1 | Friday | Daily, Weekly Recap |
2 | Monday | Daily |
2 | Tuesday | Daily, Playtest |
2 | Wednesday | Daily, Backlog Grooming, Risk List Update |
2 | Thursday | Daily, Sprint Planning Prep |
2 | Friday | Daily, Sprint Review, Sprint Retrospective |
Events
With the exception of Sprint Planning, Review, Retrospective and the Dailies, most events are flexible. Ideally, though, events should happen on the same days to create a predictable rhythm.
Sprint Planning
Based on the previous Stakeholder Meetings, Backlog Grooming, Risk List Update and Sprint Planning Prep, the PO and Technical Director quickly summarize the state of the project and the current priorities for the sprint.
The team discusses which items from the backlog to take into this sprint. The role of the PO and Technical Director is crucial here: they brief the team on the requirements and technical challenges for each backlog item and answer the team's questions. Only when the team is clear on the approach and outcomes will it accept a backlog item. If it is not clear, the PO and TD will find the answers until the next sprint.
In addition to features, the team will also pick a certain number of bugs.
The team then further breaks down the items, with the TD's and PO's help. It answers questions such as:
- What sort of testing is needed, and to what extent? (We never release untested code!)
- Is UX research needed?
- What kinds of documentation are needed?
- What risks and opportunities exist?
Once outcomes and the steps to reach them are clear, the team estimates the time for each task. The SM helps the team with estimation to ensure that each sprint has enough buffer based on known risks. This ensures there is enough time for testing, evaluation and bug-fixing.
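The buffer idea above can be sketched roughly as follows. The focus factor and buffer percentage here are illustrative assumptions, not our actual values - every team has to find its own numbers over a few sprints.

```python
# Sketch of sprint capacity estimation with a risk buffer.
# The focus factor and buffer size are illustrative assumptions.

def sprint_capacity(devs: int, sprint_days: int,
                    focus_factor: float = 0.7,
                    risk_buffer: float = 0.15) -> float:
    """Person-days actually available for planned sprint work."""
    raw = devs * sprint_days
    focused = raw * focus_factor          # meetings, support, context switching
    return focused * (1.0 - risk_buffer)  # reserve for known risks and bug-fixing

# e.g. a 7-person team in a 10-working-day sprint
capacity = sprint_capacity(7, 10)
```

The point is not the formula itself, but that the buffer is an explicit, visible number the SM can defend during planning.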
We always tried to formulate sprint goals. A sprint goal is a red thread that ties together the backlog items the team picked: it focuses the team's efforts and increases ownership.
Release
Optional: Builds that have been approved in the previous Sprint Review are pushed to production.
Not releasing immediately after an item is finished, but sticking to a schedule instead, has advantages:
- plenty of time for testing - having a broken feature early is worse than having a well-working feature later.
- we can time releases to ensure they're not disrupting an already busy project and users can absorb the changes.
Only critical bug fixes would trigger releases outside the schedule.
Stakeholder Meetings
A regular check-in with Stakeholders to discuss:
- changes to priorities and requirements.
- feedback for the previous release.
- how well the current product meets production needs and creates value.
- feedback on prototypes and research.
Weekly Recap
A quick health check - no more than 15 minutes - where the team summarizes their week. This allows us to spot issues early and course-correct in the following week.
Playtest (aka Dogfooding)
Many game studios have regular play-tests, where the entire project team evaluates the state of their work and aligns on expectations and priorities.
We decided to do the same for our own work. We would trial key workflows ourselves (albeit much less skilled than our artists!) to see how robust and efficient they were, and whether we'd run into problems that would require follow-up tasks or research.
We included everyone in the team. This creates alignment and allows us to make connections between seemingly disconnected issues. (E.g. we once found that our back-end network code introduced lag in the GUI due to data not being ready in time!)
The playtest doesn't just aim to find bugs - even though we had a lot of fun trying to break our own product! It also serves as a readiness check for the Sprint Review. The remaining days are used to polish and finish the work the team deems ready.
Backlog Grooming
Here the PO and TD re-order the backlog items as informed by our stakeholders. We rank items by value in two lists:
- the feature backlog - tasks which add value to the product.
- the bug backlog - tasks which represent negative value that we have to make up.
Tip: treating bugs as negative value is useful so bugs don't get included in project velocity. Velocity should measure how much value is newly created. Fixing something broken doesn't create new value - you're just catching up because you fell behind.
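This accounting rule can be sketched in a few lines. The item structure and point values are illustrative assumptions; the only point is that bug fixes never count toward velocity.

```python
# Sketch: compute sprint velocity counting only newly created value.
# Bug fixes are "catching up", so they never inflate velocity.
# The item structure and numbers are illustrative assumptions.

def velocity(completed_items):
    """Sum the points of completed feature work only."""
    return sum(item["points"] for item in completed_items
               if item["type"] == "feature")

sprint = [
    {"type": "feature", "points": 5},
    {"type": "feature", "points": 3},
    {"type": "bug",     "points": 2},  # negative value made up, not new value
]
# velocity(sprint) counts 8, not 10
```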
Backlog Grooming considers the following to determine an item's value to the project:
- impact - i.e. impact on what stakeholders value: efficiency, quality, time savings, etc.
- urgency - it can make sense to prioritize what stakeholders deem important, to keep them happy and create goodwill.
- risk - tasks which allow us to make further development less risky and improve our ability to deliver value.
- technical dependencies
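One way to make these criteria concrete is a simple weighted score. The weights, scales and example items below are illustrative assumptions - grooming is a conversation, not a formula, but a score gives the conversation a starting order.

```python
# Sketch of value-based backlog ranking over the grooming criteria.
# Weights, 1-5 scales and example items are illustrative assumptions.

WEIGHTS = {"impact": 0.4, "urgency": 0.3, "risk_reduction": 0.2,
           "unblocks": 0.1}  # "unblocks" stands in for technical dependencies

def score(item: dict) -> float:
    return sum(item[k] * w for k, w in WEIGHTS.items())

backlog = [
    {"name": "batch export",  "impact": 5, "urgency": 2,
     "risk_reduction": 1, "unblocks": 3},
    {"name": "asset caching", "impact": 3, "urgency": 4,
     "risk_reduction": 4, "unblocks": 5},
]
ranked = sorted(backlog, key=score, reverse=True)  # highest value first
```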
We would also go through the pre-backlog (or "in-box"). This is a backlog into which feature requests and ideas can be dumped without a lengthy process. We want to make it easy for everyone to pitch ideas for innovation.
We would then evaluate the pre-backlog items against the product vision and our technical capabilities. For each item, we would either reject it, move it to a different project, put it on hold (until we have the capabilities), or accept it. On acceptance, the item moves to the backlog proper.
Finally, we would ensure that there's a theme to the upcoming sprint. Ideally, the backlog tasks were related and fed into each other for a result that's greater than the sum of its parts. We can then craft the sprint goal around this.
Risk List Update
Here we would update our known risks to time, personnel, budget, quality and technical execution. We think in terms of likelihood and impact. Then we update our response (accept, mitigate, insure, avoid).
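A risk list in this likelihood-times-impact style can be sketched as below. The scales, example risks and exposure values are illustrative assumptions; what matters is that the highest-exposure risks get reviewed first.

```python
# Sketch of a risk list ordered by exposure = likelihood x impact.
# Scales (likelihood 0-1, impact 1-5) and example risks are
# illustrative assumptions.

risks = [
    {"risk": "key engine dependency delayed", "likelihood": 0.3,
     "impact": 5, "response": "mitigate"},
    {"risk": "GPU budget overrun",            "likelihood": 0.6,
     "impact": 2, "response": "accept"},
]

def exposure(r: dict) -> float:
    return r["likelihood"] * r["impact"]

# review the highest-exposure risks first
risks.sort(key=exposure, reverse=True)
```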
Sprint Planning Prep
The PO and TD ensure they have all necessary knowledge about the top items in the backlog so they can prepare to brief the team in the next Sprint Planning, answer all necessary questions, and ensure the team is confident about their work.
This step may include technical research, presenting findings from stakeholders, researching useful background info for the team (e.g. research papers, references, training, etc.).
Sprint Review
Here we present the product to our stakeholders. This is the ultimate acceptance test. Whenever possible, we would demo the actual product and give stakeholders a chance to try it themselves. The goal is to build trust and confidence with our stakeholders. Therefore, we only showed completed features that we are confident to release.
After the Sprint Review, the PO can choose to release the product if it was received well. We never release for the sake of it. Releases must always build trust in our product and the development team. We draw on this goodwill whenever we have to delay a product or need support (e.g. for testing and R&D) from our users.
Sprint Retrospective
This event is all about the team. How do we work, how do we work together, how do we work with stakeholders? Are our tools and processes adequate? What can we improve? The goal is to work better together and improve how we deliver value.
The key is to keep the retrospective blame-free and forward-looking. The past only concerns us in how it can guide us to do better in the future!
Dailies
In addition to the typical stand-up questions, we would also address the following:
- current level of confidence in reaching the sprint goals.
- any emerging risks we're aware of.
- successes: what finished earlier than expected and other good news!
Definition of Done (DoD)
We had a simple method of growing our definition of done: whenever a developer said "it's done, but...", we would add the "but" to the DoD, such as:
- clean up code
- merge code
- document code
- update user/developer docs
- testing (developer testing, QA testing, etc.)
- UI polish and localization
- task-related administrative work
When it's time to close tasks, we want to be assured, given the information we have at the time, that all work is finished and the task is unlikely to come back and haunt us later.
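The growing DoD effectively becomes a gate on closing tasks, which can be sketched as a checklist check. The checklist names mirror the "but"s collected above and are illustrative assumptions.

```python
# Sketch: a task may only be closed when every DoD item is ticked off.
# The checklist entries mirror the collected "but"s; names are illustrative.

DOD = ["code cleaned", "code merged", "code documented",
       "user docs updated", "tested", "ui polished", "admin done"]

def can_close(task: dict) -> bool:
    """True only if every DoD item is checked off for this task."""
    return all(task.get(item, False) for item in DOD)

task_done = {item: True for item in DOD}
task_wip = dict(task_done)
task_wip["tested"] = False  # "it's done, but..." - so it isn't done
```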
Feedback
A tricky part is knowing whom to listen to. To get a balanced picture, we had to make sure to hear from users (people who use the product), customers (who pay for our dev time) and stakeholders (who have an interest but may be neither users nor customers).
Our methods:
- regular stakeholder meetings, which include a representative sample of the user-base.
- telemetry/analytics - who uses the product, how, when, where, which features, how stable is it, how performant is it, etc.?
- surveys - e.g. NPS, qualitative and quantitative.
- AMA (ask me anything) sessions with the dev team and MS Teams support channels.
- Newsletters and Confluence feature pages.
Lessons
When I quit my job at Virtus and moved to Singapore, my team members in Singapore invited me to dinner to celebrate our time as a team. I think we did Scrum right - in a way that actually worked for developers and stakeholders alike!
Here are the most important takeaways:
- Goals: having Sprint Goals unites the team and gives it a common purpose. There is a lot more pride in reaching a good sprint goal than completing a set of disjointed tasks.
- Ownership: people want responsibility and independence. They want to say "I did this" at the end of the project, pointing at some bigger, discernible feature. Ownership is a strong motivator.
- Rhythm: predictability and the management of risk replace the distraction of firefighting. There is a flow to things that isn't distracting, and everything is taken care of. Each sprint ensures you address everything that is important.
- Team-first: whatever you do, include the team. Increasing visibility and feedback increases agency, motivation, involvement, independence and professional growth of all team members.
- Every release builds trust with users - in the product and your team. The team will also trust itself more if you do that. Don't ship bad software!
- Talk to your users - you need to hear from the people who actually use the product. Talk to them and start a conversation, ideally, face to face!
- Start slow and accelerate (vs. start fast and let tech debt slow you down forever)
- Ship, ship, ship: always being ready to release does wonders for the quality of your software, documentation and user satisfaction. And what about the overheads? As the US Navy SEALs say: slow is smooth, and smooth is fast!