Modus Operandi

Team Guiding Principles

Communication

  • Open, honest communication
  • Over-communicate
  • Quick responses to questions within the team
  • Question authority when a decision doesn't make sense
  • Active discussion and debate

Teamwork

  • Be inclusive of all contributors, inside and outside HP
  • We are a team, not individuals

Iteration-based development and agile intent

  • Two week iterations
  • Better to fail early

Processes

  • Keep the build and tests growing and running clean
  • Fixing bugs is a standing priority in every iteration: whenever anyone has availability, the bug list is evaluated and bugs are fixed and tested appropriately
  • Generally, team development should work off trunk
  • Code and tests are collectively owned by the whole team

Iterative Development

The team develops in an iterative fashion per agile principles. Each iteration uses the following rules:

  • The iteration is two weeks long, starting on Tuesday morning and ending at the regular team meeting on the Monday after next
  • There is a team meeting on the middle Monday to gather updates
  • Locally, there is a daily standup meeting to quickly communicate status and discuss blocking issues
  • All outstanding work that needs to be implemented is tracked on the Backlog and prioritized for each iteration
  • Each iteration is self-contained; that is, all work needed for that iteration is begun and finished within the same iteration (tasks longer than one iteration are broken down into separate, independent tasks that meet this criterion)
  • The goal of each iteration is to have a "shippable" unit of product that has been added during this iteration and passes all automated and additional QA tests

Iteration Execution Schedule

Each team iteration follows the same time-boxed pattern:

Week 1

  Monday: Iteration kickoff at the team meeting (second half) (Tuesday in China)
  - The team discusses priorities and preferable tasks for the next iteration
  Tuesday: Iteration finalization and estimating
  - Everyone finalizes both the list of tasks and the estimates for each story to complete in the iteration
  Wednesday-Friday: Iteration execution and iteration test readiness
  - The team completes the planned stories per the definition of "done"
  - The automation should be kept running to support the team's ability to add new stories continually
  - The test team plans tests for the new stories and executes them as functionality is available

Week 2

  Monday-Thursday: Iteration execution and iteration test readiness
  Friday: Iteration execution
  - Tasks and bugs get closed out per the validation of "done"

Week 3

  Monday: Tasks and bugs get closed out per the validation of "done"
  - Wrap-up at the team meeting (first half) (Tuesday in China): the team reviews the completed and incomplete work and evaluates the iteration via a quick retrospective
  - Iteration kickoff at the team meeting (second half) (Tuesday in China)
  Tuesday: Iteration finalization and estimating
  - Everyone finalizes both the list of tasks and the estimates for each story to complete in the iteration
  Wednesday-Friday: Iteration execution and iteration test readiness
  - The team completes the planned stories per the definition of "done"
  - The automation should be kept running to support the team's ability to add new stories continually
  - The test team plans tests for the new stories and executes them as functionality is available

Week 4

  Monday-Thursday: Iteration execution and iteration test readiness
  Friday: Iteration execution
  - Tasks and bugs get closed out per the validation of "done"

This two-iteration pattern then repeats.
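The two-week cadence above can be sketched as a small date calculation. This is an illustrative Python sketch under the stated schedule assumptions; the function name is hypothetical, not a tool the team uses:

```python
from datetime import date, timedelta

def iteration_bounds(kickoff: date, n: int) -> tuple[date, date]:
    """Return the (start, end) dates of the n-th iteration (0-based).

    Assumes each iteration starts on a Tuesday morning and ends at the
    regular team meeting on the Monday two weeks later, per the schedule.
    """
    start = kickoff + timedelta(weeks=2 * n)
    end = start + timedelta(days=13)  # the Monday after next
    return start, end
```

For example, a (hypothetical) kickoff on Tuesday 2024-01-02 yields an iteration ending Monday 2024-01-15, with the next iteration starting Tuesday 2024-01-16.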

Task Estimation

The team estimates with story points and therefore needs reference tasks to calibrate the size of work over time. To help with that, here is a draft reference list.

  • For a tiny task (for example, a typo or one line of configuration): 1 point
  • A simple task (plugin UI with unit tests, adding some automated tests, fixing a simple defect, etc): 2 points
  • A medium task (changing the DB schema, fixing a more complicated defect, adding a new suite of automated tests, etc): 4 points
  • A hard, complex task that can be done in an iteration: 8 points
  • A very large or hard, complex task that must be decomposed into something smaller for an iteration: 10+ points
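The draft scale above could be encoded as a simple lookup. This is a hypothetical sketch; the category names are illustrative shorthand, not an official taxonomy:

```python
# Hypothetical lookup for the draft reference scale; categories are
# shorthand for the examples in the list above.
STORY_POINTS = {
    "tiny": 1,    # a typo or one line of configuration
    "simple": 2,  # plugin UI with unit tests, a simple defect fix
    "medium": 4,  # DB schema change, a new suite of automated tests
    "hard": 8,    # hard and complex, but still fits in one iteration
}

def needs_decomposition(points: int) -> bool:
    """Tasks estimated at 10+ points must be broken into smaller tasks."""
    return points >= 10
```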

Meaning of Priority States

low        Probably won't be done in the target release
normal     Opportunistic in the target release
high       Must be done in the target release
urgent     Must be done ASAP; the issue is breaking the build/test process
immediate  Not used
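These states map naturally onto an enumeration. This is a hypothetical encoding, not the team's actual tracker schema:

```python
from enum import Enum

class Priority(Enum):
    """Hypothetical encoding of the priority states above."""
    LOW = "low"              # probably won't be done in the target release
    NORMAL = "normal"        # opportunistic in the target release
    HIGH = "high"            # must be done in the target release
    URGENT = "urgent"        # breaking the build/test process; fix ASAP
    IMMEDIATE = "immediate"  # not used

# Priorities that must land in the target release
MUST_DO = {Priority.HIGH, Priority.URGENT}
```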

Definition of "Done"

Getting to a common definition of what "done" means for a task/bug is important both for shared team understanding and for truly meeting team commitments at the end of an iteration. In order to execute an iteration and implement a story, the following definition governs when a task is "done":

  • The task/bug has been implemented
  • There are associated unit and functional tests
  • Everything gets checked in and the build, tests, and installation work with the next automation run
  • The task/bug has been marked as "resolved" and validated/tested appropriately by the right person (see below)
  • The end-to-end integrated system always works, even as new development occurs (QA will validate it)
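The definition above is effectively a checklist where every item must hold. As a sketch, assuming hypothetical criterion labels rather than fields from the team's actual tracker:

```python
# Hypothetical checklist encoding of the definition of "done"; the
# criterion strings are illustrative shorthand.
DONE_CRITERIA = (
    "implemented",
    "unit and functional tests exist",
    "checked in and passing the next automation run",
    "resolved and validated by the right person",
    "end-to-end integrated system works",
)

def is_done(completed: set) -> bool:
    """A task counts as "done" only when every criterion is satisfied."""
    return all(criterion in completed for criterion in DONE_CRITERIA)
```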

If a task/bug is too complex to be done within one iteration (i.e., it is too many points), it needs to be broken down into smaller pieces for implementation during this and subsequent iterations. In iteration planning, only what can be done in this iteration is considered, and focus is placed on getting that slice done. At all times, the team delivers valuable end-to-end user value, not just features.

Closing Out Resolved Tasks

Resolved defects and tasks should be re-opened or closed at the end of each iteration; there should be no danglers from iteration to iteration. The process for closing out bugs and tasks is similar. Upon resolving a task/bug, the owner should contact the submitter to inform them that it has been resolved and ask them to validate it (that way, the submitter doesn't have to keep pinging the dynamic list). The QA lead will see that each task owner closes their tasks before the end of the iteration; otherwise, the task is bumped to the next iteration.