How should I account for automated steps on a Kanban board?



  • This is a small question to help me understand Kanban better. Right now I am managing a Scrum team and decided to learn a bit more about Kanban to understand the differences and, if needed, choose the better development model for my team. The example is based on a real project.

    Imagine I have a software project that uses Kanban and Continuous Integration. The common task flow in this project is the following:

    1. Backlog - tasks are prioritized and sorted
    2. Implementing - a developer is working on the task
    3. Reviewing - the task is implemented and the code is under review before it is merged to the main branch.
    4. Merging - an automated step that follows an approved review. The application is built on the CI server, then automated/unit tests run to make sure nothing is broken. This step can take a while (4-8 hours per change). If everything is okay, the change is merged; if it fails, the task is returned to the "Reviewing" step.
    5. Verification - once the change is merged to the main branch, another developer verifies that the implementation complies with the "Definition of Done".
    6. Done - all tasks that have been implemented and verified. After that, the tasks are pulled by the SQA team for validation and for writing test cases and auto-tests.

    Due to Kanban limits (WIP and context switching), developers cannot start pulling other tickets while there are tickets being merged, which wastes 0.5-1 developer-day. If there is no WIP limit on the "Merging" step, then the WIP limit will be violated on the "Verification" step once merging finishes. I cannot simply drop the automated step either, since it is a part of the flow.

    So, the questions are:

    • a. How should I account for automated steps on a Kanban board?
    • b. What if these steps fail (Kanban discourages moving tickets backward)?
    • c. How would you (re-)organize this flow?

    Bonus question:

    • d. How can we enforce, through Kanban, the rule that a different developer must handle the "Verification" step?


  • I don't think the problem is the automated steps so much as the overall process flow.

    A few things to consider:

    It's not entirely clear what the difference is between "Reviewing" and "Verification". It seems like you are developing in branches, so why can't the code review process also ensure that the changes satisfy the team's Definition of Done? In both steps, another developer is looking at the work. By combining these two steps and making sure the work satisfies the Definition of Done before merging, you're eliminating a handoff: instead of coder -> reviewer -> CI tool -> reviewer, you have coder -> reviewer -> CI tool.

    Why does the merging process take 4-8 hours? If you're truly practicing continuous integration, the tests should run in minutes. There are a few things to think about. If you have a lot of very focused tests, consider making your tests more sociable and reducing the overall number of tests - https://martinfowler.com/bliki/UnitTest.html . Reducing external dependencies may also speed up tests. If you're currently running the whole test suite on every change, consider running more targeted tests that focus on the changes or on high-risk pieces of the system to get test feedback faster. At a running time of 4-8 hours, consider running the full test suite on the main branch either after a merge or every night, depending on how frequently you merge into the main branch, and feeding the results back to the developers.
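    To make that split concrete, here is a minimal sketch in Python, assuming pytest and a made-up marker scheme (the test names, markers, and production function are invented for illustration, not taken from your project):

        # test_pricing_example.py - illustrative only; register the markers in pytest.ini
        import pytest

        def apply_discount(total, percent):
            # stand-in for the production code under test
            return total * (100 - percent) / 100

        @pytest.mark.fast
        def test_discount_applies_to_order_total():
            # focused, in-memory test: gives feedback in milliseconds
            assert apply_discount(100, percent=10) == 90

        @pytest.mark.slow
        def test_full_checkout_flow():
            # placeholder for an expensive end-to-end test that hits real services;
            # run this tier nightly on the main branch rather than on every change
            pytest.skip("end-to-end environment not modelled in this sketch")

        # On every change:   pytest -m fast
        # Nightly on main:   pytest -m slow   (or simply the whole suite)

    The exact split matters less than the goal: a developer who pushes a change should hear back in minutes, with the long-running checks moved off the critical path of the card.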

    Delaying writing test cases until so late in the process seems inefficient. The isolated SQA team is also probably unnecessary, outside of a handful of cases involving critical software or regulatory processes. If you do need an independent SQA team, they don't need to wait until after the merge is complete to start. They can begin writing their test cases and getting ready as soon as the requirement or change is understood - often, this is in parallel with development.

    Consider "Ready for X" steps that do not have WIP limits or have more appropriate WIP limits. For example, after "Implementing", you can have a "Ready for Review" column or after "Merging", have a "Ready for Verification". You may find that it's best to have a low WIP limit in Implementing, but a higher or no limit on what is in Ready for Review. Counting time-in-stage, along with work item aging and cycle time will help you identify bottlenecks in the process. If things sit in a "Ready for X" column for a while, you can inspect your workflow for find how to get things through the following stage faster so people can pull work from Ready.

    As for moving things backwards, I don't think that's a concern, especially in a software project. Kanban originally came from a manufacturing process where it was much more difficult, if not impossible, to move things backwards: if an error was made, it may have rendered the part unusable, or the part would have had to go through a different process to be made viable as input again. None of this truly applies in a software development context, so feel free to move cards backwards one or more steps as necessary. Having "Ready for X" or "Waiting for X" columns can help facilitate this by giving rejected work a placeholder.



