Kanban Estimation Using Little's Law
Bogopo
I am new to Kanban. I am aware that Kanban is a flow system and that estimation is optional. My stakeholders have requested the following:
- Provide a delivery date for the remaining items (WIP).
- If the remaining items need to be completed within 5 weeks, how many resources are needed?
I was planning to use Little's Law to do the forecasting, and I'm wondering whether my estimation and understanding are correct. If you are doing it in a different way, please advise.
- Throughput: 6 tasks/week
- Team size: 4 (full capacity)
- Work in progress: 50 (remaining items to be delivered)
As per Little's Law: Lead Time = WIP / Throughput
Lead Time = 50 / 6 ≈ 8.3 weeks (rounded up to 9 weeks)
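In code form, a quick Ruby sketch of the same arithmetic (variable names are my own, values as stated above):

```ruby
# Little's Law: lead time = WIP / throughput
wip = 50          # remaining items
throughput = 6.0  # tasks completed per week (observed average)

lead_time_weeks = wip / throughput #=> 8.33 (approximately)
lead_time_weeks.ceil               #=> 9 (rounded up to whole weeks)
```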
So to answer question 1: with 4 resources at a throughput rate of 6 tasks/week, I would need 9 weeks to complete 50 tasks. Is this correct?
Let's move to question 2: in order to deliver the WIP within 5 weeks, I am using the formula below.
9 weeks to deliver 50 WIP requires 4 resources.
5 weeks to deliver 50 WIP requires how many resources? = 9/4 × 5 = 11.25, rounded up to 12 resources.
Is this correct?
I'm not going to address your math directly, because it's a solution for Y in an X/Y problem. Specifically, how long it takes to complete your current backlog should be a function of your current lead and cycle times, not an invitation to resize your queues or attempt to crash the project by adding additional resources.
What you really ought to do is determine whether your current process throughput can empty the process queue in the run-time remaining for the project. There are many ways to do that, and I'll demonstrate a few of them below.
Analysis and Recommendations
Your product backlog shouldn't be counted as "WIP." Work items remaining in your Product Backlog are backlog items. They don't become work-in-progress (WIP) items until someone begins actively working on them.
Your current cycle and lead times will generally provide a more accurate forecast than trying to retcon your queue sizes, available resources, or WIP limits at this late stage. Changing your process invalidates or distorts the historical averages of your metrics, and will therefore result in a much lower confidence interval for any predictions you make.
Additionally, adding resources to compress the project's schedule will rarely increase throughput at this late stage (see Brooks' Law).
Assuming that the sizes of your backlog items are all within the same order of magnitude, and that the remaining work fits the same process that prior work has used, you could simply multiply your average lead time by the number of backlog items remaining. For example:
```ruby
days_remaining = 30
lead_time_in_days = 5
backlog_items = 50

# lead time is often a product of queue delay and system throughput
backlog_items * lead_time_in_days #=> 250

# determine if all remaining work can fit into available runtime
(backlog_items * lead_time_in_days) <= days_remaining #=> false
```
Lead time should already account for queue delays and implicit WIP limits, so the math is pretty easy if you have historical values. However, if you don't have a good handle on your lead time, you could look at your cycle time and WIP limits instead. For example:
```ruby
backlog_items = 50
days_remaining = 25
cycle_time_in_days = 1.2
wip_limit = 6

(backlog_items * cycle_time_in_days) / wip_limit #=> 10.0

# determine if all remaining work can fit into available runtime
((backlog_items * cycle_time_in_days) / wip_limit) <= days_remaining #=> true
```
You might also look at Takt time or cumulative flow rates, but in all cases you are really just trying to determine whether your existing process throughput can empty the Product Backlog queue within the runtime remaining. You might apply Little's Law to a stable system that has no finite time constraint, but in my opinion the theorem doesn't really help you answer the underlying capacity/scheduling question you actually have.
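Whichever metric you pick, the go/no-go check has the same shape. As a sketch, here it is driven directly by throughput, reusing the figures from the original question (6 tasks/week, 50 items, 5 weeks remaining):

```ruby
backlog_items = 50
throughput_per_week = 6.0  # observed average from historical data
weeks_remaining = 5

weeks_needed = backlog_items / throughput_per_week #=> 8.33 (approximately)

# can the existing process empty the queue in the runtime remaining?
weeks_needed <= weeks_remaining #=> false
```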
Little's Law might help you explain your lead times, but finding the empirical mean of your lead time is going to be more accurate and more useful in answering the business question. Changing any of the underlying parameters will reduce the reliability of your calculations.
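Finding that empirical mean is nothing more than averaging the lead times you have already observed. A minimal sketch (the sample values here are invented purely for illustration):

```ruby
# hypothetical lead times (in days) for recently completed work items
historical_lead_times = [4, 6, 5, 7, 3, 5]

# empirical mean lead time across the sample
mean_lead_time = historical_lead_times.sum.to_f / historical_lead_times.size
#=> 5.0
```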
Don't Abandon Empirical Data
As a rule of thumb, you should focus on using empirical values from your existing process to seek a sufficient confidence interval, and then either:
- trim scope to fit the schedule/budget, or
- extend the schedule/budget to fit the scope.
Re-engineering your team's process at this stage, or adding resources late in the schedule, is a project management anti-pattern. Last-minute changes probably won't provide you enough runway to apply empirical controls to your process, and appeals to mathematics will generally result in either false confidence or ersatz precision. That way lies sadness and despair.
Agile frameworks rely on empirical data and long-term averages, so that "tomorrow's weather" is a relatively high-confidence forecast based on observable (and consistently measured) results. Invalidating historical data turns reliable forecasts into arbitrary management targets. Don't do that!