Industry standards for Quality Objective values
Can anyone tell me what the industry standards are for the following values in the IT industry?

- Average errors per man-day (count)
- Delivery slippage per release (days)
- Estimation variation (days)
I'm going to start with the flippant answers before I give you the real ones. The flip answers are no less correct; they're just not something you're likely to want to hear.

1. There is no standard, because it's a meaningless metric.
2. There is no standard, because it's a meaningless metric.
3. There is no standard, because it's a meaningless metric.

Now for the more detailed explanation.

**Average errors per man-day**

To start with, what is the definition of "error"? Is it bugs reported by customers? Bugs reported by testers? Developer typos that never see compilation? Even if you nail down your definition to something like "bug reports determined by the triage team to actually be bugs", you still can't get a meaningful average per anything from that number, for these reasons:

- Every project is different. Small, discrete changes typically generate few bugs compared to large structural changes.
- Every organization is different. You can't compare bug counts from, say, Microsoft against bug counts from a three-man startup that builds social games, and you can't compare either of those to a company that builds embedded medical device software. Every organization has its own fault-tolerance level, and often there are varying fault-tolerance levels within an organization and even within a single software application.
- Not everything that gets "fixed" is a bug. Sometimes it's simply something a large customer dislikes, so it gets changed as a "bug fix" for goodwill purposes.

**Delivery slippage per release**

I suspect I'm not the only industry professional here trying not to laugh cynically. At my current workplace, releases are almost never delivered late - because the release dates aren't decided until the release content is tested and cleared for release.
At my last job, releases were sometimes late, but largely because each one was preceded by at least a week of "heroic" effort (late nights, weekends, and so forth) and/or the release suddenly became a "beta" (i.e., it's not good enough, but we have a contractual obligation to provide something).

In addition, every release is different:

- A small, bug-fix-only release is less likely to have schedule slippage than a large release with multiple new features.
- An organization that operates on a continuous release cycle (many web applications do this now) is rarely if ever going to miss a release. In some of the big web applications, the release and test cycles are performed on the same codebase, live - every feature carries an access-level flag which they flip from testing access to public access (I don't have examples - sorry).

**Estimation variation**

Again, every project is different. A small, discrete change in a well-known area of a well-designed application can usually be estimated with reasonable accuracy. Anything involving new functionality? There's too much that people don't know they don't know.

If you've been asked this by a teacher or professor, shame on them - but you may need to give them the answer they expect in order to pass (in which case, ask them). If not, here's a little more information.

Software development and testing are gnarly problems - they do not have a single, accepted correct answer. Any software development project is a problem that can have multiple solutions that will fit the purpose, each with its own advantages and disadvantages. Testers can take a number of different approaches and reach the same end result. Often, what constitutes a successful software development project is decided by completely non-technical factors that the development and test teams don't touch.

There is no industry standard because the software/IT industry is not standardizable.
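To make the definitional problem concrete, here is a minimal Python sketch that computes all three metrics from invented release records. Every field, name, and number below is hypothetical; the point is that each value depends entirely on local decisions (what counts as an "error", which releases count, how estimates are recorded), so one team's numbers say nothing about another's.

```python
# Hypothetical release data - all values invented for illustration.
# "bugs" already assumes a local definition: here, bugs confirmed by triage.
releases = [
    {"bugs": 12, "man_days": 40, "days_late": 0, "estimated": 30, "actual": 32},  # small bug-fix release
    {"bugs": 45, "man_days": 60, "days_late": 9, "estimated": 45, "actual": 70},  # large feature release
]

# Average errors per man-day: per-release rate, then averaged across releases.
avg_errors = sum(r["bugs"] / r["man_days"] for r in releases) / len(releases)

# Delivery slippage per release: mean days late.
avg_slippage = sum(r["days_late"] for r in releases) / len(releases)

# Estimation variation: mean of (actual - estimated) days.
avg_variation = sum(r["actual"] - r["estimated"] for r in releases) / len(releases)

print(f"avg errors/man-day: {avg_errors:.3f}")   # 0.525
print(f"avg slippage:       {avg_slippage:.1f} days")  # 4.5
print(f"avg variation:      {avg_variation:.1f} days") # 13.5
```

Note how the large feature release dominates every average: change the mix of release sizes, the bug definition, or the estimation granularity, and the "same" metric produces a completely different number - which is exactly why no cross-industry standard value can exist.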
No matter how you slice and dice the data and categories, you're never going to get away from the fact that there will be developers and testers who are industry superstars (go to Stack Overflow and search for Jon Skeet - or better still, go to the main Stack Exchange meta and search for the Jon Skeet topic). There will be others who are solid, competent workers. There will be erratic geniuses who are utterly brilliant when they get it right and devastating when they don't. And there will be people who shouldn't be in the position they hold, managers who don't understand what they're trying to manage, and every possible human-resources disaster imaginable - sometimes all in the one company.

As long as software development and testing sit somewhere between skilled trade, art, and craft, they will remain fundamentally unmeasurable - which I think is likely to be forever, because we are making things for people, and people are not and never will be interchangeable widgets.