The other day I did a little exercise and calculated how many bugs per day were created in my recent custom development projects. I compared two outsourced projects with two internally managed ones. I wanted to see how many bugs were identified by the end users during acceptance testing and how many were found by the testers during system testing. For the latter, the vendors did not give me the details for the outsourced projects.
There was a big difference between the internally managed and the outsourced projects. Our end users hardly found any bugs during acceptance testing of our internally managed projects. For the outsourced projects, however, the number of bugs found during user acceptance testing was roughly equal to the number we found ourselves during system testing of the internally managed projects. In other words, the vendor seemed to offload the proper testing onto us. Not happy.
The number of bugs is usually measured against lines of code (LoC). I calculated it per developer-day instead. Both metrics have their problems. Lines of code, measured as the total line count of the source files, does not mean much: you also count empty lines, comments and simply the layout style of the programmer. In C, for example, you can have many lines holding a single character, while in many 4GL programs that would hardly occur.
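To illustrate why a raw line count is misleading, here is a minimal sketch with my own naive filtering rules, assuming C-style comments (a real tool would also need to handle string literals and more); it compares the raw count with a count that skips blanks and comments:

```python
import re

def raw_loc(source: str) -> int:
    """Naive metric: every line counts, blanks and comments included."""
    return len(source.splitlines())

def stripped_loc(source: str) -> int:
    """Rougher metric: drop C-style comments and blank lines first."""
    # Remove /* ... */ block comments (non-greedy, across lines).
    source = re.sub(r"/\*.*?\*/", "", source, flags=re.DOTALL)
    # Remove // line comments (naive: would also hit // inside strings).
    source = re.sub(r"//.*", "", source)
    return sum(1 for line in source.splitlines() if line.strip())

c_snippet = """\
/* A brace-per-line layout inflates the raw count. */
int main(void)
{
    return 0;  // one statement spread over several lines
}
"""

print(raw_loc(c_snippet), stripped_loc(c_snippet))  # 5 vs 4
```

One statement, five raw lines: the same program in a 4GL, or formatted by a different programmer, would count quite differently.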
Bugs per day has problems as well. First of all, my total day count was a rough estimate for our internally managed projects; I do not keep a tally of my internal developers' days. For the outsourced projects I used the project cost divided by the day rate the vendor usually charges, which is probably not the actual effort the vendor put in. So we should be careful about being too smart with these numbers.
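To make the arithmetic concrete, here is the estimate with hypothetical figures (the cost, rate and bug count below are made up for illustration, not the actual project numbers):

```python
# Hypothetical figures, only to illustrate the estimate.
project_cost = 120_000   # total invoiced amount
day_rate = 600           # vendor's usual rate per developer-day
bugs_found = 250         # bugs logged during our acceptance testing

estimated_days = project_cost / day_rate    # 200 developer-days
bugs_per_day = bugs_found / estimated_days  # 1.25 bugs per day
print(estimated_days, bugs_per_day)
```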
Even so, I do like to have a bit of an idea of how effective developers are and how many bugs they leave in the code when they hand it to the testers, and, for us as a project team, how many bugs we leave in the system when we hand it to the end users for acceptance testing. There are many aspects to programming, but in the end simply measuring the number of bugs per developer-day gives me a bit of an idea.
I think it is common knowledge that it's cheaper to have a bug found and resolved by a developer than found by a tester and handed back to the developer to fix. The same logic applies to bugs found by end users versus bugs found by the testing team.
I found that the number of bugs in general turned out to be between one and three per developer-day, and for our internal projects closer to one bug per day.
The crucial insight from the exercise was that our internal development resulted in a much better service to the end users.
There are many reasons why this happened.
First of all, I included the people involved in the requirements and analysis of the business process in the testing team. The Business Analyst and Service Delivery Manager, who had first-hand interaction with the business, would find most of the flaws much as the end users would. Better even, because they really focussed on the testing, while end users would do it much more superficially. For the outsourced projects, we created the requirements internally and handed them to the vendor to build. The vendor was in that sense at a disadvantage.
Secondly, our testers were close to the business. In cases where they were uncertain about the right behaviour, it was easy for them to consult the business. The vendor worked at a different location, in this case on the other side of the globe, which was another disadvantage.
I must conclude that the closer the development team is to the business, the more effective the development and testing will be. I think I already knew this ;).
Going forward, I will challenge my developers to create less than one bug per day.
For outsourced projects, I will have to consider putting some metrics around the number of bugs into the contract.