In this day and age of faster, better, harder, stronger, there is a lot to be said about quality in engineering practices. I am not discussing a Best Practices approach, nor do I want to delve into how complete your code coverage percentages are. Simply put, let's put the "metrics" aside.
Programming and Quality are two different animals, and they need to be brought together at the earliest stage possible. If not, those shrinking deadlines will arrive at light speed and that ideal schedule will be out the door. The programming will proceed at a fevered pace, and when the programmers lob the software over the fence to the QA side, it will be like throwing a grenade over, wreaking all kinds of havoc on the QA schedule.
You know exactly what I mean. That 3-month deliverable (call it 13 weeks) gets a budget of 8 weeks of programming and 6 weeks of QA. That leaves only 2 weeks of overlap in the schedule, during which the programmers and quality people have to do so much mind sharing that not much of anything gets accomplished except a huge kickoff for the QA effort.
So now, let's go in a different direction. When the functional requirements from Marketing hit the programming group, wouldn't it make sense to get QA involved right then? While the programmers are making plans to construct the beast, the QA effort could run in parallel: getting the initial test plans done and getting ready to destruct the beast. This way, the test plans, environment, and knowledge would be much further along and in place before the software is "launched" over the wall like a grenade.
Eventually, the programming will hit a snag. It always does. Then the 8 weeks become 10 or 12. In the first scenario, that leaves the QA organization looking like the bad guys. Since the quality group has had virtually no time to test because of the slip, we at least need to give the software a "smoke" test to see if it will actually run. In my experience, the QA window shrinks to about 50 to 75 percent of the 6 weeks that were first budgeted. Remember, the software was thrown over the wall and still has to be tested relatively well, so that leaves at least a 3-4 week slip on testing. You know the drill: find bugs, get a patch, verify the bugs are fixed and no new ones are introduced. Then lather, rinse, and repeat.
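For the record, a "smoke" test doesn't have to be elaborate. Here is a minimal sketch of the idea in Python. The `myapp` module and its `start_server` and `get_status` calls are purely hypothetical stand-ins for whatever your product actually exposes; the only question being asked is whether the build starts and answers at all.

```python
# A minimal smoke test: does the build start and respond at all?
# Everything product-specific here is hypothetical -- "myapp" and its
# start_server() / get_status() calls stand in for your real product API.
import sys

def smoke_test():
    try:
        from myapp import start_server, get_status  # hypothetical product API
    except ImportError as exc:
        print(f"FAIL: build does not even import: {exc}")
        return False

    server = start_server(port=8080)   # can the product start at all?
    try:
        status = get_status(server)    # does it answer a trivial query?
        if status != "ok":
            print(f"FAIL: unexpected status {status!r}")
            return False
    finally:
        server.stop()                  # always shut the instance down

    print("PASS: build runs and responds; worth deeper testing")
    return True

if __name__ == "__main__":
    sys.exit(0 if smoke_test() else 1)
```

If a check like that fails, there is nothing for QA to schedule yet; if it passes, the real testing can begin.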
When the QA organization is involved from the earliest possible moment, we can ideally test in conjunction with the alpha and beta releases of the software. Catching bugs early and honing the QA automation and manual effort is key to developing a product with more quality in less time. There will be less (maybe significantly less) slippage of the ship date if everyone is involved. I can't count the times I was involved in the over-the-wall scenario, and someone in the QA group would speak up near the end, or AFTER the project was done, and say that if we had done it *this* way, we could have used this tool or that package to test with, and that would have reduced or eliminated the slip in the schedule.
So many organizations are focused on streamlining programming and quality assurance as separate entities. Each process can be honed to the last degree of its respective duties. But when you involve both from the beginning, not only do you get the "buy-in" from both groups on the schedule, you also maximize the chance that you will deliver a quality product in less time.
Saturday, March 22, 2008