Why Quality Matters

Author: John Arrizza

One of the core problems with Quality can be traced to a simple fact: management doesn’t recognize the Quality Team’s activities as iterative.

This is the root cause of a lot of issues.

Want to know what they are?

Let’s begin with Medical Devices:

  1. The test protocol is written once, but is not executed concurrently with the development team’s activities. Then, at the end of the project (no surprise), the test protocol is out of date. The protocol is revamped, but that raises a bunch of questions
    • about the product design and implementation
    • about the requirements
    • about marketing targets and goals
    • about labeling
    • etc.

The protocol changes drive requirements changes and therefore development changes which, in turn, cause more protocol changes. See the tail-chasing dilemma?

  2. The test protocol is executed and it exposes fundamental design flaws in the current state of the product. These aren’t just bugs: the current product architecture and/or design simply can’t meet the specified requirements, or it reflects a fundamental misunderstanding of what the product should be able to do.

  3. The protocol is executed and it reveals missing subsystem requirements, and sometimes even missing product-level requirements. Or there are requirements that are just plain wrong, not quite correct, or irrelevant and can be deleted.

  4. The protocol is executed multiple times and longevity flaws in the design are uncovered. For example, we once ran a treatment back-to-back for two solid days on a device and discovered that one part was rubbing against another, so a little pile of aluminum was found underneath that location.

  5. The protocol is executed once and it passes. Then Validation begins and the device fails miserably because it simply can’t handle the wider range of behaviors expected of it during validation.

  6. The protocol is executed once, a bug is found, and regression testing is performed. But the bug fix actually broke something else; the regression analysis was inaccurate because there was a “hidden” dependency within the design/architecture.

Here’s the deal.

The solution to the issues listed above is to run the test protocols continually, to update them continually, and to test their prerequisites.

In other words, treat the development of the quality system (and its documentation) in the same way we currently treat development processes: iteratively. 

The same goes for Validation: run its protocols the same way, iteratively.
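As a concrete illustration (not any specific client’s tooling), here is a minimal sketch of what “running the protocol continually” can look like when protocol steps are automated: each step becomes a repeatable test that can be executed on every build rather than once at the end of the project. The device interface, step number, setpoints, and 5% tolerance below are hypothetical.

```python
"""Minimal sketch: protocol steps as automated, repeatable tests.

Everything here is illustrative -- the SimulatedPump class, the step number,
and the 5% tolerance are assumptions, not a real protocol.
"""
import pytest


class SimulatedPump:
    """Hypothetical stand-in for the real device (or a device simulator)."""

    def run_infusion(self, setpoint_ml_per_hr: float, minutes: int) -> float:
        # A real implementation would command the device and read back the
        # delivered rate; the simulator just echoes the setpoint.
        return setpoint_ml_per_hr


@pytest.fixture
def device():
    return SimulatedPump()


# Protocol step 4.2 (hypothetical): delivered flow rate within 5% of setpoint.
# Because the step is automated, it can run on every build, so the protocol
# never drifts far from the current requirements and design.
@pytest.mark.parametrize("setpoint_ml_per_hr", [1.0, 50.0, 999.0])
def test_step_4_2_flow_rate_tolerance(device, setpoint_ml_per_hr):
    measured = device.run_infusion(setpoint_ml_per_hr, minutes=1)
    assert abs(measured - setpoint_ml_per_hr) <= 0.05 * setpoint_ml_per_hr
```

Wired into a CI job, a suite like this runs the protocol on every merge, so the formal run at the end of the project is just one more repetition.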

Want to improve your Quality Process across the board?  Follow the same approach.

Gaps in the QMS process are exposed by doing a practice run of the formal quality process; at that point they will be glaringly obvious.

Fix and repeat.

Practice makes perfect (assuming appropriate fixes!).

Adding continual Validation test runs makes that particular protocol rock solid as well, and all but eliminates ‘surprises’ during a formal run.

In the meantime, the development team fixes any issues immediately as they are found, and the requirements team continually updates its documents as well.

At the end of the process, the formal run of all the protocols is … boring.

The way it should be.

John Arrizza is a (software development lifecycle) Process Improvement consultant for Advantu, and supports our clients on an ongoing basis. His expertise helps client teams reduce costly rework and methodically shrink time-to-market, while dramatically increasing customer satisfaction with their products.

Want to verify your Quality Processes are Up to Best Practice Standards?

**Take a minute and request a Free Quality Process Assessment (form below) – you may be closer than you think**
