Over the past years, the topic of quality assurance (QA) has been a steady companion, and although I might not necessarily like QA, there's no doubt that thinking about the variables that negatively influence a working process strengthens the awareness of potential issues and therefore still earns a +1 in my book. Developing and implementing standard operating procedures (SOPs) is then the next step after a detailed risk assessment of a process; it aims to standardize the findings of the expert who is aware of all influencing variables and context-specific dependencies. By fixing this approach in a written SOP, it can be ensured that the results obtained by following the procedure are consistent and don't require the expert's knowledge. Despite this rather short and simplified description of QA, it is probably common sense that the benefit of eliminating (or at least consistently managing) critical issues of a process generally increases the quality of the results.
So far, so good. Sounds like a compelling thing to do, but as so often, the devil is in the details. As you may have noticed, there is the initial assumption that the expert who analyzes the process and develops the SOP knows what he's talking about. Well, considering the "experts" that share their views with us on TV, it is fairly obvious that a QA system should not rely on the notion, but rather on the qualification of an expert in order to end up with an SOP that actually improves quality. Therefore, any QA system needs procedures that regulate the necessary qualification level for the development, update and use of a given procedure. However, how would you then qualify the person who qualifies the expert… A QA system is by definition never complete, as there are always valid arguments to continue to another level, like a Matryoshka doll. Or, if you think of a QA system as an optimization problem in engineering, then the ideal solution would probably look something like the famous stairs of M. C. Escher:
Anyway, QA is still a +1 in my view, but implementing a system requires decisions about when and where to stop. Taking them is mainly your task as the expert analyzing a process, and the open "ends" of the system will always have to be negotiated with your auditor and depend on the type of QA system you would like to implement. Another important point along those lines is that a QA system mainly checks whether a process is defined in an SOP by a qualified person; it will not conclude whether the qualified expert correctly identified the risks. This might be a bit confusing, but since a QA system mainly ensures a consistent process, it can still be totally valid while the process itself produces (consistently) wrong values.
Well, after discussing some of the basic ideas of QA, let's have a look at microCT and QA. From a QA point of view, the microCT workflow can be divided into a biological part, covering the storage and treatment of a biological sample; a technical part, where a physical experiment provides a digital representation of the biological sample; and the image analysis part, where the acquired digital image data is processed to quantitatively characterize the structure of specific features within the sample (number crunching on the scanned image data). From our experience, the biological and technical measurement parts are straightforward to implement in a QA system. There has to be a set of SOPs to ensure proper handling and storage of samples (procedures to avoid potential mixing of samples and damage due to wrong storage conditions) and to ensure that the device used for the measurement is properly maintained and calibrated (most manufacturers provide a phantom for that purpose). By straightforward, I don't mean that this is a quick thing to do, but the path to follow is clear, and as discussed above it is up to you (or the auditor) to define how far and how detailed these SOPs need to be in your specific situation.

However, the image analysis part raises some questions that need a closer look and has the potential to create some real headaches when implementing a QA system: since the actual measured variable in a microCT scan is the attenuation that x-rays encounter while penetrating a specimen, the signal depends on scanner properties like the x-ray source, mechanics and detector system, as well as on the material properties and orientation of the sample. Therefore each microCT scan of a sample has a unique footprint, and if any of the above-mentioned factors change, the image data of the same sample will vary and might require adapting the analysis chain in order to get to the "correct" result.
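To make the scanner dependence a bit more concrete: monochromatic x-ray attenuation follows the Beer–Lambert law, I = I₀ · exp(−μ·x), where the attenuation coefficient μ depends on both the material and the effective beam energy. The sketch below uses hypothetical coefficients and arbitrary units purely to illustrate how the same sample yields different signals under different scanner settings; it is not taken from any real measurement.

```python
import math

def transmitted_intensity(i0, mu, thickness):
    """Beer-Lambert law: I = I0 * exp(-mu * x).

    i0        -- incident intensity (arbitrary units)
    mu        -- linear attenuation coefficient (1/length unit)
    thickness -- path length through the sample (length unit)
    """
    return i0 * math.exp(-mu * thickness)

# Same sample (thickness 0.5), same incident intensity, but two
# hypothetical beam settings: a softer beam sees a higher effective mu,
# so the recorded signal differs even though the sample is unchanged.
i_soft_beam = transmitted_intensity(1000.0, mu=1.2, thickness=0.5)
i_hard_beam = transmitted_intensity(1000.0, mu=0.6, thickness=0.5)

print(round(i_soft_beam, 1))  # ~548.8
print(round(i_hard_beam, 1))  # ~740.8
```

Because the grey values in the reconstructed image inherit this dependence, an analysis chain tuned to one scanner configuration cannot simply be assumed to transfer to another, which is exactly the QA problem described above.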
So either you follow a standard procedure that might produce "wrong" results while satisfying QA, or you change the procedure to get to the "right" results while violating basic rules of QA… So what should we do?
The best of all answers: it depends… If you're curious about what it depends on, stay tuned for the next post in this series.