FAQs: Quality Control

 

Every step of laboratory activity is prone to error, and it is important to prevent these errors. In the pre-analytical phase, errors are prevented through precautions taken at each step. In the analytical phase, the best safeguard is the use of controls: materials with known values that can be measured before patient tests are run. Controls are the best mechanism for error detection in the analytical system.
Quality control is a mandated part of a laboratory's quality management system under ISO standards. It must be an ongoing process and should include monitoring of both accuracy and precision, since only accurate, precise results are meaningful for diagnosis and prognosis. The two salient parts of Quality Assurance are Internal Quality Control (IQC) and External Quality Assessment (EQA).

IQC stands for Internal Quality Control. It is a measure of precision: how well the measurement system reproduces the same result over time and under varying operating conditions. In addition, IQC can detect shifts in accuracy.
Internal quality control material is usually run at the beginning of each shift, after an instrument is serviced, when reagent lots are changed, and whenever patient results seem inappropriate. The number of QC levels, the number of runs per day, and the placement of QC runs within the workload are set by the lab's protocol and quality specifications, based on each analyte's performance.

Quality control processes vary depending on whether the laboratory examinations use methods that produce quantitative, qualitative, or semi-quantitative results; the control strategy should match the test mechanism. These examinations differ in the following ways.
Quantitative examinations measure the quantity of an analyte present in the sample, and measurements need to be accurate and precise. The measurement produces a numeric value as an end-point, expressed in a particular unit of measurement. Such tests can use controls that produce numerical results and can be monitored through statistical processes to understand the stability of the analytical system.
Qualitative examinations are those that measure the presence or absence of a substance, or evaluate cellular characteristics such as morphology. The results are not expressed in numerical terms, but in qualitative terms such as “positive” or “negative”; “reactive” or “non-reactive”; “normal” or “abnormal”; and “growth” or “no growth”. Controls for such tests do not yield numerical results and hence cannot be monitored through statistical processes.
Semi-quantitative examinations are similar to qualitative examinations, in that the results are not expressed in quantitative terms. The difference is that results of these tests are expressed as an estimate of how much of the measured substance is present, in terms such as “trace amount”, “moderate amount”, or “1+, 2+, or 3+”. Controls for such tests likewise do not yield numerical results and cannot be monitored through statistical processes.

  1. The IQC should be cost effective.
  2. In the case of IQCs with numerical values:
           • Inter-lab comparison availability makes the IQC program more valid
           • Levels of QCs covering Medical Decision Points should be available
  3. Controls should have the same matrix as patient samples.
  4. Controls should have long expiry.
  5. The shelf life and open vial stability of the control should be good, with minimal vial to vial variability.

Lyophilized material is in a dry powder form.
Lyophilization (freeze-drying) is a dehydration process typically used to preserve a material or make it more convenient for transport. It works by freezing the material and then reducing the surrounding pressure so that the frozen water in the material sublimates directly from the solid phase to the gas phase. Lyophilized material requires careful reconstitution before use.

Matrix is the base from which control material is prepared, in addition to the preservatives added for stability. In normal serum, the analytes are suspended in a protein matrix; the control material should mimic this.
Matrix effect: the influence of the control material's matrix (other than the concentration of the analytes) on the measurement procedure, producing differing results on the control while the procedure still produces consistent results on patient samples.

Normal distributions are a very important class of statistical distributions. All normal distributions are symmetric and have bell-shaped density curves with a single peak, where all the measures of central tendency – mean, median and mode – coincide. The normal distribution is also called the Gaussian distribution. Some details about the normal distribution are as follows.

  • Normal distributions are symmetric around their mean.
  • The mean, median, and mode of a normal distribution are equal.
  • Normal distributions are defined by two parameters, the mean (μ) and the standard deviation (σ).
  • 68% of the area of a normal distribution is within one standard deviation of the mean.
  • Approximately 95% of the area of a normal distribution is within two standard deviations of the mean.
  • Approximately 99.7% of the area of a normal distribution is within three standard deviations of the mean.
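As a rough sketch, the 68-95-99.7 rule can be checked on a series of control values; the control series below is hypothetical:

```python
import statistics

def within_k_sd(values, k):
    """Fraction of values lying within k standard deviations of the mean."""
    mean = statistics.mean(values)
    sd = statistics.stdev(values)
    return sum(1 for v in values if abs(v - mean) <= k * sd) / len(values)

# Hypothetical stable control series (e.g. a glucose control, in mg/dL).
qc = [98, 101, 99, 100, 102, 97, 100, 101, 99, 100,
      103, 98, 100, 99, 101, 100, 96, 102, 99, 100]

print(within_k_sd(qc, 1))  # 0.65 here; ~0.68 expected for truly normal data
print(within_k_sd(qc, 2))  # 0.95
print(within_k_sd(qc, 3))  # 1.0
```

With a small sample the observed fractions only approximate 68-95-99.7, which is exactly why statistical rules, and not just the rule of thumb, are used in practice.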

Well-preserved biological material, on repeated examination, will produce data points in a normal distribution. In a stable analytical system, the degree of dispersion and the position of the mean will be stable; if the system becomes unstable, one or both will change. The unit of dispersion is the standard deviation. So by watching the value of the standard deviation and the position of the mean, one can tell when instabilities occur in the analytical system. Thus the normal, or Gaussian, distribution forms the basis of statistical QC monitoring for analytes that produce numerical values.

An LJ or Levey-Jennings chart is a Gaussian distribution turned on its side and stretched out over time. It is a graphic representation of the quality control values, so that developing imprecision or shifting accuracy can easily be picked up by the eye. Any deviation from the 68-95-99.7 rule points towards system instability and can easily be made out.
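As an illustrative sketch, each new control value can be classified by its distance from the assigned mean, which is essentially what the eye does when scanning an LJ chart; the mean, SD and daily values below are hypothetical:

```python
def lj_zone(value, mean, sd):
    """Label a QC value by its distance from the assigned mean, in SDs."""
    z = abs(value - mean) / sd
    if z <= 1:
        return "within 1 SD"
    if z <= 2:
        return "1-2 SD"
    if z <= 3:
        return "2-3 SD"
    return "beyond 3 SD"

# Hypothetical assigned limits for one control level.
MEAN, SD = 100.0, 2.0
for day, value in enumerate([101.0, 99.0, 104.5, 95.0, 107.0], start=1):
    print(f"day {day}: {value} -> {lj_zone(value, MEAN, SD)}")
```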

More than anything else, use your eyes: shifting patterns become immediately evident. However, some rules will help you detect changes in the system, even subtle ones. Westgard rules are very good for this purpose, and they can be applied singly or as a set. Which rule to apply to which analyte depends on the quality specifications of the lab and the performance of the analyte.

S.No  Rule violated            Systematic error    Random error
1     1:2s                     √ (beginning)       √ (beginning)
2     2:2s                     √
3     2 of 3:2s                √
4     R:4s                                         √
5     1:3s                     √ (beginning)       √
6     4:1s                     √
7     6x, 8x, 9x, 10x, 12x     √
8     7T (trend)               √
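A minimal sketch of how a few of these rules could be checked programmatically; the assigned mean, SD and control values are illustrative, and the rule set is deliberately incomplete:

```python
def violates_1_3s(values, mean, sd):
    """1:3s - the latest control value lies beyond mean +/- 3 SD (random error)."""
    return abs(values[-1] - mean) > 3 * sd

def violates_2_2s(values, mean, sd):
    """2:2s - two consecutive values beyond 2 SD on the same side (systematic error)."""
    if len(values) < 2:
        return False
    a, b = values[-2], values[-1]
    return (a - mean > 2 * sd and b - mean > 2 * sd) or \
           (mean - a > 2 * sd and mean - b > 2 * sd)

def violates_r_4s(values, mean, sd):
    """R:4s - two consecutive values more than 4 SD apart (random error)."""
    if len(values) < 2:
        return False
    return abs(values[-1] - values[-2]) > 4 * sd

# Hypothetical assigned limits and control values.
MEAN, SD = 100.0, 2.0
print(violates_1_3s([107.0], MEAN, SD))         # True: 107 is 3.5 SD high
print(violates_2_2s([104.5, 105.0], MEAN, SD))  # True: both values > +2 SD
print(violates_r_4s([104.5, 95.0], MEAN, SD))   # True: 9.5 apart, > 4 SD
```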

As LJ charts are prepared to monitor analytical systems, it is important that the defining parameters of the normal distribution, i.e. the mean and SD, are assigned correctly. These numbers should be derived when the analytical system is stable and then assigned on the chart. Unless this is done properly, the LJ chart cannot be used to monitor the analytical system.

To ensure the correctness of the values assigned on an LJ chart, the mean and SD should be derived through parallel testing. While parallel testing is in progress, the stability of the analytical system is verified with the running/current QC lot. Since both the old and new lots are run simultaneously, this is called parallel testing.
For the new QC, at least 20 data points must be collected over a 10-20 day period. When collecting this data, be sure to include any procedural variation that occurs in the daily runs; for example, if different testing personnel normally do the analysis, all of them should collect part of the data. Once the data is collected, the laboratory will need to calculate the mean and standard deviation of the results and assign these for further monitoring of the system.
In the case of short-expiry QCs, the mean may be gathered by doing 4-6 runs over 2 days. The CV% of the running/current lot may then be applied to this mean to derive the SD.
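The two routes above can be sketched as follows; the function names and the example figures (a new-lot mean of 5.0 with the current lot running at 4% CV) are illustrative:

```python
import statistics

def assign_limits(parallel_points):
    """Derive the mean and SD to assign on the LJ chart from parallel-testing data."""
    if len(parallel_points) < 20:
        raise ValueError("collect at least 20 data points over 10-20 days")
    return statistics.mean(parallel_points), statistics.stdev(parallel_points)

def sd_from_current_cv(new_lot_mean, current_lot_cv_percent):
    """Short-expiry shortcut: apply the current lot's CV% to the new lot's mean."""
    return new_lot_mean * current_lot_cv_percent / 100.0

# Illustrative short-expiry case: mean 5.0, current-lot CV 4%.
print(sd_from_current_cv(5.0, 4.0))  # 0.2
```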

To a large extent, lab errors can be quantified. Imprecision is easy to quantify using the unit of dispersion, the Standard Deviation (SD). Applying an appropriate coverage factor to the SD enables one to capture the Random Error (RE). Imprecision is treated as random error because it occurs in sudden, unpredictable ways; thus the unit of dispersion becomes the unit of random error. If you consider 1 SD as your random error, you are accounting for about 68% of errors; if you take 2 SD, about 95%. Generally 1.65 SD is used for computational purposes. There are specifications of how large the imprecision may be for different analytes, and databases are available against which to compare your random error.

Inaccuracy is more difficult to quantify, as one must know the true/target value in order to say how far we are from it. If the true/target value is known, we can understand how biased we are, and whether the bias is positive or negative. This kind of error is called Systematic Error, as inaccuracies are caused by longer-term, systematic changes in the analytical process. The challenge is deciding what the true/target value is; the best source of a target value is the peer-group mean. More about this can be read in the Labs for Life QC module.

  • Repeatability is precision under a set of conditions that includes the same measurement procedure, same operators, same measuring system, same operating conditions and same location, with replicate measurements on the same or similar objects over a short period of time. Repeatability may be expressed in terms of multiples of the standard deviation. Within-run, intra-serial and intra-run precision are synonyms.
  • Reproducibility is precision under reproducibility conditions, i.e. conditions where test results are obtained with the same method by different operators, using different equipment, in different laboratories, in different locations, or on different days. Reproducibility may be expressed in terms of multiples of the standard deviation. Between-laboratories, inter-laboratory and among-laboratories precision are synonyms.
  • Intermediate precision lies between the two, generally meaning within one lab but with changes of reagent and calibrator lots, operators, and operating conditions. All acceptable laboratory variables will be captured if at least 100 measurements are included. Measurement Uncertainty (MU) uses intermediate precision as the basis for its calculation.
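As a rough sketch of the "top-down" approach, assuming MU is estimated from long-term IQC data alone (the bias correction that a full MU budget includes is ignored here):

```python
import statistics

def expanded_uncertainty(long_term_iqc_values, k=2):
    """Simplified 'top-down' estimate of measurement uncertainty (MU).

    The standard uncertainty is taken as the intermediate-precision SD of
    long-term IQC results; the coverage factor k=2 gives ~95% coverage.
    """
    return k * statistics.stdev(long_term_iqc_values)

# Illustrative long-term IQC series alternating around a mean of 100.
print(round(expanded_uncertainty([99, 101] * 10), 2))  # 2.05
```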

Total Error (TE) = Systematic Error (SE) + Random Error (RE)
where SE = Inaccuracy (accuracy is the closeness of a measurement to its target/true value; SE, or Bias, = Lab Mean − Target Value),
and RE = Computed Imprecision (precision is the amount of variation in the measurements, expressed in the unit of dispersion, the SD; the SD is multiplied by a coverage factor, usually 1.65).
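The computation can be sketched directly from the formula; the lab mean, target and SD below are illustrative:

```python
def total_error(lab_mean, target_value, sd, coverage=1.65):
    """TE = |SE| + RE, where SE = lab mean - target and RE = coverage * SD."""
    systematic = abs(lab_mean - target_value)
    random_err = coverage * sd
    return systematic + random_err

# Illustrative: lab mean 102 against a target of 100, with an SD of 2.
print(round(total_error(102.0, 100.0, 2.0), 2))  # 5.3 (= 2.0 + 1.65 * 2.0)
```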

Total Error enables the lab to understand where it stands in terms of quality: the lab can compare its error with the quality specification available for each analyte. This specification is called the Total Allowable Error, or TEa.
Total Allowable Error (TEa) is the amount of error that can be tolerated without invalidating the medical usefulness of the analytical result. It is derived from medically important analyte concentrations or clinical decision thresholds.
A hierarchy of quality requirements has been proposed:

  1. Medical Requirements
  2. Biological Variation
  3. Proficiency testing guidelines
  4. Using Proficiency Testing results (past survey report)
  5. Tonks' Rule
  6. Current lab (observed) CV × 3

Critical Systematic Error (SEc) is the number of SDs the mean can shift before exceeding the TEa. It gives an idea of how closely the analytical system must be monitored for each analyte, and enables the lab to choose QC rules for each analyte.
SEc = ((TEa − |Bias|) / SD) − 1.65
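A direct transcription of the formula, with illustrative figures (TEa of 10%, bias of 2%, imprecision of 2%):

```python
def critical_systematic_error(tea, bias, sd):
    """SEc = ((TEa - |Bias|) / SD) - 1.65, with all terms in the same units."""
    return (tea - abs(bias)) / sd - 1.65

print(round(critical_systematic_error(10.0, 2.0, 2.0), 2))  # 2.35
```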

Sigma metrics are related to SEc and are used in labs to define tolerance limits; 6 sigma implies “world-class quality”. The sigma scale is used in the lab for QC rule selection depending on analyte performance, and gives an immediate idea of the DPMO (Defects Per Million Opportunities) of an analyte on which to base decisions.
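Sigma is commonly computed as (TEa − |Bias|) / CV, with all terms in percent; it differs from SEc only by the 1.65 coverage term. A sketch with illustrative figures:

```python
def sigma_metric(tea_percent, bias_percent, cv_percent):
    """Sigma = (TEa - |Bias|) / CV, with all terms expressed in percent."""
    return (tea_percent - abs(bias_percent)) / cv_percent

# Illustrative figures: TEa 10%, bias 2%, CV 2%.
sigma = sigma_metric(10.0, 2.0, 2.0)
print(sigma)  # 4.0
```

For the same figures, SEc = sigma − 1.65, which is how the two scales line up.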

An effective quality control programme includes:
  • Setting quality specifications for the lab.
  • Establishing written policies and procedures, including corrective actions.
  • Training all laboratory staff.
  • Monitoring of internal controls, daily by the front-line worker and periodically by supervisory staff.
  • Taking appropriate corrective actions.
  • Assuring complete documentation.

The consensus-based metrics used in labs are the SDI and the CVI.
SDI (Standard Deviation Index) is a peer-based measure of bias. It describes how far our mean is from the peer (or all-lab) mean, has a direction (+ or −), and is an indicator of accuracy.
CVI (Coefficient of Variation Index) is a peer-based measure of imprecision: a comparison of our laboratory's CV to the peer (or all-lab) CV. It is also known as the Coefficient of Variation Ratio (CVR), and is an indicator of precision.
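These two indices can be sketched as follows; the peer-comparison figures are illustrative:

```python
def sdi(lab_mean, peer_mean, peer_sd):
    """SDI = (lab mean - peer mean) / peer SD; signed, so it shows the direction of bias."""
    return (lab_mean - peer_mean) / peer_sd

def cvi(lab_cv, peer_cv):
    """CVI (a.k.a. CVR) = lab CV / peer CV; > 1 means the lab is less precise than its peers."""
    return lab_cv / peer_cv

# Illustrative peer-comparison figures.
print(sdi(102.0, 100.0, 1.0))  # 2.0 -> lab mean is 2 peer-SDs high
print(cvi(3.0, 2.0))           # 1.5 -> lab CV is 1.5x the peer CV
```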

External Quality Assessment (EQA) is a method that allows comparison of a laboratory's testing to a source outside the laboratory. The comparison can be made to the performance of a peer group of laboratories or to the performance of a reference laboratory. In ISO 15189:2012 this is referred to as ILC, or Inter-Laboratory Comparison.

ISO 15189, in Clause 5.5, mandates evaluation or verification of methods both before they are used for patient reporting and periodically thereafter, at defined intervals. Methods are generally validated by the manufacturer; however, the claims need to be verified by the lab before patient reporting begins. The claims of precision, accuracy, linearity and biological reference ranges need to be verified because, during transportation and installation, the settings of equipment and analytical systems can change significantly and may require resetting.