Quality Assurance in Hematology and Hemostasis Testing


George A. Fritsma




Case Study


After studying the material in this chapter, the reader should be able to respond to the following case study:


On an 8:00 am assay run, the results for three levels of a preserved hemoglobin control sample are 2 g/dL higher than the upper limit of the target interval. The medical laboratory scientist reviews δ-check data on the last 10 patient results and notices that the results are consistently 1.8 to 2.2 g/dL higher than results generated the previous day.



In clinical laboratory science, quality implies the ability to provide accurate, reproducible assay results that offer clinically useful information.1 Because physicians base 70% of their clinical decision making on laboratory results, the results must be reliable. Reliability requires vigilance and effort on the part of all laboratory staff members, and this effort is often directed by an experienced medical laboratory scientist who is a quality assurance and quality control specialist.


Of the terms quality control and quality assurance, quality assurance is the broader concept, encompassing preanalytical, analytical, and postanalytical variables (Box 5-1).2 Quality control processes document assay validity, accuracy, and precision, and should include external quality assessment, reference interval preparation and publication, and lot-to-lot validation.3



Preanalytical variables are addressed in Chapter 3, which discusses blood specimen collection, and in Chapter 45, which includes a section on coagulation specimen management. Postanalytical variables are discussed at the end of this chapter.


The control of analytical variables begins with laboratory assay validation.



Validation of a New or Modified Assay


All new laboratory assays and all assay modifications require validation.4 Validation comprises procedures to determine accuracy, specificity, precision, limits, and linearity.5 The results of these procedures are faithfully recorded and made available to on-site assessors upon request.6



Accuracy


Accuracy is the measure of concurrence or difference between an assay value and the theoretical “true value” of an analyte (Figure 5-1). Some statisticians prefer to define accuracy as the extent of error between the assay result and the true value. Accuracy is easy to define but difficult to establish and maintain.



For many analytes, laboratory scientists employ primary standards to standardize assays and establish accuracy. A primary standard is a material of known, fixed composition that is prepared in pure form, often by weighing on an analytical balance. The scientist dissolves the weighed standard in an aqueous solution, prepares suitable dilutions, calculates the anticipated concentration in each dilution, and assigns the calculated concentrations to assay outcomes. For example, the scientist may obtain pure glucose, weigh 100 mg, dilute it in 100 mL of buffer, and assay an aliquot of the solution using photometry. The resulting absorbance would then be assigned the value of 100 mg/dL. The scientist may repeat this procedure using a series of four additional glucose solutions at 20, 60, 120, and 160 mg/dL to produce a five-point “standard curve.” The curve may be re-assayed several times to generate means for each concentration. The assay is then employed on human plasma, with absorbance compared with the standard curve to generate a result. The matrix of a primary standard need not match the matrix of the patient specimen; the standard may be dissolved in an aqueous buffer, whereas the test specimen may be human serum or plasma.
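The standard-curve procedure above can be sketched in a few lines of code. The absorbance readings below are hypothetical stand-ins for photometer output, and the linear fit represents the calibration step; a real curve uses measured absorbances.

```python
import numpy as np

# Five-point glucose standard curve (mg/dL); absorbances are assumed values.
concentrations = np.array([20.0, 60.0, 100.0, 120.0, 160.0])
absorbances = np.array([0.10, 0.30, 0.50, 0.60, 0.80])

# Fit absorbance vs. concentration; Beer's law predicts a straight line.
slope, intercept = np.polyfit(concentrations, absorbances, 1)

def concentration_from_absorbance(a):
    """Interpolate a patient result from the standard curve."""
    return (a - intercept) / slope

# A patient aliquot reading 0.45 absorbance maps back to about 90 mg/dL.
print(round(concentration_from_absorbance(0.45), 1))
```

The interpolation step mirrors what the instrument or laboratory information system does when it converts a raw absorbance to a reportable concentration.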


To save time and resources, the scientist may employ a secondary standard, perhaps purchased, that the vendor has previously calibrated to a primary standard. The secondary standard may be a preserved plasma preparation delivered at a certified known concentration. The scientist merely thaws or reconstitutes the secondary standard and incorporates it into the test series during validation or revalidation. Manufacturers often match secondary standards as closely as possible to the test sample’s matrix, for instance, plasma to plasma, whole blood to whole blood. Neither primary nor secondary standards are assayed during routine patient sample testing, only during calibration.


Unfortunately, in hematology and hemostasis, in which the analytes are often cell suspensions or enzymes, there are just a handful of primary standards: cyanmethemoglobin, fibrinogen, factor VIII, protein C, antithrombin, and von Willebrand factor.7 For scores of analytes, the hematology and hemostasis laboratory scientist relies on calibrators. Calibrators for hematology may be preserved human blood cell suspensions, sometimes supplemented with microlatex particles or nucleated avian red blood cells (RBCs) as surrogates for hard-to-preserve human white blood cells (WBCs). In hemostasis, calibrators may be frozen or lyophilized plasma from healthy human donors. For most analytes it is impossible to prepare “weighed-in” standards; instead, calibrators are assayed using reference methods (“gold standards”) at selected independent expert laboratories. For instance, a vendor may prepare a 1000-L lot of preserved human blood cell suspension, assay for the desired analytes in house, and send aliquots to five laboratories that employ well-controlled reference instrumentation and methods. The vendor obtains blood count results from all five, averages the results, compares them to the in-house values, and publishes the averages as the reference calibrator values. The vendor then distributes sealed aliquots to customer laboratories with the calibrator values published in the accompanying package inserts. Vendors often market calibrators in sets of three or five, spanning the range of assay linearity or the range of potential clinical results.


As with secondary standards, vendors attempt to match their calibrators as closely as possible to the physical properties of the test sample. For instance, human preserved blood used to calibrate complete blood count analytes is prepared to closely match the matrix of fresh anticoagulated patient blood specimens, despite the need for preservatives, refrigeration, and sealed packaging. Vendors submit themselves to rigorous certification by governmental or voluntary standards agencies in an effort to verify and maintain the validity of their products.


The scientist assays the calibration material using the new or modified assay and compares results with the vendor’s published results. When new results parallel published results within a selected range, for example ±10%, the results are recorded and the assay is validated for accuracy. If they fail to match, the new assay is modified or a new reference interval and therapeutic range are prepared.
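The ±10% recovery check described above reduces to a simple comparison per calibrator level; the assigned and measured values below are hypothetical, and the tolerance is a local policy choice.

```python
# Accuracy check: each calibrator recovery must fall within a locally
# selected limit (±10% here, as in the text).
def within_limits(measured, assigned, tolerance=0.10):
    return abs(measured - assigned) / assigned <= tolerance

assigned = [4.0, 8.0, 12.0]   # vendor's published calibrator values (hypothetical)
measured = [4.2, 7.8, 12.5]   # new-assay results (hypothetical)

validated = all(within_limits(m, a) for m, a in zip(measured, assigned))
print(validated)  # True: every level recovers within ±10%
```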


Medical laboratory scientists may employ locally collected fresh normal blood as a calibrator; however, the process for validation and certification is laborious, so few attempt it. The selected specimens are assayed using reference equipment and methods, calibration values are assigned, and the new or modified assay is calibrated from these values. The Student t-test is often the statistic employed to match the means of the reference and of the new assay. Often the reference equipment and methods are provided by a nearby laboratory.
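The Student t-test comparison of means mentioned above can be sketched as follows, here as a paired test because the same specimens are assayed by both methods. All values are hypothetical, and the critical value is the standard two-tailed t for 7 degrees of freedom at α = 0.05.

```python
import math
from statistics import mean, stdev

# Hemoglobin (g/dL) on the same eight specimens by both methods (hypothetical).
reference = [13.8, 14.2, 15.1, 12.9, 14.6, 13.5, 15.0, 14.1]
new_assay = [13.9, 14.1, 15.3, 12.8, 14.7, 13.6, 14.9, 14.2]

# Paired t statistic: mean of the per-specimen differences over its
# standard error.
diffs = [r - n for r, n in zip(reference, new_assay)]
t_stat = mean(diffs) / (stdev(diffs) / math.sqrt(len(diffs)))

# Two-tailed critical value for df = 7 at alpha = 0.05 is about 2.365;
# |t| below this means the two means are not demonstrably different,
# supporting transfer of the calibration values.
print(abs(t_stat) < 2.365)
```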



Determination of Accuracy by Regression Analysis


If a series of five calibrators is used, results may be analyzed by the following regression equations:


y = a + bx

Slope (b) = [nΣXY − (ΣX)(ΣY)] / [nΣX² − (ΣX)²]

Intercept (a) = [ΣY − b(ΣX)] / n


where x and y are the variables; a = intercept between the regression line and the y-axis; b = slope of the regression line; n = number of values or elements; X = first score; Y = second score; ΣXY = sum of the product of first and second scores; ΣX = sum of first scores; ΣY = sum of second scores; ΣX² = sum of squared first scores. Perfect correlation generates a slope of 1 and a y intercept of 0. Local policy establishes limits for slope and y intercept; for example, many laboratory directors reject a slope of less than 0.9 or an intercept of more than 10% above or below zero (Figure 5-2).
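The slope and intercept formulas above translate directly into code. The calibrator values below are hypothetical; x holds the assigned (reference) values and y the new-assay results.

```python
# Least-squares slope and intercept, written out term by term as in the
# formulas above.
def regression(x, y):
    n = len(x)
    sum_x, sum_y = sum(x), sum(y)
    sum_xy = sum(xi * yi for xi, yi in zip(x, y))
    sum_x2 = sum(xi * xi for xi in x)
    b = (n * sum_xy - sum_x * sum_y) / (n * sum_x2 - sum_x ** 2)  # slope
    a = (sum_y - b * sum_x) / n                                   # intercept
    return a, b

x = [20, 60, 100, 120, 160]   # calibrator assigned values (hypothetical)
y = [22, 61, 98, 121, 158]    # new-assay results (hypothetical)
a, b = regression(x, y)

# Acceptable per the text: slope between 0.9 and 1.1 and intercept within
# 10% of zero.
print(0.9 <= b <= 1.1)
```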



Slope measures proportional systematic error; the higher the analyte value, the greater the deviation from the line of identity. Proportional errors are caused by malfunctioning instrument components or a failure of some part of the testing process. The magnitude of the error increases with the concentration or activity of the analyte. An assay with proportional error may be invalid.


Intercept measures constant systematic error (or bias, in laboratory vernacular), a constant difference between the new and reference assay regardless of assay result magnitude. A laboratory director may choose to adopt a new assay with systematic error, but must modify the published reference interval.


Regression analysis gains sufficient power when 100 or more patient specimens are tested using both the new and reference assay in place of or in addition to calibrators. Data may be entered into a spreadsheet program that offers an automatic regression equation.



Precision


Unlike determination of accuracy, assessment of precision (dispersion, reproducibility, variation, random error) is a simple validation effort, because it merely requires performing a series of assays on a single sample or lot of reference material.8 Precision studies always assess both within-day and day-to-day variation about the mean and are usually performed on three to five calibration samples, although they may also be performed using a series of patient samples. To calculate within-day precision, the scientist assays a sample at least 20 consecutive times using one reagent batch and one instrument run. For day-to-day precision, at least 10 runs on 10 consecutive days are required. The day-to-day precision study employs the same sample source and instrument but separate aliquots. Day-to-day precision accounts for the effects of different operators, reagents, and environmental conditions such as temperature and barometric pressure.


The collected data from within-day and day-to-day sequences are reduced by formula to the mean and a measure of dispersion such as standard deviation or, most often, coefficient of variation in percent (CV%):


Mean (x̄) = Σx / n


where Σx = the sum of the data values and n = the number of data points collected


Standard deviation (s) = √[Σ(x − x̄)² / (n − 1)]

CV% = 100 · s / x̄

The CV% documents the degree of random error generated by an assay, a function of assay stability.
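Applying the three formulas above to a short run of control results looks like this; the hemoglobin values are hypothetical within-day replicates.

```python
import math

# Hypothetical within-day replicates of a hemoglobin control (g/dL).
results = [14.1, 14.0, 14.2, 13.9, 14.1, 14.0, 14.3, 13.8, 14.0, 14.1]

n = len(results)
x_bar = sum(results) / n                                       # mean
s = math.sqrt(sum((x - x_bar) ** 2 for x in results) / (n - 1))  # SD, n - 1
cv_pct = 100 * s / x_bar                                       # CV%

print(round(cv_pct, 1))  # about 1.0% for this tight run
```

A CV% near 1% for a hemoglobin control would fall comfortably inside the within-run limits discussed below.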


CV% limits are established locally. For analytes based on primary standards, the within-run CV% limit may be 5% or less, and for hematology and hemostasis assays, 10% or less; however, the day-to-day run CV% limits may be as high as 30%, depending upon the complexity of the assay. Although accuracy, linearity, and analytical specificity are just as important, medical laboratory scientists often equate the quality of an assay with its CV%. The best assay, of course, is one that combines the lowest CV% with the greatest accuracy.


Precision for visual light microscopy leukocyte differential counts on stained blood films is immeasurably broad, particularly for low-frequency eosinophils and basophils.9 Most visual differential counts are performed by reviewing 100 to 200 leukocytes. Although impractical, it would take differential counts of 800 or more leukocytes to improve precision to measurable levels. Automated differential counts generated by profiling instruments, however, provide CV% levels of 5% or lower because these instruments count thousands of cells.
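The effect of the number of cells counted can be illustrated with a simple binomial counting-statistics model: for a cell type present at true proportion p, the CV of its differential percentage shrinks as more cells are counted. The 2% eosinophil figure below is a hypothetical example.

```python
import math

def count_cv_pct(p, n_cells):
    """CV% of a differential percentage when n_cells are counted
    (binomial model: SD of a proportion is sqrt(p(1 - p)/n))."""
    return 100 * math.sqrt(p * (1 - p) / n_cells) / p

# A 2% eosinophil population: imprecision at 100 vs. 800 counted cells.
print(round(count_cv_pct(0.02, 100)))  # roughly 70% CV at 100 cells
print(round(count_cv_pct(0.02, 800)))  # roughly 25% CV at 800 cells
```

Even at 800 cells, the CV for a low-frequency cell type remains far above the 5% achievable by automated counters that classify thousands of cells.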



Linearity


Linearity is the ability to generate results proportional to the calculated concentration or activity of the analyte. The laboratory scientist dilutes a high-end calibrator or patient sample to produce at least five dilutions spanning the full range of the assay. The dilutions are then assayed. Computed and assayed results for each dilution are paired and plotted on a linear graph, on the x scale and y scale, respectively. The line is inspected visually for nonlinearity at the highest and lowest dilutions (Figure 5-3). The acceptable linear range is established just inside (inboard of) the values at which loss of linearity becomes evident. Although formulas exist for computing the limits of linearity, visual inspection is the accepted practice. Nonlinear graphs may be transformed using semilog or log-log graphs when necessary.
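Although visual inspection is the accepted practice, a simple numeric screen can flag dilutions whose recovery deviates from the computed value. The values and the 10% deviation limit below are hypothetical.

```python
# Pair computed dilution values (x) with assayed results (y) and flag
# dilutions whose recovery deviates beyond a chosen limit (10% here).
computed = [25, 50, 100, 200, 400]   # expected values from the dilution math
assayed  = [26, 49, 101, 195, 340]   # hypothetical results; high end drops off

def nonlinear_points(x, y, limit=0.10):
    return [xi for xi, yi in zip(x, y) if abs(yi - xi) / xi > limit]

print(nonlinear_points(computed, assayed))  # [400]: linearity lost at the top
```

Here the acceptable range would be set inboard of 400, at the highest dilution that still recovers within the limit.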



Patient samples with results above the linear range must be diluted and reassayed. Results from diluted samples that fall within the linear range are valid; however, they must be multiplied by the dilution. Laboratory scientists never report results that fall below or above the linear limits, because accuracy is compromised in the nonlinear regions of the assay. Lower limits are especially important when counting platelets or assaying coagulation factors. For example, the difference between 1% and 3% factor VIII activity affects treatment options and the potential for predicting coagulation factor inhibitor formation. Likewise, the difference between a platelet count of 10,000/mcL and 5000/mcL affects the decision to treat with platelet concentrate.





Levels of Laboratory Assay Approval


The U.S. Food and Drug Administration (FDA) categorizes assays as cleared, analyte-specific reagent (ASR) assays, research use only (RUO) assays, and home-brew assays. FDA-cleared assays are approved for the detection of specific analytes and should not be used for off-label applications. Details are given in Table 5-1.




Documentation and Reliability


Validation is recorded on standard forms. The Clinical Laboratory Standards Institute (CLSI) and David G. Rhoads Associates (http://www.dgrhoads.com/files1/EE5SampleReports.pdf) provide automated electronic forms. Validation records are stored in prominent laboratory locations and made available to laboratory assessors upon request.


Precision and accuracy maintained over time provide assay reliability. The recalibration interval may be once every 6 months or in accordance with operators’ manual recommendations. Recalibration is necessary whenever reagent lots are updated unless the laboratory can demonstrate that the reportable range is unchanged using lot-to-lot comparison. When control results demonstrate a shift or consistently fall outside action limits, or when an instrument is repaired, the validation procedure is repeated.


Regularly scheduled validity rechecks, lot-to-lot comparisons, instrument preventive maintenance, staff competence, and scheduled performance of internal quality control and external quality assessment procedures assure continued reliability and enhance the value of a laboratory assay to the patient and physician.



Lot-to-Lot Comparisons


Laboratory managers reach agreements with vendors to sequester kit and reagent lots, which ensures infrequent lot changes, ideally no more than once a year. The new reagent lot must arrive approximately a month before the laboratory runs out of the old lot so that lot-to-lot comparisons may be completed and differences resolved, if necessary. The scientist uses control or patient samples and prepares a range of analyte dilutions, typically five, spanning the limits of linearity. If the reagent kits provide controls, these are also included, and all are assayed using the old and new reagent lots. Results are charted as illustrated in Table 5-2.



TABLE 5-2


Example of a Lot-to-Lot Comparison
Sample        Old Lot Value    New Lot Value    % Difference
Low           7                6                −14%
Low middle    12               12               0%
Middle        20.5
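The % difference column of a lot-to-lot comparison is a one-line calculation; the sample values below mirror Table 5-2, and any acceptance limit applied to the differences is a local policy choice.

```python
# Percent difference of the new lot relative to the old lot, rounded to
# the nearest whole percent as in Table 5-2.
def pct_difference(old, new):
    return round(100 * (new - old) / old)

print(pct_difference(7, 6))    # -14, as in the table's low sample
print(pct_difference(12, 12))  # 0, as in the low-middle sample
```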