Coagulation Instrumentation





David L. McGlasson




Case Study


After studying the material in this chapter, the reader should be able to respond to the following case study:


A 35-year-old white man was admitted to the hospital with abdominal pain and tenderness, malaise, and a low-grade fever. A tentative diagnosis of cholecystitis was made, with possible surgical intervention considered. Pertinent medical history included tonsillectomy at age 6 and appendectomy at age 18 with no abnormal bleeding symptoms noted. The patient reported that he was taking no medications at this time. In anticipation of surgery, routine coagulation studies were ordered. When the specimen arrived in the laboratory, it was centrifuged, and it was noted that the plasma had a whitish, milky appearance. The specimen was processed by an automated analyzer using photo-optical end-point detection methodology, and the following results were obtained:

Test                          Result      Reference Interval   Flags
Prothrombin time              16.7 sec    10.9-13.0 sec        Lipemia
Partial thromboplastin time   >150 sec    30.6-35.0 sec        Lipemia
Fibrinogen                    245 mg/dL   190-410 mg/dL        Lipemia

Because the laboratory’s policy is to repeat testing after all abnormal coagulation results, the prothrombin time and partial thromboplastin time assays were repeated, and similar values were obtained.



The coagulation laboratory is an ever-changing environment populated by automated analyzers that offer advances in both volume and variety of tests.1 Hardware and software innovations provide for random access testing with multitest profiles. In the past, the routine coagulation test menu consisted of prothrombin time (PT) with the international normalized ratio (INR), partial thromboplastin time (PTT, activated partial thromboplastin time), fibrinogen, thrombin time, and D-dimer assays. More specialized testing was performed in tertiary care institutions or reference laboratories employing medical laboratory scientists with specialized training. With the introduction of new instrumentation and test methodologies, coagulation testing capabilities have expanded significantly, so that many formerly “specialized” tests can be performed easily by general medical laboratory staff. New instrumentation has made coagulation testing more standardized, consistent, and cost effective. Automation has not advanced, however, to the point of making coagulation testing foolproof or an exact science. Operators must develop expertise in correlating critical test results with the patient’s diagnosis or condition when monitoring antithrombotic therapy. Good method validation of procedures, cognitive ability, and theoretical understanding of the hemostatic mechanism are still required to ensure the accuracy and validity of test results so that the physician can make an informed decision about patient care.
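As background for the routine menu above, the INR is derived from the PT by normalizing against the laboratory’s geometric mean normal PT, raised to the power of the thromboplastin reagent’s international sensitivity index (ISI). A minimal sketch of this standard calculation; the sample values are illustrative only, not taken from this chapter:

```python
def inr(patient_pt: float, mean_normal_pt: float, isi: float) -> float:
    """INR = (patient PT / geometric mean normal PT) ** ISI."""
    return (patient_pt / mean_normal_pt) ** isi

# Illustrative values: patient PT 16.7 s, mean normal PT 12.0 s, ISI 1.0
print(round(inr(16.7, 12.0, 1.0), 2))  # -> 1.39
```

Note that a more sensitive reagent (ISI near 1.0) keeps the INR close to the simple PT ratio, which is why low-ISI thromboplastins are preferred for warfarin monitoring.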



Historical Perspective


Visual clot-based testing began in the eighteenth century. Laboratorians worked with animal and human blood, observing how long it took blood to clot after collection. With the advent of the microscope in the 1800s, scientists examined blood for increasing turbidity and visible clot formation.


Many advancements took place from 1822 to 1921. These included controlling the temperature during the formation of a clot, passing objects through the blood to detect resistance, and using glass capillary tubes to view clot formation. In the early 1900s, researchers monitored the length of time it took whole blood to clot in glass tubes while they were being tilted, a precursor to the Lee-White tilt tube procedure of subsequent years.


The “coaguloviscosimeter,” a primitive whole-blood clot detection device, was developed in 1910. The change in viscosity as blood clotted generated a voltage change that was recorded by a direct readout system. Voltage changes could be plotted against time to measure clot formation.


Except for point-of-care testing and platelet aggregometry, citrated plasma (usually platelet-poor plasma, plasma with a platelet count of less than 10,000/mcL) has now replaced whole blood in coagulation instruments, although the principle of interval to clot formation lives on.1,2


Plasma coagulation testing traces its roots to Gram, who in 1920 added calcium chloride to anticoagulated plasma at 37° C. He measured the increasing viscosity of the plasma during fibrin monomer polymerization, a principle used today in thromboelastography (TEG) and sonar clot detection, and his work laid the groundwork for the most commonly ordered coagulation tests in current use, the PT and PTT.3 Subsequent twentieth-century developments include manual wire loops, electromechanical clot detection using a movable lead (BBL Fibrometer) or rolling steel ball (Diagnostica Stago ST-4), and photo-optical clot detection (Coag-A-Mate, originally manufactured by General Diagnostics, Chapel Hill, N.C.). Nephelometers, first developed in 1920, measure 90-degree light dispersion in a colloidal suspension. As plasma clots, the change in light scatter can be measured over time, a principle in common use today.


The 1950s saw the development of the BBL Fibrometer, an instrument that can still be found in coagulation laboratories, although it is no longer being manufactured. This instrument employed an electromechanical clot detection methodology that allowed laboratories to transition from the manual tilt tube or wire loop method to a more accurate semiautomated testing process.


Current coagulation instruments apply many of the clot detection principles of these early analytical systems. They either observe clot formation (optical, nephelometric devices) or detect the clot by “feel” (mechanical, viscosity-based devices). Although the principles remain the same, the instruments have been enhanced to eliminate variations in pipetting and end-point detection. They also allow multiple analyses to be performed simultaneously on a single specimen.4



Clot End-Point Detection Principles


Coagulometers are automated or semiautomated. Semiautomated coagulometers require the operator to deliver test plasma and reagents manually to the reaction cuvette and limit testing to one or two specimens at a time. They are relatively inexpensive, but their use requires considerable operator expertise.


Fully automated analyzers provide pipetting systems that automatically transfer reagents and test plasma to reaction vessels and measure the end point without operator intervention (Table 47-1). Multiple specimens can be tested simultaneously. Automated coagulometers are expensive, and laboratory staff require specialized training to operate and maintain them. Regardless of technology, all semiautomated and automated analyzers offer better coagulation testing accuracy and precision than visual methods.



Instrument methodologies are classified into five groups based on the end-point detection principle:

• Electromechanical
• Photo-optical (turbidometric)
• Nephelometric
• Chromogenic
• Immunologic light absorbance



Historically, coagulation instruments each employed a single type of end-point detection system, mechanical or photo-optical. Photo-optical instruments operated at a fixed wavelength between 500 nm and 600 nm and could be used only for clot detection. Later instruments could also read at 405 nm to perform chromogenic assays. Although this change made it possible to automate advanced coagulation protocols, laboratories had to purchase, and train staff on, multiple analyzers to offer the wider range of coagulation testing capabilities. Since 1990, instrument manufacturers have successfully incorporated multiple detection methods into single analyzers, which allows a laboratory to purchase and train on only one instrument while still providing specialized testing capabilities.4



Electromechanical End-Point Detection


In electromechanical clot detection systems such as the system employed by BBL’s time-honored Fibrometer, fibrin strands attach to a moving mechanical electrode (probe), completing an electrical circuit and stopping the interval timer. There is one stationary and one moving probe. During clotting, the moving probe enters and leaves the plasma at regular intervals. The current between the probes is broken as the moving probe leaves the plasma. When a clot forms, the fibrin strand conducts current between the probes even when the moving probe exits the solution. The current completes a circuit and stops the timer.


Another mechanical clot detection method employs a magnetic sensor that monitors the movement of a steel ball within the test plasma. Two moving ball principles are used. In the first, an electromagnetic field detects the oscillation of a steel ball within the plasma-reagent solution. As fibrin strands form, the viscosity starts to increase, slowing the movement (Figure 47-1). When the oscillation decreases to a predefined rate, the timer stops, indicating the clotting time of the plasma. This methodology is found on all Diagnostica Stago analyzers.
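The oscillation principle reduces to a simple threshold rule: the timer stops when measured ball movement drops below a predefined rate. A sketch of that logic follows; the readings and threshold are hypothetical, not values from any Stago instrument:

```python
def mechanical_clot_time(timestamps, amplitudes, threshold):
    """Return the first time at which ball oscillation amplitude falls
    below the predefined threshold, i.e., when rising viscosity from
    fibrin formation has slowed the ball enough to call a clot."""
    for t, amp in zip(timestamps, amplitudes):
        if amp < threshold:
            return t
    return None  # no clot detected within the observation window

# Hypothetical readings: amplitude decays as fibrin increases viscosity
seconds = [0.0, 5.0, 10.0, 15.0, 20.0, 25.0]
amps = [1.00, 0.95, 0.80, 0.55, 0.30, 0.10]
print(mechanical_clot_time(seconds, amps, threshold=0.5))  # -> 20.0
```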



In the second variation, a steel ball is positioned in an inclined well. The position of the ball is detected by a magnetic sensor. As the well rotates, the ball remains positioned on the incline. When fibrin forms, the ball is swept out of position. As it moves away from the sensor, there is a break in the circuit, which stops the timer. This technology can be found on AMAX and Destiny instruments distributed by Tcoag US, a division of Diagnostica Stago located in Parsippany, N.J. Mechanical methods are unaffected by plasma icterus or lipemia.



Photo-optical End-Point Detection


Photo-optical (turbidometric) coagulometers detect a change in plasma optical density (OD, transmittance) during clotting. Light of a specified wavelength passes through plasma, and its intensity (OD) is recorded by a photodetector. The OD depends on the color and clarity of the sample and is established as the baseline. Formation of fibrin strands causes light to scatter, allowing less to fall on the photodetector and thus generating an increase in OD. When the OD rises to a predetermined variance from baseline, the timer stops, indicating clot formation. Because the baseline OD is subtracted from final OD, effects of lipemia and icterus are minimized. Many optical systems employ multiple wavelengths that discriminate and filter out the effects of icterus and lipemia. Many automated and semiautomated coagulation instruments developed since 1970 use photo-optical clot detection (Figure 47-2).
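The baseline-subtraction scheme described above can be expressed as a short routine: record the initial OD, then stop the timer when the OD rises a preset amount above it. The readings and threshold below are hypothetical:

```python
def photo_optical_clot_time(timestamps, od_readings, delta_threshold):
    """Detect clot formation as the first time optical density rises a
    preset amount above the baseline (first) reading. Subtracting the
    baseline minimizes constant offsets from lipemia or icterus."""
    baseline = od_readings[0]
    for t, od in zip(timestamps, od_readings):
        if od - baseline >= delta_threshold:
            return t
    return None

# Hypothetical readings: turbidity rises as fibrin strands form
t_sec = [0, 2, 4, 6, 8, 10, 12]
ods = [0.20, 0.20, 0.21, 0.24, 0.32, 0.45, 0.50]
print(photo_optical_clot_time(t_sec, ods, delta_threshold=0.10))  # -> 8
```

Because only the change from baseline matters, a lipemic sample with a high starting OD can still be timed, provided the photodetector is not saturated, which is why grossly lipemic specimens (as in the case study) may still be flagged for review.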




Nephelometric End-Point Detection


Nephelometry is a modification of photo-optical end-point detection in which 90-degree or forward-angle scatter, rather than OD, is measured. A light-emitting diode produces incident light at approximately 600 nm and a photodetector detects variations in light scatter at 90 degrees (side scatter) and at near 180 degrees (forward-angle scatter). As fibrin polymers form, side scatter and forward-angle scatter rise (Figure 47-3).4,5 The timer stops when scatter reaches a predetermined intensity, and the interval is recorded.



Nephelometry can be adapted to dynamic clot measurement. Continuous readings are taken throughout clotting, measuring the entire clotting sequence to completion and producing a clot curve or “signature.”
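A dynamic clot curve of this kind can be reduced to summary parameters, for example baseline scatter, final plateau, and the time of steepest rise (maximum reaction velocity). A sketch of that reduction over hypothetical continuous readings:

```python
def clot_signature(timestamps, scatter):
    """Summarize a dynamic clot curve: baseline, plateau, and the time
    at which scatter rises fastest (maximum reaction velocity)."""
    slopes = [(scatter[i + 1] - scatter[i]) / (timestamps[i + 1] - timestamps[i])
              for i in range(len(timestamps) - 1)]
    steepest = slopes.index(max(slopes))
    return {"baseline": scatter[0],
            "plateau": scatter[-1],
            "max_velocity_time": timestamps[steepest]}

# Hypothetical sigmoid-shaped clot curve
t_pts = [0, 5, 10, 15, 20, 25, 30]
s_pts = [10, 11, 14, 25, 48, 58, 60]
print(clot_signature(t_pts, s_pts))
```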


Nephelometry was first applied to immunoassay. As antigen-antibody complexes (immune complexes) precipitate, the resulting turbidity scatters incident light.6 In reactions in which the immune complexes are known to be too small for detection, the antibodies are first attached to microlatex particles. Coagulation factor immunoassays likewise employ specific antifactor antibodies bound to latex particles. Nephelometry provides a quantitative, but not functional, assay of coagulation factors. It is often employed in complex automated coagulometers because it allows both clot-based assays and immunoassays, and nephelometry-style analyzers can produce high-volume, multiple-assay coagulation profiles.



Chromogenic End-Point Detection


Chromogenic (synthetic, amidolytic) methodology employs a colorless synthetic oligopeptide substrate conjugated to a chromophore, usually para-nitroaniline (pNA). Chromogenic analysis is a means for measuring specific coagulation factor activity, because it exploits the factors’ enzymatic (protease) properties. The oligopeptide is a series of amino acids whose sequence matches the natural substrate of the protease being measured.7 Protease cleaves the chromogenic substrate at the site binding the oligopeptide to the pNA, freeing the pNA. Free pNA is yellow; the OD of the solution is proportional to protease activity and is measured by a photodetector at 405 nm. In some instances a fluorescent conjugate is used; this method is called fluorogenic.


The activity of coagulation factors and many other enzymes is measured by the chromogenic method directly or indirectly:



• Direct chromogenic measurement: OD is proportional to the activity of the substance being measured, for instance, coagulation control protein C activity.


• Indirect chromogenic measurement: The protein or analyte being measured inhibits a reagent enzyme that has activity directed toward the synthetic substrate. The change in OD is inversely proportional to the concentration or activity of the substance being measured; for example, heparin in the anti–factor Xa assay. The principle used in instruments marketed by the DiaPharma Group, Inc. (West Chester, Ohio) to perform chromogenic factor X assay is shown in Figure 47-4.
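Both cases amount to reading activity off a calibration curve: in a direct assay the OD at 405 nm rises with activity, and in an indirect assay it falls. A sketch using a two-point linear calibration; all calibrator values are hypothetical:

```python
def activity_from_od(od, calibrators):
    """Interpolate analyte activity from 405 nm OD using a two-point
    linear calibration. The calibrator pairs (od, activity) encode the
    slope's sign, so the same interpolation serves direct assays
    (OD rises with activity) and indirect, inhibition-based assays
    (OD falls as the analyte level rises)."""
    (od1, a1), (od2, a2) = calibrators
    return a1 + (od - od1) * (a2 - a1) / (od2 - od1)

# Direct example (hypothetical protein C calibrators, % activity)
print(round(activity_from_od(0.60, [(0.20, 25.0), (1.00, 125.0)]), 2))  # -> 75.0
# Indirect example (hypothetical anti-Xa calibrators, heparin IU/mL)
print(activity_from_od(0.60, [(1.00, 0.0), (0.20, 1.0)]))  # -> 0.5
```

In practice analyzers fit multipoint curves rather than two calibrators, but the direct/inverse proportionality shown here is the same.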




Immunologic Light Absorbance End-Point Detection


Immunologic assays are the newest assays available in coagulation laboratories and are based on antigen-antibody reactions similar to those used in nephelometry as described previously. Latex microparticles are coated with antibodies directed against the selected analyte (antigen). Monochromatic light passes through the suspension of latex microparticles. When the wavelength is greater than the diameter of the particles, only a small amount of light is absorbed.8 When the coated latex microparticles come into contact with their antigen, however, the antigen attaches to the antibody and forms “bridges,” which causes the particles to agglutinate. As the diameter of the agglutinates increases to the wavelength of the monochromatic light beam, light is absorbed. The increase in light absorbance is proportional to the size of the agglutinates, which in turn is proportional to the antigen level.


Immunoassay technology became available on coagulometers in the 1990s and is used to measure a growing number of coagulation factors and proteins, expanding the diagnostic capabilities of the laboratory. Coagulation protein testing, which used to take hours or days to perform using traditional antigen-antibody detection methodologies such as enzyme-linked immunosorbent assay or electrophoresis, now can be done in minutes on an automated analyzer.


The introduction of new coagulation methodologies has improved testing capabilities in the coagulation laboratory. Refinement of these methodologies has allowed the use of synthetic substrates and measurements of single proenzymes, enzymes, and monoclonal antibodies, which increases the ability to recognize the causes of disorders of hemostasis and thrombosis.8



Advances in Coagulometry


Significant advances have been made in the capability and flexibility of coagulation instrumentation. Instruments previously required manual pipetting, recording, and calculation of results, which necessitated significant operator expertise, intervention, and time. Current technology allows a “walkaway” environment in which, after specimens and reagents are loaded and the testing sequence is initiated, the operator can move on to perform other tasks.


Jun 12, 2016 | Posted in HEMATOLOGY
