Surgical Site Infections





Teena Chopra

Deverick J. Anderson

Keith S. Kaye



HISTORICAL BACKGROUND

The role of the surgeon’s hands in introducing bacteria into wounds was slow to be recognized despite the work of Semmelweis in 1847. Although rubber gloves were first developed for the use of Halsted’s scrub nurse in 1889 to protect her hands from harsh antiseptics, widespread use of rubber gloves in surgical procedures did not become established until well into the 20th century.

In the 20th century, the standardization of aseptic practices in the operating room greatly improved the safety of clean operative procedures, but operations involving anatomic structures with dense endogenous flora that cannot be eliminated before the operation, such as of the colon or rectum, continued to carry a very high risk of infection. A major collaborative study organized by the National Research Council (NRC) in 1964 documented the rate of surgical site infection (SSI) following 15,613 operations carried out over 27 months from 1959 to 1962 in 16 operating rooms of five university hospitals (1).

The NRC study was one of the earliest and certainly one of the most convincing to document the importance of endogenous bacteria as the primary etiologic agent of SSIs. This report also introduced a system for classifying wounds according to the risk of endogenous contamination (and thus of postoperative wound infection), which provided a basis for comparing SSI statistics and was a harbinger to the currently used metric of “wound class.” Although more sensitive and specific wound classification systems employing additional risk factors for wound infection have been developed since the NRC study (2,3), all systems continue to incorporate elements of this original scheme. The NRC report contained results from the largest and most carefully conducted study in its day to examine a host of other factors related to the patient and the environment that influenced the risk of postoperative wound infections. Multivariate analysis of this large body of data provided convincing evidence of changes in the risk of developing postoperative infections influenced by the patient’s age, obesity, steroid administration, malnutrition, presence of remote infection, use of drains, duration of the operation, and duration of preoperative hospitalization.

Although antibiotics were introduced near the end of World War II, their effective use for preventing postoperative infection ultimately was made possible by the pioneering studies of Dr. John Burke, who used an animal model to demonstrate the critical importance of the timing of prophylactic antibiotic administration (4). He showed via a guinea pig model that the appropriate antibiotics given before bacterial contamination could significantly reduce the risk of infection, whereas the same antibiotic given after bacterial contamination was much less effective. This information was translated into trials demonstrating clinically and statistically significant effects in human patients undergoing scheduled operative procedures, first by Bernard and Cole (5) and then by Polk and Lopez-Mayor (6) in the 1960s. Work on prophylactic antibiotics since that time has focused on defining those procedures and circumstances most likely to benefit from the use of prophylactic antibiotics and on examining the relative efficacy of different drugs and different routes and regimens of administration.

As improvements in anesthetic care and understanding of surgical physiology permitted more aggressive and widespread surgical interventions during the second half of the 20th century, the importance of surveillance for infectious complications became more evident. In the 1970s, the Centers for Disease Control and Prevention (CDC) began the National Nosocomial Infections Surveillance (NNIS) system (7). Although it included all healthcare-associated infections (HAIs), one component emphasized from the beginning was the collection of data on postoperative infections. Data from the NNIS system provide a rich source of information about the relative occurrence of infections at all sites in hospitalized surgical patients (8). Also, in the 1970s, surgical groups’ reports of surveillance of large numbers of procedures validated the relationship between wound class and different risks of infection as well as the beneficial effect of reporting SSI rate data to the operating surgeons on reducing the incidence of SSI (9).

Recently, SSI has taken on an increasingly visible role as a potentially preventable, publicly reported condition. Occurrences of SSI and measurements of compliance with processes to prevent SSI can now impact a hospital’s accreditation by organizations such as The Joint Commission (10) and payments from the Centers for Medicare and Medicaid Services (CMS) and insurance carriers (11).


THE IMPACT OF SURGICAL INFECTIONS

Postoperative infections in surgical patients can prolong the length of hospitalization for substantial periods, depending on the type of operation (estimated in a recent study at 1 million additional inpatient-days, incurring $1.6 billion in excess costs) (12). Cardiothoracic, orthopedic, and gastrointestinal operations are especially costly in this regard as the result of both pulmonary and operative site infections (12). In addition to the higher direct costs of care, indirect costs should be considered in calculating the consequences of postoperative infection. These costs include the time the patient loses from gainful employment and the possible medicolegal actions that the patient could take against a hospital or the surgical staff (see Chapter 17).





SURGICAL INFECTION SURVEILLANCE AND CLASSIFICATION OF SURGICAL WOUNDS

As indicated earlier, the oldest and best-established definitions of surgical wound classes originated with the NRC study of the efficacy of ultraviolet light for reducing wound infections. That study placed all wounds into one of five classes (2):



  • Refined-clean: Clean elective operations, not drained, and primarily closed.


  • Other (clean): Operations that encountered no inflammation and experienced no lapse in technique. In addition, there was no entry into the gastrointestinal or respiratory tract except for incidental appendectomy or transection of the cystic duct in the absence of signs of inflammation. Entrance into the genitourinary tract or biliary tract was considered clean if the urine and/or bile were sterile.


  • Clean-contaminated: Gastrointestinal tract or respiratory tract entered without significant spill. Minor lapse in technique. Entry into the genitourinary tract or biliary tract in the presence of infected urine or bile.


  • Contaminated: Major lapse in technique (such as emergency open cardiac massage), acute bacterial inflammation without pus, spillage from the gastrointestinal tract, or fresh traumatic wound from a relatively clean source.


  • Dirty: Presence of pus, perforated viscus, or traumatic wound that is old or from a dirty source.

Subsequent reports have condensed this system into four groups, combining refined-clean and other (clean) into the single category of clean. Although the risk for SSI generally increases as procedures move from clean to clean-contaminated to contaminated and dirty, subsequent reports indicate a consistent trend toward decreased overall SSI rates over time, most marked in the contaminated and dirty classes of wounds (Table 36.1) (9,16). These rates could have been influenced by a variety of factors, including a better understanding of the effective use of prophylactic antibiotics and of the bacteriology of dirty operative procedures, and a reduction in the practice of closing the skin in dirty procedures.

Since the NRC study, much effort has focused on understanding which factors other than wound class affect the SSI risk. This trend began with the original analysis of additional risk factors performed with the NRC study. The earliest efforts to control SSI focused on lowering infection rates for clean wounds, because these wounds should theoretically have a zero SSI rate if all bacteria could be eliminated from the wound. Thus, efforts focused on aseptic technique for the prevention of SSI. Subsequent work found that even clean wounds become contaminated with some bacteria, and evaluation of historical data discovered potential interventions for reducing SSI rates even in high-risk wounds. This provided an incentive to understand the underlying SSI risk in order to sensibly compare inter- or intrafacility SSI rates.

The CDC developed a simplified risk index on the basis of analyses of NNIS SSI data that includes three components: the physical status index of the American Society of Anesthesiologists (ASA) (14), surgical duration, and wound class (3). The ASA index assigns one point for a preoperative assessment score of 3, 4, or 5. In addition, a cut point was defined at the 75th percentile of operative duration for each operative procedure; a point is assigned when operative duration exceeds this percentile. A wound classification of contaminated or dirty adds one point to the risk score. Thus, the NNIS SSI risk index has a possible range from 0 to 3.
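The three-component scoring scheme described above can be expressed as a short calculation. The sketch below is illustrative only; the function name and argument names are hypothetical, and the duration cut point (the procedure-specific 75th percentile) must be supplied by the surveillance program.

```python
def nnis_risk_index(asa_score, duration_minutes, cutpoint_minutes, wound_class):
    """Illustrative sketch of the NNIS SSI risk index (range 0-3).

    asa_score        : ASA physical status (1-5); 3, 4, or 5 adds one point.
    duration_minutes : operative duration for this procedure.
    cutpoint_minutes : procedure-specific 75th percentile of duration
                       (assumed to be supplied externally).
    wound_class      : NRC wound class string; "contaminated" or "dirty"
                       adds one point.
    """
    points = 0
    if asa_score >= 3:                       # ASA class 3, 4, or 5
        points += 1
    if duration_minutes > cutpoint_minutes:  # duration > 75th percentile
        points += 1
    if wound_class in ("contaminated", "dirty"):
        points += 1
    return points

# An ASA 4 patient, a 3-hour case against a 2-hour cut point, dirty wound:
print(nnis_risk_index(4, 180, 120, "dirty"))  # 3
```

Note that the index deliberately weights all three components equally, which is part of what makes it easy to apply at the bedside.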

A comparison of the predictive accuracy of the NNIS SSI risk index with the old NRC classification shows that this simpler index provides increased accuracy and consistency while being easier to apply (Table 36.2). The ratios of risks within single NRC wound classes range between 3.9 and 5.4, whereas all risk ratios within single NNIS risk strata fall between 1.0 and 2.1. For surveillance programs with limited resources, surveillance of the 53% of patients with ≥1 SSI risk factor would yield data on 75% of all SSIs, thus increasing the efficiency of surveillance efforts (4).

Despite its advantages over the NRC wound classification system with regard to estimating SSI risk, the NNIS index, like all indexes, is not effective in predicting outcomes for individual patients. In addition, the NNIS index lacks predictive power for certain highly standardized procedures, such as coronary artery bypass grafting (CABG) (17), cesarean section (16), and craniotomy (18), for which the majority of patients have the same or similar NNIS index score. Although the NNIS index can accurately distinguish the risk of procedures from different categories of operative procedures, it does a poor job of distinguishing higher- and lower-risk procedures among all patients undergoing the same procedure. In these instances, risk factors specific to the procedure and to the population become more important. Another potential problem with the NNIS system is inconsistency in the assignment of ASA scores (19). A comparison of the sensitivity and specificity of ASA scores with the presence of ≥3 discharge diagnoses would be of interest; this comparison could probably be carried out on the original data sets used in the studies by Haley et al. (2) and Culver et al. (3).








TABLE 36.2 Comparison of Centers for Disease Control and Prevention’s (CDC) National Nosocomial Infection Surveillance (NNIS) System and National Research Council (NRC) Risk Predictions for Surgical Wound Infection

                                    NNIS Risk Index
NRC Class               0      1      2      3       All    Maximum Ratio(a)
Clean                   1.0    2.3    5.4            2.1    5.4
Clean-contaminated      2.1    4.0    9.5            3.3    4.5
Contaminated                   3.4    6.8    13.2    6.4    3.9
Dirty                          3.1    8.1    12.8    7.1    4.1
All                     1.5    2.9    6.8    13.0    2.8
Maximum ratio(a)        2.1    1.7    1.8    1.0

(a) Ratio of the highest to the lowest infection rate within a wound class or risk index stratum.


The CDC recently introduced the Standardized Infection Ratio (SIR) for surveillance of several HAIs, including SSIs. Compared with the traditional NNIS risk index, the SSI SIR is more specific because it is derived from logistic regression modeling of all procedure-level data collected by facilities in the CDC’s National Healthcare Safety Network (NHSN; the updated version of NNIS). The SSI SIR will provide improved risk adjustment and will replace the current risk-stratified SSI rates (20). The SSI SIR, however, does not account for many patient-specific factors, such as obesity, diabetes, smoking, or redo procedures, and compares the current SSI rate to the past SSI rate. Other organizations, including the National Surgical Quality Improvement Program (NSQIP) and The Society of Thoracic Surgeons (STS), use different types of risk adjustment models that include additional variables (20,21).
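At its core, an SIR is simply the ratio of the number of infections observed to the number predicted by a risk-adjustment model for the same set of procedures. The sketch below shows that ratio only; the function name is hypothetical, and producing the predicted count requires applying NHSN's regression model to each procedure, which is outside the scope of this illustration.

```python
def standardized_infection_ratio(observed_ssis, predicted_ssis):
    """Illustrative sketch: SIR = observed infections / predicted infections.

    observed_ssis  : count of SSIs actually detected by surveillance.
    predicted_ssis : expected count from a risk-adjustment model (e.g., the
                     NHSN logistic regression model) summed over procedures.
    """
    if predicted_ssis <= 0:
        raise ValueError("predicted count must be positive")
    return observed_ssis / predicted_ssis

# An SIR above 1.0 indicates more SSIs than the model predicts;
# below 1.0 indicates fewer.
print(standardized_infection_ratio(12, 8.0))  # 1.5
```

The advantage over a crude rate is that the denominator already reflects the case mix, so facilities with very different patient populations can be compared on a common scale.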


HOST FACTORS THAT INFLUENCE INFECTION RISK

Many individual host factors influence SSI risk. In most instances, the precise mechanism of action that links the risk factor and the infectious outcome is not known, although plausible explanations have often been provided based on logical reasoning. Thus, the increased SSI rates observed with advanced age, morbid obesity, weight loss, hypoalbuminemia, impaired functional status, immunosuppression, and diabetes mellitus have been attributed, in part, to nonspecific defects in host defenses. The increased SSI risk observed in patients with anergy is not easily related to other measurable immune functions. It is well established, however, that cigarette smoking decreases wound healing by causing vasoconstriction and decreased tissue oxygenation (22).

Patients who have an active infection at another body site are at increased risk of postoperative SSI (1). This finding could relate to the increased risk that significant numbers of bacteria will gain access to the wound during the procedure or to inoculation of the surgical wound by bacteremia (23). Data from human wounds suggest that the risk of postoperative infection is very great whenever the wound inoculum is >10^5 bacteria (24). Although lymphatics have been suspected as a route of infection in patients with distal infections, evidence is lacking (25). Patients who have been shaved at the surgical site before the time of operation theoretically have a higher risk of infection because of abrasions caused by the razor and subsequent bacterial proliferation and inflammation in those injuries (26). In vascular surgery, similar operations have a higher risk of postoperative infection in the groin region than in the arm or neck (27). This could stem from local vascularity, local differences in bacterial number and type, or both.


INTRAOPERATIVE EVENTS THAT INFLUENCE INFECTION RISK, AND METHODS FOR PREVENTION


DURATION OF OPERATION (28,29)

One of the most consistently reported factors in SSI is the duration of the operative procedure. The precise connection between duration and SSI risk is not known. It is plausible that a prolonged operation results in more desiccation of tissues, potential for hypothermia of the patient, and increased exposure of the wound to bacteria. It is also possible, however, that a longer operative duration is a marker for other, unmeasured factors, such as the underlying difficulty of the procedure, more scarring, larger tumor, patient obesity, difficulty in exposure (30), or the skill or experience of the surgeon. An operation that is rushed could heighten the risk of intraoperative contamination or of imperfect hemostasis with subsequent increased SSI risk. Operations should not be prolonged unnecessarily, but emphasis on the speed of operation can be misleading.


TRANSFUSION AND FLUID MANAGEMENT

Repeated blood transfusions can increase the risk of infection by altering the body’s immune response, especially macrophage function. There is a dose-dependent correlation between blood product transfusion and increased mortality and infections in trauma patients. Additionally, crystalloids have been shown to reduce tissue oxygen supply and hence should be avoided (30).



HYPERGLYCEMIA

High blood glucose (≥140 mg/dL), irrespective of diabetes status, increases the risk of SSI (31). However, overly aggressive glycemic control can cause hypoglycemia, so monitoring serum glucose during the perioperative period is essential. Maintaining serum glucose <200 mg/dL has been demonstrated to reduce SSI following some procedures, including adult but not pediatric cardiothoracic surgery. Ata et al. showed that a basal-bolus insulin regimen is preferable to sliding-scale insulin, as it reduces SSIs and provides good glycemic control in adult general surgery patients with type 2 diabetes (32).


DELAYED PRIMARY CLOSURE

Delayed primary closure is recommended in patients who have highly contaminated wounds, as this approach leads to improved blood flow at the wound edges and hence better delivery of functional phagocytes, resulting in increased defense against infections, especially through the first 5 to 6 postoperative days (33).


INTRAOPERATIVE HYPOTHERMIA

The risk of SSI may also be decreased by maintaining intraoperative normothermia (34), particularly in colorectal surgery. Intraoperative hypothermia impairs various aspects of the immune system through generalized vasoconstriction, which decreases subcutaneous blood flow and tissue oxygen tension and delays wound healing (30).

Jun 16, 2016 | Posted in INFECTIOUS DISEASE
