



© Springer International Publishing Switzerland 2015
Karl Y. Bilimoria, Christina A. Minami and David M. Mahvi (eds.), Comparative Effectiveness in Surgical Oncology, Cancer Treatment and Research 164, DOI 10.1007/978-3-319-12553-4_1


Approaches to Answering Critical CER Questions



Christine V. Kinnier1, Jeanette W. Chung2 and Karl Y. Bilimoria 


(1)
Department of Surgery, Massachusetts General Hospital, Boston, MA, USA

(2)
Department of Surgery, Feinberg School of Medicine, Northwestern University, Chicago, IL, USA

(3)
Surgical Outcomes and Quality Improvement Center, Department of Surgery, Feinberg School of Medicine, Northwestern University, Chicago, IL, USA

 



 





Abstract

While randomized controlled trials (RCTs) are the gold standard for research, many research questions cannot be ethically or practically answered with an RCT. Comparative effectiveness research (CER) techniques are often better suited than RCTs to measure the effects of an intervention under routine care conditions, otherwise known as effectiveness. The CER techniques covered in this section include: effectiveness-oriented experimental studies such as pragmatic trials and cluster randomized trials, treatment response heterogeneity, observational and database studies including adjustment techniques such as sensitivity analysis and propensity score analysis, systematic reviews and meta-analyses, decision analysis, and cost-effectiveness analysis. Each section describes the technique and covers the strengths and weaknesses of the approach.


Keywords
Comparative effectiveness research · Surgical oncology · Observational and database studies · Pragmatic trials



1 Introduction


Significant advances in evidence-based medicine have occurred over the past two decades, but segments of medical care are still practiced without underlying scientific evidence. Many practice patterns are so firmly established as the standard of care that a randomized controlled trial (RCT) would be unethical. Where evidence from RCTs exists, the study population is often narrow and not easily generalizable to most patients. Finally, multiple treatments are firmly ingrained in clinical practice yet have never been rigorously questioned.

Closing most of these knowledge gaps with an RCT, however, would be unethical, impractical, or unrepresentative of routine care. In the last case, this is because an RCT is concerned with measuring efficacy: the effect of an intervention as compared to placebo when all other variables are held constant. In other words, an RCT creates a study environment in which outcome differences can be attributed to the intervention with the greatest confidence. While RCTs accurately identify treatment effects under ideal conditions, patients do not receive their routine care under those conditions. When it comes to routine patient care, physicians are less interested in efficacy than effectiveness: the effect of an intervention under routine care conditions. In order to study effectiveness, health care investigators have developed a toolbox of alternative techniques, known collectively as comparative effectiveness research (CER).

Both researchers and policy agencies have recognized the power of studying effectiveness. The 2009 government stimulus package allocated $1.1 billion to CER [1]. The following year, the Affordable Care Act proposed multiple health care reforms to improve the value of the United States health care system. These reforms included the establishment of the Patient-Centered Outcomes Research Institute (PCORI), a publicly funded, non-governmental institute charged with conducting and funding CER projects [2]. As a result, funding for CER has grown substantially in the past 5 years.


2 Randomized Controlled Trials: Limitations and Alternatives


RCTs are the gold standard of medical research because they measure treatment efficacy, but they remain dissimilar to routine care and are impractical under many circumstances. First, RCTs are often prohibitively expensive and time consuming. One study reported that Phase III, National Institutes of Health-funded RCTs cost an average of $12 million per trial [3]. RCTs also take years to organize, run, and publish. Consequently, results may be outdated by publication. Second, some events or complications are so infrequent that enrolling a sufficiently large study population would be impractical. Third, clinical experts at high-volume hospitals follow RCT participants closely in order to improve study follow-up and treatment adherence. Following study conclusion, however, routine patients receiving routine care may not achieve the same level of treatment adherence. Therefore, study outcomes and routine care outcomes for the same intervention may differ substantially. Fourth, and perhaps most importantly, RCTs are restricted to a narrow patient population and a limited number of study interventions and outcomes. These necessary restrictions also limit the broad applicability of RCT results.

Clearly investigators cannot rely solely on RCTs to address the unanswered questions in surgical oncology. CER techniques offer multiple alternatives. We will introduce these approaches here and then describe each technique in more detail throughout the series.


3 Experimental Studies


The term “clinical trials” evokes images of blinded, randomized patients receiving treatment from blinded professionals in a highly specialized setting. These RCTs are designed to maximize sensitivity to the efficacy of the intervention under investigation. In other words, an intervention is most likely to demonstrate benefit in a setting where patients have few confounding medical diagnoses, every dose or interaction is monitored, and patients are followed closely over the study period. Unfortunately, RCTs are often prohibitively expensive and require many years to plan and complete. Furthermore, a medication that is efficacious during a highly monitored RCT may prove ineffective during routine care, where patients more frequently self-discontinue treatment due to unpleasant side effects or inconvenience. Clinical trials performed with the comparative effectiveness mindset aim to address some of these limitations.


3.1 Pragmatic Trials


Due to the constraints of an RCT, results may not be valid outside the trial framework. Pragmatic trials attempt to address this limitation in external validity by testing an intervention under routine conditions in a heterogeneous patient population. These routine conditions may include a broad range of adjustments. First, pragmatic trials may have broad inclusion criteria; ideally, trial patients are only excluded if they would not be intervention candidates outside of the study. Second, the intervention may be compared to routine practice, and clinicians and patients may not be blinded. This approach accepts that the placebo effect may augment intervention outcomes when used in routine practice. Third, pragmatic trials may use routine clinic staff rather than topic experts, and staff may be encouraged to adjust the medication or intervention as they would in routine practice. Fourth, patient-reported outcomes may be measured in addition to—or instead of—traditional outcomes. Finally, patients are usually analyzed according to their assigned intervention arm; this is also known as intention-to-treat analysis. Pragmatic trials may range anywhere along this spectrum: on one end, an otherwise traditional RCT may use an intention-to-treat analysis; on the other, investigators may aim to conduct the study under completely routine circumstances with the exception of intervention randomization. The investigators must determine what level of pragmatism is appropriate for their particular research question.
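The intention-to-treat principle described above can be made concrete with a minimal sketch. All patients and event counts below are hypothetical, invented purely for illustration; the point is that an intention-to-treat analysis groups patients by their assigned arm even when they cross over or stop treatment, while a per-protocol analysis groups them by the treatment actually received.

```python
# Minimal sketch of intention-to-treat (ITT) vs. per-protocol analysis.
# All patients below are hypothetical; the numbers are illustrative only.

patients = [
    # (assigned_arm, arm_actually_received, had_event)
    ("treatment", "treatment", False),
    ("treatment", "treatment", False),
    ("treatment", "control",   True),   # stopped the intervention mid-trial
    ("treatment", "control",   True),
    ("control",   "control",   True),
    ("control",   "control",   False),
    ("control",   "control",   True),
    ("control",   "treatment", False),  # crossed over to the intervention
]

def event_rate(rows, arm, key):
    """Event rate among patients grouped by `key` (0 = assigned, 1 = received)."""
    group = [r for r in rows if r[key] == arm]
    return sum(r[2] for r in group) / len(group)

# ITT: analyze by assigned arm (index 0), preserving randomization.
itt_treatment = event_rate(patients, "treatment", 0)
# Per-protocol: analyze by treatment actually received (index 1).
pp_treatment = event_rate(patients, "treatment", 1)

print(f"ITT treatment-arm event rate:          {itt_treatment:.2f}")
print(f"Per-protocol treatment-arm event rate: {pp_treatment:.2f}")
```

In this toy example the two analyses disagree sharply, because the patients who abandoned the intervention (and went on to have events) are retained in the treatment arm under ITT but excluded under per-protocol. ITT therefore estimates the effect of the decision to treat as it would play out in routine care, at the cost of diluting the apparent treatment effect.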

Pragmatic trials help determine medication or intervention effectiveness in a more realistic clinical setting. The adjustments that make pragmatic trials more realistic, however, also create limitations. Pragmatic studies are conducted under routine clinical circumstances, so an intervention that is effective in a large, well-funded private clinic may not be equally effective in a safety-net clinic. Therefore, clinicians must consider the study setting before instituting similar changes in their own practice. In addition, pragmatic trials include a broad range of eligible patients and consequently contain significant patient heterogeneity. This heterogeneity may dilute the treatment effect and necessitate large sample sizes and extended follow-up periods to achieve adequate statistical power. This may then inflate study cost and counterbalance any money saved by conducting the trial in a routine clinic with routine staff.
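The sample-size penalty from a diluted treatment effect can be sketched with the standard two-sample normal approximation for a continuous outcome. The effect sizes and standard deviation below are assumptions chosen for illustration, not values from the text.

```python
import math

def n_per_arm(delta, sigma, z_alpha=1.96, z_beta=0.84):
    """Approximate sample size per arm for a two-sample comparison of means
    (alpha = 0.05 two-sided, 80% power):
    n = 2 * (z_alpha + z_beta)^2 * sigma^2 / delta^2."""
    return math.ceil(2 * (z_alpha + z_beta) ** 2 * sigma ** 2 / delta ** 2)

# Suppose a traditional RCT expects a 10-point difference on some outcome
# scale with standard deviation 20 (hypothetical numbers):
print(n_per_arm(delta=10, sigma=20))   # -> 63 per arm

# If heterogeneity in a pragmatic trial dilutes the effect to 5 points,
# the required sample size roughly quadruples:
print(n_per_arm(delta=5, sigma=20))    # -> 251 per arm
```

Because the required sample size scales with the inverse square of the detectable difference, even a modest dilution of the treatment effect translates into a much larger and more expensive trial.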


3.2 Cluster Randomized Trials


Cluster randomized trials (CRTs) are defined by the randomization of patients by group or practice site rather than by individual. Beyond group randomization, CRTs may use either traditional RCT techniques or pragmatic trial techniques. Group-level randomization has multiple effects. Contamination between study arms is uncommon, since participants at one site are unlikely to know participants at another. This makes CRTs ideally suited for interventions that are organizational, policy-based, or provider-directed. With these education- or resource-based interventions, well-intentioned participants or physicians may distribute information to control-arm participants without the investigator’s knowledge. Risks of cross-contamination are significantly lowered when participants from different study arms are separated by site and less likely to know one another. Group randomization also better simulates real-world practice, since a single practice usually follows the same treatment protocol for most of its patients. Finally, physicians and clinic staff can be educated on the site-specific intervention and then care for patients under relatively routine circumstances. In some circumstances this may help to coordinate blinding, minimize paperwork, and reduce infrastructure and personnel demands.

Clustered patient randomization, however, introduces analytical barriers. Patients often choose clinics for a specific reason, so patient populations may differ more among clinics than within clinics. Furthermore, differences may not be easily measurable (e.g., patients may differ significantly in how much education they expect and receive prior to starting a new medication), so adjusting for these differences may be difficult during analysis. Consequently, cluster randomization requires hierarchical modeling to account for similarities within groups and differences between groups, but hierarchical modeling produces wider confidence intervals. As with pragmatic trials, this may require increases in subject number and follow-up time in order to detect clinically significant outcome differences. Unfortunately, individual participant enrollment is usually limited within each cluster, so increasing a trial’s statistical power usually requires enrollment of additional clusters.
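The sample-size inflation from clustering is commonly summarized by the design effect, which depends on the average cluster size and the intraclass correlation coefficient (ICC). The cluster size, ICC, and baseline sample size below are illustrative assumptions, not figures from the text.

```python
def design_effect(cluster_size, icc):
    """Inflation factor for cluster randomization:
    DEFF = 1 + (m - 1) * ICC, where m is the average cluster size
    and ICC is the intraclass correlation coefficient."""
    return 1 + (cluster_size - 1) * icc

# A hypothetical individually randomized trial needs 200 patients. Run
# instead in clusters of 30 with a modest within-clinic correlation:
deff = design_effect(cluster_size=30, icc=0.05)
print(deff)                   # -> 2.45
print(round(200 * deff))      # -> 490 patients needed under clustering
```

Note that the design effect grows with cluster size, which is why increasing statistical power by enrolling more patients per cluster yields diminishing returns: enrolling additional clusters is usually the more efficient route, as the text observes.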
