Chapter 3 EVIDENCE-BASED CARE IN THE COMMUNITY



INTRODUCTION


The community mental health worker caring for older people will develop over time a certain pattern to their clinical work. It is likely that this pattern will have been influenced by their general professional education and training (e.g. as a nurse or social worker), by their previous experience in the workplace (e.g. in an adult mental health service) and by their exposure to the older persons’ mental health services (OPMHS) team in which they currently work. They are likely to have had professional supervision as well as direction from line managers. In addition, their OPMHS team may have clinical pathways or protocols that guide the clinical approach to older people with particular problems. Some mental health workers might even have encountered pharmaceutical company representatives seeking to promote their products. Finally, the worker is likely also to bring their patterns of behaviour to their work. These patterns have been informed by their personal experiences with the healthcare system, and those of their family and friends. So the knowledge, skills and attitudes brought by the worker to the clinical situation are likely to have been moulded over time by a variety of influences. As only some of these influences are likely to be reliable and valid sources of evidence, it is worth examining more formally the types of evidence that might inform clinical behaviour.


This chapter provides a detailed outline of the evidence-based approach to healthcare. However, much of standard care in older persons’ mental health is still based on historical practice and humane principles rather than on evidence. In the absence of satisfactory evidence, the individual mental health worker, and the mental health team of which they are a member, should still take a scientific approach to practice, and standard practices that are not yet supported by evidence should be subject to critical scrutiny by the health worker.



QUANTITATIVE VERSUS QUALITATIVE METHODS


Before considering the scientific method and the types of empirical evidence that underpin evidence-based clinical practice, it is worth outlining the differences between quantitative and qualitative methods. Quantitative methods involve the collection of observations using numbers. For example, change in the average score on the Hamilton Depression Rating Scale is commonly used to establish the extent of improvement in older people with major depressive disorder in clinical trials of new psychological or pharmacological treatments. By contrast, qualitative methods involve the collection of observations without using numbers. For example, a focus group might be used to find out which aspects of a respite care service are most helpful to the carers of people with dementia. In clinical research, qualitative data are sometimes used to complement quantitative data.


Both quantitative and qualitative approaches involve the collection of data. However, the collection of data alone does not constitute evidence unless the data are analysed within a model or in relation to a hypothesis. This principle is as true for quantitative data as it is for qualitative data. Quantitative methods usually involve deductive reasoning, whereas qualitative methods usually involve inductive reasoning. Many research papers use deductive reasoning based on quantitative methods to arrive at their main finding and then inductive reasoning to generalise from the particular circumstances of their study to the broader case.


Although quantitative methods such as randomised controlled trials and quantitative meta-analyses (see below) are strongly preferred to other types of evidence by the Cochrane Collaboration and other groups promoting evidence-based medicine, qualitative methods can add considerably to knowledge in a number of fields relevant to mental health, including phenomenology, sociology and anthropology. Qualitative methods can generate types of data that provide clinically relevant nuances to quantitative data.




TYPES OF EVIDENCE


Much of the ‘evidence’ that we use in clinical practice is quite informal and would not pass close scrutiny. This includes clinical anecdotes provided by co-workers, editorials in clinical journals and opinion pieces written by leading exponents of a particular theory or treatment. Much of this ‘evidence’ is not reliable or unbiased, cannot be falsified, has not been subject to independent peer review, and is often best disregarded.


Scientific approaches to clinical evidence include peer-reviewed case studies, case series, case-control studies, cohort studies, randomised controlled trials, secondary analyses, systematic reviews and meta-analyses. Each of these will now be briefly outlined.






Cohort study


Cohort studies are observational studies that employ a longitudinal perspective. They lend themselves to causal thinking. In the typical cohort study, a population sample (the cohort) is followed over time (often years) to see what happens to them. For instance, a cohort study might be used to investigate predictors of depression as people grow older. Information on potential predictors is obtained at baseline, and so is not confounded by knowledge of which participants will ultimately develop depression. The main limitations of cohort studies are the length of time they take to run and the associated expense of mounting them, the difficulty of keeping the cohort intact during a lengthy period of follow-up (people move house, get fed up with the study, or die), and the problem of trying to predict at the beginning of the study which potential predictor variables are likely to be relevant.


Despite these limitations, cohort studies are a good way of generating lists of possible risk factors or protective factors for clinical conditions, including mental health problems. However, these factors still need to be rigorously tested in randomised controlled trials before they can usefully be introduced into clinical practice. Quite often, when randomised controlled trials are used in an attempt to modify risk factors previously identified in cohort studies, the interventions turn out to have no impact on disease incidence or prevalence. For example, oestrogen and non-steroidal anti-inflammatory drugs (NSAIDs) appeared in cohort studies to protect against the future development of Alzheimer’s disease, but in subsequent randomised controlled trials they seemed to have little, if any, protective effect.



Randomised controlled trials


Well-conducted randomised controlled trials (RCTs) generate much stronger evidence for the efficacy of treatments or preventive interventions than observational studies such as case-control studies and cohort studies. As a consequence, most regulatory authorities such as the US Food and Drug Administration (FDA) and the Australian Therapeutic Goods Administration (TGA) require at least two rigorous RCTs before considering a new drug for licensing.


The critical feature of an RCT is an intervention (or treatment) that is randomly assigned to some of the participants, while other participants receive the control intervention (in drug trials, this is commonly a placebo drug or a comparison drug). Random assignment is not the same thing as giving every second person the intervention and the others the placebo. It involves the use of some formal method of generating random numbers (often a computer program) and then using these numbers to assign participants to treatment arms of the study.
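
To illustrate the principle of computer-generated random allocation described above, the following Python sketch shows one simple way an allocation sequence for a two-arm trial might be produced. It is a simplified illustration only, not the procedure of any particular trial; the arm labels, sample size and seed are invented.

```python
import random

def allocate(n_participants, arms=("intervention", "placebo"), seed=2024):
    """Randomly assign each participant to a trial arm.

    A simplified illustration of computer-generated random allocation.
    Real trials commonly use permuted blocks or stratification, and the
    sequence is concealed from the clinicians recruiting participants.
    """
    rng = random.Random(seed)  # fixed seed so the sequence can be reproduced
    return [rng.choice(arms) for _ in range(n_participants)]

if __name__ == "__main__":
    for participant, arm in enumerate(allocate(10), start=1):
        print(f"Participant {participant:2d} -> {arm}")
```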


The RCT approach may be used for drug trials and for non-pharmacological trials. Thus, the development of a new drug for the treatment of Alzheimer’s disease and the development of a new type of cognitive behaviour therapy (CBT) for depression in older people would both usually involve several RCTs conducted by different researchers. However, human drug trials are further subdivided according to a classification scheme, as follows:

Phase I trials are small studies, usually in healthy volunteers, that examine the safety, tolerability and dosage of a new drug.

Phase II trials are somewhat larger studies in people with the condition of interest that provide preliminary evidence of efficacy and further information about safety.

Phase III trials are large RCTs, usually comparing the new drug with a placebo or an established treatment, that provide the definitive evidence of efficacy and safety required by regulatory authorities before licensing.

Phase IV trials are post-marketing studies that continue to monitor safety and effectiveness once the drug is in routine clinical use.


Because of the critical importance of RCTs, most medical journals now require the authors of papers describing the results of RCTs to have registered their RCT on a public access website prior to any people being recruited to the study. The US National Institutes of Health (NIH) clinical trials website (www.clinicaltrials.gov) and the Australian New Zealand Clinical Trials Registry website (www.anzctr.org.au) record information about all clinical trials regardless of whether they involve pharmaceutical agents or non-pharmacological interventions such as CBT. Scientific journals that publish the findings from clinical trials usually now require researchers to describe their research according to the CONSORT guidelines (Altman et al 2001).



Measures of statistical and clinical significance


As previously outlined, the scientific method requires the collection of data and the testing of hypotheses through either observational or experimental studies. Various numerical measures are used to report the results of hypothesis testing. These include p-values, the effect size, odds ratios, confidence intervals and the number needed to treat. The p-value conventionally represents the probability that a finding at least as large as the one observed could have arisen by chance alone, assuming there is no true effect. For example, a p-value of less than 0.05 indicates that, if there were truly no difference between the intervention and the control condition, a difference of this size or larger would be expected to arise by chance fewer than 5 times in 100.
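
As a purely illustrative sketch, the following Python fragment shows how a p-value might be obtained when comparing end-of-trial depression rating scores in two trial arms. The scores are invented, and the choice of an independent-samples t-test is an assumption made for the example rather than a recommendation.

```python
from scipy import stats

# Hypothetical end-of-trial Hamilton Depression Rating Scale scores
intervention = [10, 8, 12, 9, 7, 11, 10, 8]
placebo = [14, 15, 12, 16, 13, 15, 14, 17]

# Independent-samples t-test: the p-value is the probability of observing
# a difference at least this large by chance if there were no true
# difference between the two arms.
t_statistic, p_value = stats.ttest_ind(intervention, placebo)
print(f"t = {t_statistic:.2f}, p = {p_value:.4f}")
```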


Importantly, a p-value does not indicate the magnitude of the difference between an experimental intervention and a control condition. For this, we need an effect size (ES), often represented by Cohen’s d, which expresses the difference between two group means in standard deviation units. When using Cohen’s d, around 0.2 is conventionally regarded as a small effect, around 0.5 as a medium effect and 0.8 or greater as a large effect. A variety of other statistics may be used to represent the ES, but these will not be discussed here. Another way of representing the difference between two groups is the odds ratio (OR). As its name suggests, the OR is the ratio between two sets of odds. If the odds of a person aged 75 having depression are five in 100 (0.05) and the odds of a person aged 40 having depression are 20 in 100 (0.2), then the OR is 0.05/0.2, or 0.25. That is, the 75-year-old has one-quarter the odds of having depression compared with the 40-year-old.
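
As a sketch of how Cohen’s d might be computed from two sets of scores (the scores below are invented, and in practice the calculation would usually be done within a statistics package), consider the following Python fragment.

```python
import statistics

def cohens_d(group_a, group_b):
    """Cohen's d: the difference between two group means divided by the
    pooled standard deviation of the two groups."""
    n_a, n_b = len(group_a), len(group_b)
    var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
    pooled_sd = (((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)) ** 0.5
    return (statistics.mean(group_a) - statistics.mean(group_b)) / pooled_sd

# Hypothetical post-treatment depression scores (lower = fewer symptoms)
placebo = [14, 15, 12, 16, 13, 15, 14, 17]
intervention = [10, 8, 12, 9, 7, 11, 10, 8]

# These toy numbers produce an unrealistically large effect size (> 0.8)
print(f"Cohen's d = {cohens_d(placebo, intervention):.2f}")
```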


The confidence interval (CI) is used to estimate the uncertainty in a numerical finding. The wider a CI, the less certain the finding. In health and medical research a 95% CI is commonly used, indicating that if the study were repeated many times, the CIs calculated in this way would be expected to contain the true value 95 times out of 100. Thus, an odds ratio of 0.25 might have a 95% CI of 0.2–0.3 or a 95% CI of 0.01–2.5. The first CI is narrow, so one can be reasonably confident that the true OR lies close to 0.25. The second CI is wide, so the estimate of 0.25 is very imprecise and little confidence can be placed in it.
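
The following sketch shows how an OR and its approximate 95% CI might be calculated from a 2 x 2 table using the widely used log-odds (Woolf) method. The counts are invented for illustration and do not correspond to any real study.

```python
import math

# Hypothetical 2 x 2 table: depressed / not depressed, by age group
a, b = 5, 95    # aged 75: depressed, not depressed
c, d = 20, 80   # aged 40: depressed, not depressed

odds_ratio = (a / b) / (c / d)

# 95% CI calculated on the log-odds scale and converted back
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
lower = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
upper = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f}, 95% CI {lower:.2f} to {upper:.2f}")
```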


Another metric that is useful in human clinical trials is the ‘number needed to treat’ (NNT). The NNT can be defined as the number of people who need to be treated with an intervention for one additional person to respond who would not have responded to a placebo. It is calculated as the reciprocal of the absolute risk reduction, that is, the difference between the response rates in the intervention and control groups. Treatments with better efficacy have smaller NNTs. A related concept is the number needed to harm (NNH), the number of people who need to be treated for one additional person to experience a particular adverse effect.
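
A minimal sketch of the NNT calculation, using invented response rates:

```python
# Hypothetical response rates from a trial of an antidepressant
response_rate_drug = 0.45     # 45% responded to the drug
response_rate_placebo = 0.25  # 25% responded to placebo

absolute_risk_reduction = response_rate_drug - response_rate_placebo
number_needed_to_treat = 1 / absolute_risk_reduction
print(f"NNT = {number_needed_to_treat:.0f}")  # 1 / 0.20 = 5
```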


Measures of statistical significance often provide little information about the likely clinical significance of research findings. For this, we need to use a different approach. In determining clinical significance it is useful also to have data from people without the condition, so-called normative data. If a research finding is statistically significant and the intervention brings people with a particular clinical condition (e.g. a major depressive episode) back into the normal range on a clinical scale (e.g. the Hamilton Depression Rating Scale), then the findings are likely also to be clinically significant. Thus, moving a person from a pathological score to a normal score on a scale might suggest clinical significance. Paradoxically, statistical approaches have also been taken to estimate the clinical significance of change scores, using a metric called the Reliable Change Index (RCI) (Jacobson & Truax 1991). Alternatively, if a person satisfies diagnostic criteria (e.g. DSM–IV or ICD–10) for a major depressive episode before the intervention and does not meet these criteria after the intervention, it could be argued that they have achieved clinically significant change in their diagnostic status.
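
As a sketch only, the RCI of Jacobson and Truax (1991) compares an individual’s change score with the amount of change that could plausibly arise from measurement error alone. The baseline standard deviation and test–retest reliability below are invented; in practice these values would be taken from published data for the scale being used.

```python
import math

def reliable_change_index(pre_score, post_score, sd_baseline, reliability):
    """Jacobson & Truax (1991) Reliable Change Index.

    standard_error : measurement error attached to a single score
    sd_difference  : spread of change scores expected from error alone
    An absolute RCI greater than 1.96 suggests change beyond measurement error.
    """
    standard_error = sd_baseline * math.sqrt(1 - reliability)
    sd_difference = math.sqrt(2 * standard_error ** 2)
    return (post_score - pre_score) / sd_difference

# Hypothetical Hamilton Depression Rating Scale scores before and after treatment
rci = reliable_change_index(pre_score=24, post_score=8,
                            sd_baseline=6.0, reliability=0.85)
print(f"RCI = {rci:.2f}")  # well beyond -1.96, suggesting reliable improvement
```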
