Conscious and unconscious competency

Clinical research and the placebo effect

Trial design and the randomized controlled trial

Critical review of the literature

Clinical governance and audit

Capacity and consent

Principles of teaching and learning.

This chapter aims to describe key knowledge that, while not directly involved in clinical work, is essential for expert clinical practice and therefore should be regarded as equivalent to basic sciences. Like the basic sciences, this knowledge underpins our clinical practice and guides us when faced with new situations.

Surgery and medicine rely on research and innovation to drive improvements in clinical practice. As an expert surgeon, you must know how to assess these innovations and be able to judge objectively whether or not you should incorporate any apparent improvements into your clinical practice.

A common public assumption is that all medical treatment is beneficial. This may be a marker of the success of modern medicine, but some of the major advances of the last 50 years have been the acknowledgement that medical treatments cause harm as well as benefit, and the development of ways of quantifying the relative risks and benefits of treatments. The most recent manifestation of this trend is the current focus on patient safety—preventing harm to patients within the healthcare system.

Another recent trend is an increased focus on the patient as a consumer. There has been a shift towards healthcare provision structured around patient needs rather than simply providing the healthcare that healthcare workers think is best. There are costs and frustrations associated with this approach, but advantages in patient satisfaction and flexibility of healthcare provision. Healthcare evolves to solve patients’ problems rather than the other way round.

For patients to be actively involved in their healthcare, they need to be informed consumers. Therefore part of the healthcare process is to educate the patient about the options available. Patients now have ready access to plentiful information, most of which is not peer reviewed, and may be covert advertising. This can result in health ‘wants’ being confused with health ‘needs’. Surgical practice needs to evolve to meet these challenges.

The problem with structuring healthcare around patient demands is that healthcare, unlike buying a meal or holiday, does not usually result in an immediate objectively good or bad outcome. If you asked a doctor what treatment they would want from a surgeon, they would want a safe, experienced surgeon and anaesthetist operating in a good environment with motivated conscientious nursing staff. They would be in a fairly good position to judge this.

Patients have a limited number of surrogate measures on which to base their assessment of quality. Current measures proposed to define ‘good’ healthcare in the United Kingdom (UK) include time on the waiting list, hospital cleanliness, and ‘nurse empathy’. Tables of operative results enable comparison of the quantity and timeliness of operations, but not necessarily the outcomes for the patients. Government ratings may have little or no relationship to the outcomes that actually matter most to the patient or society, e.g. freedom from pain, employment, and good quality of life.

In trying to measure healthcare quality, there is a technical aspect (expertise, outcome), an interpersonal aspect (attitude, behaviour), and a service aspect (accessibility, environment). As patients usually cannot objectively judge the technical aspects, they tend to base their decisions on the interpersonal and service aspects, which are more amenable to measurement.

The technical aspects such as safety and competence tend to be taken for granted, which is unfortunate because the technical aspects are usually the major determinant of long-term outcome. Moreover, the really important outcomes are difficult to measure. For example, there is a long time lag between a poorly sited or poorly designed joint replacement and its ultimate failure. For these reasons, poor quality work can appear cost-effective if not evaluated carefully, and in a way that considers all costs.

In summary, healthcare quality measurement is vitally important, but often heavily flawed because of the temptation to use surrogate measures that are politically expedient or easy to measure, rather than concentrating on the outcomes that actually matter.

Performing an intervention such as an operation will have different results in different people. Humans are not linear predictable systems, which is probably why they have survived so long.

There are many different factors that contribute to the operative outcome: type of operation, preoperative function, risks from the operation, surgical skill, and risks from the anaesthetic. The contribution of these factors can be estimated from analysis of different populations of patients who all undergo the treatment, but this would ignore factors unique to that patient and that surgeon.

Imagine that you are the patient. You don’t really care about all this. You want to know: ‘Will this intervention, performed by this person, benefit or harm me, and by how much?’ (Box 1.1.1).

Box 1.1.1
Patients’ questions you should be able to answer

What is the best treatment?

How can you be sure?

What does this test tell you?

How can you be sure?

How can I be sure you make good clinical decisions?

Why should I undergo this procedure?

Competence is being able to answer these questions, but expertise is demonstrated by the ability to defend your decision—to say why. This chapter aims to cover the foundation skills necessary for this expertise.

To answer this, and be legitimately reassuring, it is necessary to demonstrate that:

You and your anaesthetic and nursing team are safe and competent to carry out the operation and its aftercare

The operation is appropriate for that patient and condition

The likely risks are outweighed by the likely benefits.

Only when these conditions are fulfilled is the risk of harm to the patient minimized. Harm may occur in different forms: medical, financial, and social.

Medical harm is the harm directly resulting from the treatment, and may be physical or psychological.

The notion of medical harm has been crystallized in the minds of the UK public by cases such as Harold Shipman and paediatric cardiac surgery in Bristol. Harold Shipman was a general practitioner who murdered approximately 200 patients by giving lethal doses of opiates. Some cardiac surgeons in Bristol in the 1990s continued to perform complex paediatric operations despite overwhelming evidence of very poor results, heroically documented by an anaesthetist, Stephen Bolsin, at great personal and professional cost.

These cases differ in that Shipman set out to harm patients (active harm), whereas in Bristol the harm was not intended (passive harm); nevertheless, both sets of events have been seen by the establishment as indicators of inadequate self-regulation by the medical profession. Steps such as revalidation and competence-based assessments have been introduced in response.

Revalidation is designed to identify poorly performing doctors and ensure that they are retrained. It is very expensive to perform if such assessments are to be both valid and reliable, and it is likely to evolve into a system of re-taking one’s exit examination every 5 years, as in the United States of America (USA).

Continuing Medical Education is an important part of ensuring continuing competence, but the providers of such education often have ulterior motives, and the end product (any benefit to patients) is not measured, just the process.

In addition to compulsory revalidation, league tables of mortality offer a superficial view of competence; they are being implemented for some surgical procedures in the UK and have been available in the USA for some time.

As doctors, we need to recognize both the ageing of our knowledge, and the changing nature of our jobs. Specialties such as cardiothoracic and vascular surgery have changed completely in 10 years, and technology such as robotics will have an increasing role in trauma and orthopaedic surgery. It is likely that periodic retraining will be a normal expectation and will be written into job plans in the future.

By medicalizing a normal human condition, it is possible to create ‘diseases’ where none previously existed. This is obviously good news if you have a product that cures the ‘disease’. The widespread use of antidepressants for a rather hazy definition of ‘depression’ medicalized many thousands of people who may have had no more than transient unhappiness. Direct-to-consumer advertising, allowed in some countries, encourages consumers to self-diagnose and approach doctors for specific branded products. These behaviours encourage expenditure on health ‘wants’ rather than health ‘needs’, which disadvantages the poor and less articulate and results in financial harm (see following section).

Back pain is an enormous social burden. In the UK, 50 million workdays are lost each year and 500 000 people are on long-term incapacity benefits for back pain. The advent of magnetic resonance (MR) scanning has enabled everyone to have a diagnosis, but it is likely that this validation of non-specific ‘abnormalities’ has not benefited patients or society.

There is direct financial harm to the patient and society in every illness through lost earnings and inability to participate in social activities. However, in a single healthcare provider system like the UK National Health Service (NHS), there is also an opportunity cost to every treatment.

For example, if one spends X pounds on drugs for patients with Alzheimer’s disease, then those X pounds cannot also be spent on hip prostheses: this is the opportunity cost. The decision about which treatment is the best option is laden with value judgements. Central questions are:

Does the treatment work as claimed?

What benefit does the patient or society gain from the treatment?

What is the overall cost of the treatment, and what is the cost of not doing the treatment?

Box 1.1.2
Conscious competency—the development and ageing of knowledge (Figure 1.1.1)

One starts as a medical student, unaware of how much one does not know—the unconscious incompetent. An awareness of the breadth and depth of knowledge necessary gradually dawns over the course of medical school, making one conscious of one’s incompetence. Through hard work, study, and practical experience in the early postgraduate years, conscious competence is achieved. As the knowledge becomes internalized and practice becomes automatic, we become the unconscious competent—we can do the job, but we don’t always know why we do what we do.

The danger of unconscious competence is that it is easy for this to slip into unconscious incompetence—a process termed ‘occupational senility’. Continuing Medical Education (CME)/postgraduate education helps us to find areas in which we are incompetent and remedy this. Educational skills such as teaching juniors and colleagues ensure that we remain aware of why we do what we do—nothing tests your understanding of a subject as well as having to explain it to someone else.

Fig. 1.1.1 Conscious competency.

Performing these calculations necessitates value judgements on quality of life, which inevitably have a subjective element.

Engaging in futile or damaging treatment not only causes direct damage and waste, but also deprives another patient of an effective treatment. This makes the continuing presence of homeopathic hospitals funded by the NHS highly illogical, as anything that cannot prove superiority over placebo is just money diverted away from effective healthcare (Box 1.1.3).

Box 1.1.3
Medical business model

To understand the behaviour of device and pharmaceutical companies, it helps to understand their business model.

Developing and testing a new drug or device to the stage when it can be licensed is a calculated high-stakes gamble, costing more than US $1 billion for a new class of drug. There is a substantial risk that defects in the product may be found that will harm some patients. This can lead to not only the loss of the market for that product, but, if handled badly, major damage to the company that makes the product.

Patent laws for drugs and devices give the manufacturer 20 years’ protection from other direct copying, although similar (‘me-too’) products may be allowed to compete. Unfortunately the time to develop and test medical products is rarely shorter than 10 years for a drug, although less for a device. The good news is that once a device/drug is approved for sale, it is generally cheap to manufacture and distribute.

The positive side of this business model is that it favours innovative companies, which should ultimately help patients. An unintended consequence is that once a company has a product licensed, any sales are effectively free money as the cost of production is very low, and the research and development has already been done—it is a ‘sunk cost’—one that can never be recovered. This creates intense pressure to sell as much of the ‘new’ product as possible, whether or not it is a true advance compared to previous products.

From a company’s point of view, it is vital to recoup its initial outlay in research and development as soon as possible, especially if there is a risk that the product may turn out not to be as good as everyone had previously thought, e.g. COX-2 inhibitors, ceramic joint replacements.

The pressure for sales results in an uneasy fusion of marketing and research in the later stages of development and testing. As these trials are industry sponsored, conflicts of interest may arise in the trial design and execution. One cannot be too sceptical when critically appraising such ‘research’. Pharmaceutical companies spend twice as much on marketing as on research and development.

The placebo (Latin for ‘I will please’) effect is more than just the explanation for the apparent success of the ‘snake oil’ peddled by the quack. It is part of the psychosocial interaction present in any therapeutic encounter. It is heterogeneous, and has an interesting history.

The placebo effect is governed by several factors, which appear to be additive:

The patient: the patient’s expectations of success influence the results, e.g. asthmatic patients experience symptomatic relief when using inert inhalers they believe to be bronchodilators

The doctor: playing the role of an enthusiastic ‘heroic rescuer’ is associated with the highest placebo response, probably due to raising patient expectations. In alternative medicine, the lack of a scientific brake to this runaway enthusiasm goes some way to explain why ineffective treatments ‘work’

There is also likely to be a credibility factor: this can be exploited by looking like the ‘consultant’ in a private healthcare advertisement—the usual cliché of being [male], avuncular, greying, white-coated, with some half-moon spectacles

The doctor–patient interaction: patients with indeterminate symptoms respond better to positive behaviour and a definite diagnosis rather than uncertainty. If the patient has a significant investment, for example, by paying for the service, this is likely to increase their expectations of success

The disease: chronic diseases with a fluctuating course, subjective symptoms with no objective pathology, anxiety, and common self-limiting diseases are all associated with a high placebo response

Treatment and setting: tablets four times a day have a higher placebo effect than those twice a day (although possibly less compliance), and red tablets are more effective than white. Elaborate rituals and complex devices increase your chance of a good placebo effect, although this on its own does not justify an MR scan. Surgery itself has a marked placebo effect, demonstrated by the improvement experienced from a sham operation by Parkinson’s disease sufferers in a stem-cell trial, and the efficacy of sham acupuncture.

It is sometimes argued that it does not matter what method is used, as long as the patient gets better. This sounds plausible, but if one is spending public funds on healthcare, one has a duty to ensure best value. Therefore treatments that cannot prove their effectiveness over a placebo should not be publicly funded.

By understanding and enhancing the placebo value in medicine, one can maximize patient satisfaction and the chances of patient compliance, both of which are likely to benefit your patient.

In the 1960s, prior to gastric acid-suppression drugs, gastric freezing was commonly performed. By freezing the gastric mucosa, acid-producing cells were damaged and acid formation was suppressed, allowing the ulcer to heal and the patient to be cured.

In an era where acid suppression drugs are cheap and effective, it is difficult to understand the disease burden of peptic ulceration to patients and society in general. It was found that reducing the temperature of the gastric mucosa to 5–10°C markedly reduced acid secretion. The exciting new technique of freezing the gastric mucosa was, in effect, a miracle cure for many patients.

In 1962, a trial of 24 patients by a past president of the American College of Surgeons showed that all had relief of symptoms of duodenal ulcers 6 weeks after having had a gastric freezing operation. More than 2500 gastric freezing machines were sold, and hundreds of thousands of freezings were carried out. Patient satisfaction was high, and a review of the procedure commented that ‘the method is easy and the clinical results are astoundingly good’.

Although more than 100 articles about gastric freezing were produced in the following years, it was not until 1969 that a placebo-controlled trial of gastric freezing was published. This study, with approximately 75 patients in each arm, had sufficient power to detect a 20% difference between the treatments, if one existed. There was no difference between the treatment and control groups.
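The power calculation behind such a statement can be illustrated with a simple sketch. The example below uses the standard normal-approximation formula for comparing two proportions; the assumed relief rates (65% with sham treatment versus 85% with active treatment, a 20% absolute difference), the 5% significance level, and the 80% power target are illustrative assumptions rather than figures from the original trial.

```python
# A minimal sketch of a two-proportion sample-size calculation (normal approximation).
# The assumed response rates and the 80% power target are illustrative only.
from math import sqrt, ceil
from statistics import NormalDist

def n_per_arm(p1: float, p2: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate patients needed per arm to detect the difference p2 - p1."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)      # two-sided significance threshold
    z_beta = z(power)               # quantile corresponding to the desired power
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

# e.g. 65% symptomatic relief with sham freezing vs 85% with active treatment:
print(n_per_arm(0.65, 0.85))   # about 73 per arm, broadly consistent with the ~75 quoted above
```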

In short, a lot of time, energy, and money had been expended on a placebo treatment. In addition, active harm was done to patients—fatalities occurred in patients undergoing the treatment—and the financial harm was immense. Without the insight that properly conducted trials provide, doctors are little more than purveyors of snake-oil (Figure 1.1.2).

Fig. 1.1.2 A product whose only benefit can be a placebo effect.

The purpose of a trial is to prove whether one treatment is better than another. What sort of trial you do depends on what you are trying to prove:

A cohort study involves following groups of patients for a period of time and seeing what happens to them. For example, seeing how long different sorts of hip replacement last in different patients

A case–control study involves finding asymptomatic patients who match your symptomatic ones and seeing how they compare. For example, to understand the pathophysiology of osteoarthritis of the hip, you might find symptomatic patients and compare them with matched patients (age, weight, activity) who are not symptomatic

A controlled trial is one where the active treatment (the one you are testing) is compared with either an inactive one (a placebo) or another sort of treatment, e.g. the current standard. Comparing one sort of hip replacement with another would be a controlled trial

A ‘blinded’ trial is one where either the patient or the researcher does not know which treatment that patient is receiving. A ‘double-blinded’ trial occurs when neither the researcher nor the patient knows which treatment the patient is receiving. Blinding reduces the risk of bias and this is an important part of trial design

A randomized trial is one where the patients are randomly allocated to different treatments (Box 1.1.4). While the process of randomization is important, even more important is that the allocation of patients is concealed from the researcher, e.g. by using sealed opaque envelopes; a minimal sketch of such a concealed allocation scheme follows this list.
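As a concrete illustration, the sketch below shows one way a balanced, concealed allocation list might be generated. It is a simplified, hypothetical example: real trials use dedicated randomization services, and the block size, random seed, and group labels here are arbitrary choices.

```python
# A minimal sketch of block randomization with concealed allocation. In practice this
# list would be generated and held by someone independent of recruitment, and revealed
# one allocation at a time (the electronic equivalent of sealed opaque envelopes).
import random

def block_randomization(n_patients: int, block_size: int = 4, seed: int = 2024) -> list[str]:
    """Generate a balanced treatment/control allocation list in shuffled blocks."""
    rng = random.Random(seed)
    allocations: list[str] = []
    while len(allocations) < n_patients:
        block = ["treatment"] * (block_size // 2) + ["control"] * (block_size // 2)
        rng.shuffle(block)
        allocations.extend(block)
    return allocations[:n_patients]

sealed_envelopes = block_randomization(150)
# The recruiting researcher only 'opens' the next envelope after the patient has consented:
print(sealed_envelopes.pop(0))
```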

Box 1.1.4
Austin Bradford-Hill (1897–1991)

Sir Austin Bradford-Hill (Figure 1.1.3) pioneered the use of the randomized controlled trial in the assessment of streptomycin for the treatment of tuberculosis. Originally used in agricultural experiments, the randomized controlled trial is now the benchmark by which all interventional medical research is judged.

Fig. 1.1.3 Sir (Austin) Bradford (‘Tony’) Hill, by Godfrey Argent. © National Portrait Gallery, London.

Trial design is more important than trial analysis: if the design of a trial is flawed, no amount of fancy analysis can remedy this. It is vital that you do some background reading and talk to someone with experience of trial design before you start collecting data. Any trial involving human subjects also needs ethical review before it can start.

A literature search should be performed to see whether anyone has already done the trial, but also to generate ideas about how best to perform the trial.

When thinking about how many patients you will need for your trial, you need to consider incidence and prevalence. Incidence is the number of new cases of a condition arising over a given period, e.g. fractured neck of femur. Prevalence is the number of people with a condition at a particular point in time, e.g. back pain. If your disease is uncommon, recruiting a sufficient number of patients to adequately power the study will be difficult.
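A rough feasibility calculation at this stage can save a great deal of wasted effort. The sketch below is illustrative only: the incidence, eligibility fraction, and target sample size are invented numbers.

```python
# A back-of-the-envelope recruitment estimate: how long will it take to reach the
# sample size the power calculation demands? All figures here are hypothetical.
def recruitment_years(target_n: int, annual_incidence: int, enrolment_fraction: float) -> float:
    """Years of recruitment needed given new cases per year and the fraction actually enrolled."""
    return target_n / (annual_incidence * enrolment_fraction)

# e.g. a trial needing 150 patients, a unit seeing 300 fractured necks of femur per year,
# and 20% of cases eligible and consenting:
print(f"{recruitment_years(150, 300, 0.20):.1f} years")   # 2.5 years
```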

Any trial will need ethical approval. All NHS Trusts and universities have a structure of ethics committee(s) for approving and monitoring clinical research. These committees are charged with ensuring that all research is of good quality and safeguards the rights of the patients and the reputation of the institution.

Informed patient consent is, excepting a few defined hyperacute situations, an essential component of any research, and the patient consent forms and accompanying information will be carefully scrutinized by the ethics committee.

Bias occurs when you fail to measure what you intend to (or should) measure. This may happen for a variety of different reasons. An example might be a satisfaction survey of patients seen in outpatients following hip surgery: you might achieve a very good satisfaction score, but by limiting your sample to those who are able to attend your outpatient clinic, you may not obtain an accurate reflection of the true picture. Bias is therefore the inclusion of an error that distorts the trial in a non-random way (Box 1.1.6).

Box 1.1.6
Common forms of bias

Selection bias: the patients used may not be representative of the usual patient population group. The sickest patients are often excluded from trials

Observer bias: there may be (unconscious) bias in the assessment by observers—minimized by blinding and concealment

Measurement bias: flawed measurement may favour one group over another

Confirmation bias: the tendency to look for factors that confirm rather than disprove our ideas

Publication bias: trials with negative results are less likely to be published.

Confounding occurs when you fail to measure or spot an extraneous factor that accounts for your findings. Say you had a group of patients in whom you measured premature failure of hip prostheses and doughnut consumption, and found a strong relationship. Does this mean that doughnuts cause prosthesis failure? Or might it be that there is another variable that you have not measured that would explain this—a confounding factor?
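The doughnut example can be made concrete with a small simulation. The sketch below is entirely hypothetical: a sedentary lifestyle is invented as the unmeasured confounder that drives both doughnut consumption and early prosthesis failure, so a crude comparison shows an association that disappears once the confounder is taken into account.

```python
# A toy simulation of confounding with invented probabilities: failure depends only on
# lifestyle, yet doughnut eaters appear to have more failures until we stratify.
import random

rng = random.Random(1)
patients = []
for _ in range(10_000):
    sedentary = rng.random() < 0.4                           # unmeasured confounder
    doughnuts = rng.random() < (0.6 if sedentary else 0.2)   # confounder drives exposure
    failure = rng.random() < (0.15 if sedentary else 0.05)   # and drives the outcome
    patients.append((sedentary, doughnuts, failure))

def failure_rate(rows):
    return sum(failed for _, _, failed in rows) / len(rows)

eaters = [p for p in patients if p[1]]
non_eaters = [p for p in patients if not p[1]]
print("Crude comparison:", round(failure_rate(eaters), 3), "vs", round(failure_rate(non_eaters), 3))

for label, stratum in (("Sedentary:", True), ("Active:   ", False)):
    rows = [p for p in patients if p[0] == stratum]
    e = [p for p in rows if p[1]]
    n = [p for p in rows if not p[1]]
    print(label, round(failure_rate(e), 3), "vs", round(failure_rate(n), 3))
```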

Bias can be minimized by blinding, randomization, and concealment, but it is not always possible to randomize and blind in surgery.

The Hawthorne effect is named after experiments done at the Hawthorne factory in the 1920s when factory workers were assembling relays for telecommunications equipment (Figure 1.1.4). Researchers looking for the optimal conditions for factory work increased the ambient light and found that production rates improved. They then reduced the ambient light and found that production rates improved again. Eventually the researchers deduced that the presence of the researchers was the over-riding influence on the workers’ performance. The Hawthorne effect, therefore, is that just by measuring something, one changes it. Having a control group ensures that the Hawthorne effect applies to both groups and so does not distort the comparison.

Fig. 1.1.4 Image of Hawthorne workers. From Gale, E.A. (2004). The Hawthorne studies – a fable for our times? Quarterly Journal of Medicine, 97, 439–49.

Box 1.1.5
Ethical principles

Beneficence: do good

Non-maleficence: avoid doing harm

Autonomy: respect for patients’ wishes

Justice: fairness.

The consumer of knowledge can never know what a dicky thing knowledge is until he has tried to produce it.

F.J. Roethlisberger, investigator at Hawthorne.

Objective or ‘hard’ outcomes, such as death, are preferred in research as there is little debate about their significance. Subjective or ‘soft’ outcomes, such as a change in a pain score, may not be valid or reliable across different populations of patients. If a validated measure of outcome is available, this makes design easier. If any of the measures used are in any way subjective, e.g. fracture angulation or code, it is best to have more than one independent observer and measure agreement between these observers.
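One common way of measuring agreement between two independent observers on a categorical judgement is Cohen’s kappa, which corrects raw agreement for the agreement expected by chance. The sketch below uses invented gradings of ten radiographs as ‘acceptable’ or ‘malaligned’.

```python
# A minimal sketch of inter-observer agreement using Cohen's kappa. The gradings are invented.
from collections import Counter

def cohens_kappa(obs_a: list[str], obs_b: list[str]) -> float:
    """Chance-corrected agreement between two observers rating the same cases."""
    assert len(obs_a) == len(obs_b)
    n = len(obs_a)
    observed = sum(a == b for a, b in zip(obs_a, obs_b)) / n
    count_a, count_b = Counter(obs_a), Counter(obs_b)
    expected = sum(count_a[c] * count_b[c] for c in set(obs_a) | set(obs_b)) / n ** 2
    return (observed - expected) / (1 - expected)

surgeon_1 = ["acceptable", "acceptable", "malaligned", "acceptable", "malaligned",
             "acceptable", "acceptable", "malaligned", "acceptable", "acceptable"]
surgeon_2 = ["acceptable", "malaligned", "malaligned", "acceptable", "malaligned",
             "acceptable", "acceptable", "acceptable", "acceptable", "acceptable"]
print(round(cohens_kappa(surgeon_1, surgeon_2), 2))   # about 0.47: moderate agreement
```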

If the ‘hard’ measure is rare or difficult to measure, then surrogate measures may be used. Death due to pulmonary embolism following joint replacement is quite rare, but using deep vein thrombosis (DVT) as a surrogate is fraught with difficulty as DVT is very common, but when limited to below the knee, does not seem to be associated with adverse outcomes.

The gold standard for pharmaceutical research is the double-blind placebo-controlled randomized controlled trial (RCT). RCTs are also the gold standard in surgery, and provide a definitive answer, e.g. for arthroscopic surgery in osteoarthritis. However, RCTs of any sort are uncommon in surgery. This is partly because in order to ensure the full placebo effect in the control group, one must perform a sham operation, and this is ethically and practically difficult. A pragmatic alternative is to randomize based on expertise—if there are two groups of surgeons available with expertise in variants of a surgical procedure, one can randomize patients between these two groups.

Another technique that is used to deal with the difficulty in conducting large RCTs is to use meta-analysis to combine the results of similar small trials.

In an ideal world, all medical and surgical practice would be based on evidence from well-conducted, multicentre, double-blinded, randomized placebo-controlled trials. Unfortunately such trials are very expensive to organize, and therefore are only performed to answer the most large-scale health questions, e.g. the CRASH (Corticosteroid Randomisation After Significant Head injury) trial and the CRASH-2 trial of tranexamic acid in patients with major haemorrhage from trauma.

Meta-analysis is a structured review of all the trials relating to a particular research question. It is used when there are many small trials that relate to a research question, but none of them have the power to resolve the question on their own. The quality of evidence from each trial is weighted in a structured and predetermined way to allow a definitive assessment to be made. Meta-analysis may have the power to quantify benefit or harm that smaller trials may not be able to do, particularly if these effects are uncommon.
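The weighting idea can be illustrated with fixed-effect, inverse-variance pooling of effect estimates, one standard approach among several. The odds ratios and standard errors below are invented for illustration.

```python
# A minimal sketch of fixed-effect meta-analysis: each trial's log odds ratio is weighted
# by the inverse of its variance, so more precise trials count for more. Numbers are invented.
from math import log, exp, sqrt

trials = [(0.80, 0.30), (0.70, 0.25), (0.95, 0.40)]   # (odds ratio, SE of log odds ratio)

weights = [1 / se ** 2 for _, se in trials]
pooled_log_or = sum(w * log(or_) for (or_, _), w in zip(trials, weights)) / sum(weights)
pooled_se = sqrt(1 / sum(weights))

low, high = exp(pooled_log_or - 1.96 * pooled_se), exp(pooled_log_or + 1.96 * pooled_se)
print(f"Pooled OR {exp(pooled_log_or):.2f} (95% CI {low:.2f} to {high:.2f})")
```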

Careful judgement is necessary to combine the results of several trials in a way that does not prejudice the final result. The Cochrane Collaboration (Box 1.1.7) is an independent organization that organizes and validates meta-analyses of clinical questions.

Box 1.1.7
Archie Cochrane (1909–1988)

The Cochrane Collaboration (http://www.cochrane.org) is named after Professor Archibald Cochrane (Figure 1.1.5). Regarded as the father of evidence-based medicine, he firstly trained as a laboratory scientist, and then studied medicine before the Second World War.

Fig. 1.1.5 Professor Archibald Cochrane.

Through his work on infectious disease he became interested in epidemiology and wrote an influential report on how to improve the NHS.

The report’s stark logic championed the use of the RCT to ensure the best use of NHS funding. The book had a profound effect on medical thinking and policymaking.

In the 1990s, intravenous colloid fluids (e.g. Haemaccel, Gelofusine) were widely used for resuscitation as they were effective at improving a patient’s blood pressure quickly, and the large protein molecules did not leak out of damaged capillaries as quickly as crystalloid. The downside was that once they did leak out, they tended to cause resistant tissue oedema, which is particularly harmful if the tissue involved is the brain.

In 1998 a meta-analysis performed by the Cochrane Collaboration suggested that colloids gave no benefit over crystalloid fluids, e.g. saline, and appeared to cause harm (up to 5% increase in mortality) in critically ill patients. This, together with their high cost (20 times that of crystalloid), is why colloids are not now used in resuscitation.

The volume of medical literature is vast—there are more than 40 000 biomedical journals, and this number doubles every 20 years. The quality does not. How can you filter this soup of raw data? How can you work out whether something is true or not? If true, can you apply this new knowledge to your daily practice?

The Bulstrode criteria for medical literature are short and sharp:

Do I understand it? What is the author claiming?

Do I believe it? Are there significant flaws in the design, execution, or analysis?

Do I care? Is this relevant to my practice?

While the first and last of these depend on subjective judgements, the key skill is the ability to read a paper, spot flaws, and be able to judge whether it should change your clinical practice.

Professor David Sackett, one of the originators of evidence-based medicine, developed a set of criteria that expands on the Bulstrode criteria to help you evaluate whether a trial is valid and applicable to your practice (Box 1.1.8).

Box 1.1.8
Sackett’s tests
Was the assignment of patients really random?

If patients were not assigned to treatment and control groups randomly, and/or the assignation not concealed adequately from those treating and assessing the patient, then it is likely that an unacceptable level of bias will be introduced.

Were all clinically relevant outcomes reported?

If, in a purely hypothetical situation, a new treatment centre starts up next to your hospital and its surgeon malaligns the hip prostheses, a subjective measure such as patient satisfaction on discharge might be good, but a measure of important long-term function such as return to employment or time to failure/revision might be poor.

Were the study patients similar to your own?

If the patients entering the study were markedly different from your own patients, it may be that the conclusions of the study might not be relevant to your situation. A shoulder reduction technique that has been validated on Japanese patients with an average body mass index of 20 may be less helpful when you are faced with a 200-kg Hell’s Angel with their first dislocation.

Were the results clinically important?

If a difference was demonstrated by this treatment, was the difference large enough and important enough to justify changing your practice?

Is the therapeutic manoeuvre feasible in your practice?

It is necessary to think about whether the therapy being tested is viable in your clinical practice. Even if not, you may be able to plan to include it in a future service expansion.

Were all patients who entered the study accounted for at its conclusion?

If one was running a trial on behalf of a commercial sponsor, and some patients had inconvenient results, some people might be tempted to label these patients as ‘lost to follow-up’ and discount them from the analysis. Trials published now should include a CONSORT diagram: this is essentially a flowchart that demonstrates what happened to all the patients entered into the trial (http://www.consort-statement.org). Reputable trials are prospectively registered with published protocols (http://www.controlled-trials.com).

The conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients.

Sackett et al. (1996).

The term ‘evidence-based medicine’ was coined by a group of physicians at McMaster University, Ontario, in the 1980s. The aim was to encapsulate the information search and analysis techniques that had been learned from epidemiology in a form that could be used at the bedside. Although this may seem like second nature now, this was in an era before the Internet, PubMed, Google Scholar, and online journals.

One of the problems in traditional medical practice is that doctors tend to carry on doing the things they were taught in medical school. One of the arguments against old-style medical school curricula was that they did not teach the skills for life-long learning. The integration of the science behind ‘evidence-based’ practice into the medical school curriculum has ensured that future generations of doctors will be able to identify and challenge outdated or dangerous practice from an objective viewpoint.

With the many claims and counterclaims of patients, healthcare providers, researchers, and pharmaceutical and device manufacturers, one needs a league table to be able to work out which evidence trumps which (Table 1.1.1).

Table 1.1.1
Levels of evidence for interventional studies

Level   Therapy
1a      Systematic review of similar RCTs
1b      Single RCT of good quality
1c      All or none (e.g. all treated patients cured/died)
2a      Systematic review of similar cohort studies
2b      Single cohort study
2c      ‘Outcomes’ research; ecological studies
3a      Systematic review of similar case–control studies
3b      Single case–control study
4       Case series, poor-quality cohort, and case–control studies
5       Expert opinion, or based on physiology or laboratory research

It is all very well having found and assessed the best evidence, but you have been wasting your time if you cannot put it into practice. Approaching people and telling them that their practice is out of date and/or dangerous does not always result in the desired outcome (see Bristol, discussed under Medical Harm earlier).

Unless there is a pressing urgency, e.g. ongoing risk of significant harm to patients, it is necessary to prove that there is a problem with the current system. The way to do this is through audit. Audit is different from research in that audit is ensuring optimal usage of current knowledge, whereas research is about creating new knowledge.

An audit is performed to measure the organization’s performance against agreed standard measures of performance. These might be international or national guidelines (e.g. National Institute for Health and Clinical Excellence, NICE), ‘best practice’, or College standards.

Guidelines are advisory summaries of evidence that are used to guide treatment. They are not supposed to be rigidly interpreted, but if one is going outside their advice, one should have good reasons. Protocols are designed to be rigid: they are standardized routines that can be used to streamline and minimize defects in a process. Both guidelines and protocols have inherent dangers in that they tend to stop people thinking about what they are doing, and may stifle innovation.

Any guideline or protocol must be both exhaustive and exclusive. It must cover all eventualities that will occur, and must reliably exclude other conditions that may look similar. Many protocols, elegantly simple in their first draft, can become so complex as to be unworkable when revised after testing. This is a measure of the inherent complexity within healthcare, which is why we need doctors who have the broad range of knowledge and skills to manage these problems, rather than just technicians.

Make everything as simple as possible, but no simpler.

Albert Einstein.

Audit is the process by which we ensure that we are meeting the standards that we set ourselves. Audit differs from research in that research is looking for new knowledge. Audit is about ensuring that present knowledge is being used in the most productive fashion. Audit may be used to generate and refine research questions, but is usually used in the context of a quality improvement framework.

Audit can be prospective, but is usually retrospective—at least in the first instance. For example, to audit time to theatre for open fractures, an easy way would be to collect all patients with a discharge diagnosis of open fracture. However, this would miss patients who had died or been transferred. A better way would be to prospectively collect the data on patients who come in through the Emergency Department with open fractures.
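The measurement step of such an audit is straightforward once the data are collected prospectively. The sketch below is illustrative: the three records and the 24-hour standard are invented for the example, not a statement of current guidance.

```python
# A minimal sketch of the measurement step of an audit loop: time from arrival in the
# Emergency Department to theatre for open fractures, compared against an agreed standard.
# The records and the 24-hour standard are illustrative only.
from datetime import datetime

records = [  # (patient id, arrival in ED, arrival in theatre)
    ("A1", datetime(2010, 3, 1, 22, 15), datetime(2010, 3, 2, 10, 5)),
    ("A2", datetime(2010, 3, 3, 4, 40),  datetime(2010, 3, 4, 9, 30)),
    ("A3", datetime(2010, 3, 5, 13, 0),  datetime(2010, 3, 5, 19, 45)),
]

STANDARD_HOURS = 24
delays = {pid: (theatre - ed).total_seconds() / 3600 for pid, ed, theatre in records}
met_standard = sum(hours <= STANDARD_HOURS for hours in delays.values())

for pid, hours in delays.items():
    print(f"{pid}: {hours:.1f} h to theatre")
print(f"{met_standard}/{len(records)} met the {STANDARD_HOURS}-hour standard")
```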

Types of audit:

Basic clinical audit: throughput, morbidity, mortality, outcome, adverse events

Patient audit: satisfaction survey

Notes review

Benchmarking: comparison between units, e.g. TARN (Trauma Audit and Research Network—http://www.tarn.ac.uk)

National audits: e.g. Trauma: Who cares? (2007) (National Confidential Enquiry into Patient Outcome and Death, http://www.ncepod.org.uk)

Audit is an important part of Clinical Governance, being the way that clinicians measure their own performance against their peers, and also a structured way of demonstrating necessary service development. Audit data can be compared against agreed standards/national guidelines e.g. NICE/Scottish Intercollegiate Guidelines Network (SIGN)/‘best practice’ from the literature or College.

Fig. 1.1.6 The audit loop.

The most difficult part of the audit loop is usually making the change, and this is why audit usually takes place within the framework of Clinical Governance. This provides the oversight and a mechanism for pushing through the process changes necessary. If the audit requires a questionnaire, time spent researching the format and construction of questionnaires will substantially increase the quality of the information garnered.

This rather nebulous term is derived from the notion of Corporate Governance, used in business. Corporate Governance is the process of ensuring that the people who run the company (senior management) act in accordance with the interests of the people that own the company (the shareholders, represented by the Board).

Clinical Governance, on the other hand, is the process of showing that the work you do is as close to best practice as is feasible, and is therefore a quality assurance process. It was defined as:

A framework through which NHS organisations are accountable for continuously improving the quality of their services and safeguarding high standards of care by creating an environment in which excellence in clinical care will flourish.

Scally and Donaldson (1998).

From a clinician’s viewpoint, Clinical Governance is important because, apart from clinical adverse incident reporting, it is the only forum in which to demonstrate and resolve problems with clinical service delivery. Clinical Governance incorporates what were formerly Morbidity and Mortality meetings, but should be proactive: identifying problems before they result in significant patient harm, incorporating best practice, and providing a framework for service development and quality assurance.

Your decision as to whether to offer a surgical solution to a patient’s problem is based on clinical diagnosis and investigations. An expert should have insight into their own process of assessment, to be able to teach others but also to understand how and why things go wrong.

Much time, ink, and paper has been expended in pursuit of understanding the process of diagnosis. The short answer is ‘we don’t know’. However, some basic principles seem to be that:

It takes about 10 000 hours to become an expert at something (chess, golf, orthopaedic surgery)

Experts use pattern recognition to make diagnoses

Teaching pattern recognition to non-experts will result in superficial expertise, but without the resilience and depth of knowledge to be able to deal with new situations safely

Novices either work forwards from the history and examination, or backwards from diagnoses, or both. It may be that the ability to think through problems both forward and backwards is a precursor to expertise

Problem solving is largely content-specific, i.e. being good at one clinical area does not necessarily imply that someone is good at another

Clinical vignettes (examples of patients) are integrated with the basic scientific knowledge and assembled by novices into mental models of diseases. This ‘case-based learning’ is not a substitute for hands-on learning, but may accelerate the rate of acquisition of mental models.

There is no such thing as clinical reasoning; there is no one best way through a problem. The more one studies the clinical expert, the more one marvels at the complex and multidimensional components of knowledge and skill that she or he brings to bear on the problem, and the amazing adaptability she must possess to achieve the goal of effective care.

Norman (2005).

Research into clinical reasoning helps us understand some of the processes that underlie clinical decision-making.

Framing bias: this is the context in which you see a patient. If you saw a patient with a hot swollen ankle in the Emergency Department, you might think of a fracture or septic arthritis, but in an outpatient clinic, a rheumatological condition would probably be your first thought

Availability bias: is all the information necessary to make the decision available?

Representativeness bias: is the information used to make the decision representative of the true situation? We tend to over-weight information from sources we believe reliable and our cultural values may also affect the way we use information to make judgements

Anchor bias: is the starting point for one’s assessment rational? If you have seen two osteosarcomas in 1 week, this may alter your perception of the likelihood of further such cases, making you more likely to spot one, but it may also make overdiagnosis more likely

Overconfidence: this can be a response to the lack of certainty in a situation. If the risk inherent in a procedure is ignored because the person doing the procedure regards it as insignificant, this will flaw the consent process and make it invalid. The courts have made it abundantly clear that risk is what the patient perceives, not what the clinician thinks the patient perceives or should perceive.

Fig. 1.1.7 Clinical governance.

If a trainee has persistent problems with poor clinical judgement, then this is usually handled through their training programme. If a colleague shows poor clinical judgement, this should be taken up through the Director/Divisional Director/Medical Director. If these avenues do not resolve the problem, in the UK there is the National Clinical Assessment Service, part of the National Patient Safety Agency (http://www.npsa.nhs.uk).

As investigations have become more complex and are used earlier in the disease process, their flaws have become more apparent and caution is needed in interpreting the results. There are risks and benefits with all investigations, and the treating doctor has a duty of care to use them appropriately. Some tests may give misleading results if used in unsuitable groups of patients.

A patient with longstanding mechanical back pain has an MR scan of their back. An abnormality is noted which the radiologist perceives to be the result of normal wear and tear, but it is mentioned in the report. The patient and their family may put pressure on the doctors because ‘something must be done’. An operation may be performed that is unsuccessful and/or has complications of the surgery, anaesthesia, or recovery period. Significant medical and financial harm has occurred to both the patient and society.

In this situation, the scan could be said to be a false positive—the scan implies that there is a significant abnormality that is responsible for the patient’s pain, whereas in fact this is not the case. It could reasonably be argued that it was not the scan that was a false positive, but the diagnostic process as a whole. The MR scan may be very accurate at finding abnormalities. However, what is required is a scan that can:

Find abnormalities, but also

Predict which abnormalities are responsible for the pain, and further

Predict which of these abnormalities can be surgically remedied.

This is why we need clinicians to interpret these scans in the light of clinical judgement. The ability to weigh all these grey pieces of information accurately and produce an appropriate treatment plan is one of the hallmarks of an expert clinician.

Both society and individuals seem far more tolerant of false positives than false negatives. It is seen as perfectly acceptable to provide too much treatment rather than not enough, whereas in reality both may result in similar rates of harm in the widest sense. A good, if politically charged, example of this would be screening for breast cancer:

For every 2000 women invited for screening throughout 10 years, one will have her life prolonged. In addition, 10 healthy women, who would not have been diagnosed if there had not been screening, will be diagnosed as breast cancer patients and will be treated unnecessarily.

Gotzsche and Nielsen (2006).

Such information is difficult to communicate to the public. By way of contrast, one does not have to open too many newspapers to find sensational reports of a catastrophic ‘missed diagnosis’; this is the opposite situation, the false negative.

These two situations, the false positive and false negative, can be put into a table together with the true positive and true negative (Table 1.1.2).

Table 1.1.2
The true/false positive matrix

            Reality +    Reality −
Test +      True +       False +
Test −      False −      True −
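The familiar test statistics follow directly from this matrix: sensitivity and specificity describe the test itself, while the predictive values also depend on how common the disease is in the population tested. The counts in the sketch below are invented.

```python
# A minimal sketch deriving sensitivity, specificity, and predictive values from the
# true/false positive matrix. Note how a reasonably sensitive and specific test can
# still have a poor positive predictive value when most of the tested population does
# not have the target abnormality.
def test_statistics(tp: int, fp: int, fn: int, tn: int) -> dict[str, float]:
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "positive predictive value": tp / (tp + fp),
        "negative predictive value": tn / (tn + fn),
    }

# e.g. 1000 patients scanned, 100 of whom truly have a surgically relevant abnormality:
for name, value in test_statistics(tp=90, fp=180, fn=10, tn=720).items():
    print(f"{name}: {value:.2f}")
```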

Apart from the cost, anxiety, and unnecessary investigation that a false positive test entails, there is another form of damage of which an orthopaedic practitioner should be particularly aware. The radiation dose used in diagnostic testing, particularly in computed tomography (CT), is very large (Table 1.1.3).

Table 1.1.3
Radiation doses from different procedures

Diagnostic procedure    Dose (mSv)   CXRs    Background radiation
XR limb                 0.01         0.5     1.5 days
XR chest (PA)           0.02         1       2.4 days
XR lumbar spine         1.3          65      6 months
CT head                 2.0          100     9 months
CT abdomen/pelvis       10           500     3.3 years
CT scanogram            40           1000    6.5 years

CT, computed tomography; CXR, chest x-ray; PA, posteroanterior; XR, x-ray.

The use of the ‘scanogram’ (CT head/neck/chest/abdomen/pelvis) in blunt trauma is associated with significant risks of lethal malignancy—approximately 1:500 for children and 1:3000 in adults. Patients have a right to understand the risks and benefits of investigations that are being performed, and the clinician has a duty to ensure they have access to information to make those choices.
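Using the doses in Table 1.1.3, it is easy to show how quickly a trauma patient’s cumulative dose mounts up. The imaging list in the sketch below is hypothetical, and the doses are simply those quoted in the table.

```python
# A minimal sketch summing the doses from Table 1.1.3 for a hypothetical blunt trauma
# patient; the total is dominated by the CT scanogram.
DOSES_MSV = {
    "XR limb": 0.01,
    "XR chest (PA)": 0.02,
    "XR lumbar spine": 1.3,
    "CT head": 2.0,
    "CT abdomen/pelvis": 10.0,
    "CT scanogram": 40.0,
}

patient_imaging = ["XR chest (PA)", "XR limb", "XR limb", "CT scanogram"]   # hypothetical
total_dose = sum(DOSES_MSV[study] for study in patient_imaging)
print(f"Cumulative dose: {total_dose:.2f} mSv")
```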

Consent is the legal framework used to cover the agreement between a patient and a doctor to undertake a procedure. There are local rules for how consent is obtained and variation between different countries’ legal frameworks; therefore general principles will be discussed.

For consent to be informed a patient must have capacity to give this consent. The rules governing the assessment of mental capacity have been clarified in English law. The key components of this are:

A patient must be assumed to have capacity, irrespective of age/disability/behaviour/beliefs/illness (including mental illness) or the fact that you may disagree with their decision. Each decision as to whether a patient has capacity must be a new assessment, and it is possible that a patient can have capacity in one situation but not another. All efforts must be made to give the patient the maximum chance at establishing capacity.

The Mental Capacity Act (2005) uses four tests to establish capacity. The patient must be able to:

Understand the information necessary for them to be able to take a rational decision about consent

Retain the information long enough for them to be able to

Weigh the information and be able to

Communicate a decision about the consent to others.

If the doctor’s opinion is that the patient lacks capacity to consent, and it is not an emergency, then medicolegal advice should be sought, through the hospital’s usual channels.

In an emergency, one may do what is necessary to preserve life and limb, but no more. Verbal consent should be sought, but is not essential.

Advance directives are now legally binding in parts of the UK. In an emergency, an advance directive may be over-ridden if the patient’s condition is life threatening, unless the directive specifically states that it applies to life-threatening situations, e.g. the refusal of blood transfusion by a Jehovah’s Witness.

If there is doubt about a local situation, medicolegal advice should be sought.

Informed consent does not mean: ‘sign here and I will do the operation’. Consent is a two-way process and may need to include negotiation about what will and will not be performed, and a frank discussion of the likely outcomes of these actions. Consent should include an understanding of what will occur if the operation does not take place.

Part of the process of negotiation is to manage the patient’s expectations. If a patient has unrealistic expectations of surgery, this is the time to address this. Failure to understand the risks and likely outcomes is far more likely to lead to dissatisfaction and adverse outcomes for all concerned.

The notion of adequate risk disclosure has changed in the last few years from that of what a doctor deems necessary to what the patient deems necessary. The old legal definition of ‘acceptable medical practice’ (the Bolam test) being defined as acting ‘in accordance with practice accepted by a responsible body of medical opinion’ has been superseded by a patient-focused test—‘what a reasonable patient would want to know’ (Box 1.1.9).

Box 1.1.9
Risk is defined by the patient, not the doctor
Chester v Afshar [2002]

A UK neurosurgeon consented a patient for a multilevel discectomy. Prior to this the patient had been clear that she had wanted to avoid surgery. The patient sustained cauda equina damage during surgery. The court found that the doctor had been negligent by not specifically warning the patient of the 1–2% risk of serious neurological damage, as this would have dissuaded the patient from surgery. The Law Lords commented:

A surgeon owes a general duty to a patient to warn him or her in general terms of possible serious risks involved in the procedure. The only qualification is that there may be wholly exceptional cases where objectively in the best interests of the patient the surgeon may be excused from giving a warning…In modern law medical paternalism no longer rules and a patient has a prima facie right to be informed by a surgeon of a small, but well-established, risk of serious injury as a result of surgery.

Rogers v Whitaker [1992]

An Australian ophthalmologist consented a patient for surgery. The patient specifically asked about the risk of damage to the ‘good’ eye, and was not told of this risk (approximately 1:14 000). Sympathetic ophthalmia occurred, making the patient blind. The doctor was found to be negligent.

Risks should be disclosed if the risk is more common than 1:1000, or is very serious—death or serious disability. The advice about risks must be recorded on the consent form. Risk should be numerically quantified and documented whenever possible. If a patient enquires about a specific risk they must be advised of the likelihood of this occurring.

Guidance on consent was published by the General Medical Council in 2008, which takes account of these changes and is freely available through their website (http://www.gmc-uk.org).

There are many examples of the public failing to understand medical risk or misinterpreting information, sometimes because of sensational reporting or misreporting by the media, e.g. the perceived risks of MMR immunization.

Expression of risk in percentages is a good start, but may not be readily interpreted by patients with linguistic or educational barriers to understanding. Other approaches to help patients understand risk include the use of:

Crowd diagrams: a visual representation of a population of patients with a condition, in which the number of positive/negative outcomes is demonstrated by shading an appropriate number of figures (Figure 1.1.8)

Number needed to treat (NNT): as illustrated in the breast cancer screening example given earlier, this is an expression of the number of patients that need to be treated (or screened) to give one unit of benefit/harm, e.g. the NNT of primary repair for traumatic shoulder dislocation to prevent recurrent dislocation over 10 years is approximately 2. This means that two people have the operation to stop one person from having recurrent dislocations (a short worked example follows Figure 1.1.8)

‘Bone age’ or ‘joint age’: this describes the patient’s condition by using an age-related comparison, e.g. ‘You have the joints/bone density of an 80-year-old’. Although NNT appeals more to doctors rationalizing decision-making, it seems that presenting information in an age-related way personalizes the risk in a way patients can more easily understand.

Fig. 1.1.8 Crowd diagram.
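The arithmetic behind the NNT is simply the reciprocal of the absolute risk reduction. The recurrence rates in the sketch below are illustrative, chosen so that the NNT comes out at roughly 2, as in the shoulder-dislocation example above.

```python
# A minimal sketch of the number needed to treat: NNT = 1 / absolute risk reduction.
# The recurrence rates are illustrative only.
def number_needed_to_treat(risk_without: float, risk_with: float) -> float:
    absolute_risk_reduction = risk_without - risk_with
    return 1 / absolute_risk_reduction

# e.g. ~65% recurrence over 10 years without primary repair vs ~15% with repair:
print(round(number_needed_to_treat(0.65, 0.15), 1))   # 2.0: treat two to prevent one recurrence
```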

As an expert clinician, you have a role and a duty to educate. You did not become an expert without the educational input of a large number of teachers. At the end of postgraduate training (Figure 1.1.9) you will be at the peak of your educational ability as a ‘conscious competent’.

Fig. 1.1.9 Kolb’s model of learning.

The danger is the ease of slipping into being an unconscious incompetent: your knowledge and practice drift out of date without you realizing it. Regular teaching helps ensure that knowledge and skills are kept up to date. We can all remember inspirational teachers who appeared to effortlessly communicate complex ideas in a fun way. Good teaching does not just happen—it is the result of careful preparation of content, structure, and presentation, and the following section aims to help with this.

The model of learning developed by Kolb is helpful in considering medical teaching. Trainees are generally not short of concrete, i.e. real-life, experiences, but the trick is to teach in such a way as to build on these experiences and incorporate the theory and facts from textbooks into deeper knowledge.

When planning teaching, an overall aim should be stated: what should the student be able to do or understand at the end? This aim can be broken down into component goals, specific chunks of learning that build together to achieve the aim. Telling students these aims and goals at the beginning of teaching helps them understand how the information is going to fit together into something that will be useful. This increases the chance that teaching becomes learning.

The most common teaching an orthopaedic surgeon will do is teaching practical skills, and for this it is helpful to have a minimal structure that organizes how you transmit the information in a way that is likely to be understood and retained:

Explain

Demonstrate

Imitate

Practise.

By explaining what you are going to do, and breaking it into stages, you allow the learner to look for the different stages when you put them together as a demonstration. The learner then imitates what you have demonstrated, and then practises this to a level of competence.

When you are teaching practical skills it is important to be able to help the person you are teaching to improve. If you look back through your training, you will be able to recall times when this was not done particularly well. A good acronym for this is PQRS:

Praise: insist that the student identifies at least one thing they have done well

Question: ask the student what they would do differently next time

Reflect: ask the student and/or group to explore other ways of improving performance

Summarize: the student should be able to identify one thing they have learned.

While lectures are superficially efficient at teaching, the amount of learning is highly variable, and difficult for the lecturer to gauge. Dividing large groups into small groups for teaching has many advantages:

Uncertainty and lack of understanding are more likely to be voiced in a small group, and can then be corrected or explored by the group

Lazy people are forced to contribute. Quiet people are more likely to contribute (less of a problem with orthopaedic trainees)

It is possible for the teacher to evaluate how much the student has understood

It is possible to integrate other skills such as researching evidence and presentation skills.

Small-group teaching is initially quite challenging for many teachers, as there is an implicit loss of control of the process and a danger that the session goes wrong (very rare, provided the task is well structured).

Specific small-group techniques that may be useful are:

Problem-based learning: give small groups of students a clinical problem to solve, and send them away to research it and produce a presentation based on their research for all the groups

Buzz groups: when you are giving a lecture, create a task or activity that people complete, initially on their own, and then discuss with their neighbour(s). Ask a few volunteers to report their results and incorporate these back into your presentation. This acts as a break, wakes people up, allows you to check that they have understood what you are talking about, and allows you to incorporate the students’ ideas into your teaching.

Bedside teaching is a particularly difficult skill to do well, yet is the one that has the most chance of inspiring students. A common complaint from students is that they rarely have the opportunity to examine patients with a senior doctor to guide them and give feedback.

Bedside teaching gives an opportunity for the expert to articulate their thought processes so that students may learn how an expert weighs different factors to make decisions. It provides an excellent opportunity to involve the patient both in decisions regarding their management and in the education process, and the evidence is that patients appreciate and enjoy this.

The ubiquity of electronic presentation using data projectors has spawned a number of habits that detract from learning. There are several things you can do to lessen the pain for the audience:

The productive attention span is about 20 minutes. After this time do something different and interactive like using a buzz group activity

Format your presentation in PowerPoint 97 and bring it on a USB drive and CD, but also email it to yourself as a backup. Beware of video in a presentation, as there are many different subformats which are usually incompatible. If you are planning to use video, it is safest always to take your own laptop

The most readable format of projection text is white or yellow text on a dark background (blue or green). Red text may look good on a screen, but projects poorly. This is particularly important for people with poor vision/colour-blindness. Pure white backgrounds are tiring for the audience and should be avoided

Use the 6 × 6 rule: do not put more than six lines of six words on each slide. This will prevent you just reading each slide, which is very annoying for the audience

End with a ‘take home message’ summarizing your talk in three points in nine seconds and (approximately) 27 words. This is the ‘sound bite’ packaging of key information that you want people to remember from your presentation.

Assessment is the term given to establishing whether or not students have learned anything. Although this may sound obvious, care needs to be taken to establish the purpose of the assessment. Is the purpose:

To rank the students in order of performance?

To establish which students have reached a minimum standard of competence to go on in their training?

Training has generally moved away from the former towards the latter. Assessment should therefore be tied very closely to the curriculum: students should have a very clear idea of what is to be tested, and the pass rate should be very high, because only the essentials (the core curriculum) are tested, but these essentials must be passed.

Teachers always worry about examination materials, particularly practical stations (OSCE, objective structured clinical examination), finding their way into the students’ domain. This is going to happen anyway. The consequences can be minimized, and the effect harnessed, by giving open access to certain assessment materials. For example, publishing model OSCE stations with their marking schemes ensures that overall performance is much improved: this is ‘using the tail to wag the dog’.

A simple way of evaluating education is to use an anonymous questionnaire. Questionnaires need careful construction to ensure that you receive valid and reliable responses.

Medicine is a science of uncertainty and an art of probability.

Sir William Osler (1904).

This chapter has provided an overview of some of the key supporting skills for expert clinical practice. There is a limited amount of material that can be included in such a textbook as this, and therefore this chapter is little more than a dégustation menu.

The hope is that this chapter will equip the reader to negotiate some of the uncertainties they will face in everyday practice, and will also act as a roadmap for further reading for those looking for more information.

Cochrane, A.L. (1972). Effectiveness and Efficiency: Random Reflections on Health Services. London: Nuffield Provincial Hospitals Trust.

General Medical Council (2008). Consent: Patients and Doctors Making Decisions Together. London: General Medical Council.

Gotzsche, P.C. and Nielsen, M. (2006). Screening for breast cancer with mammography. Cochrane Database of Systematic Reviews, 4, CD001877.

Mayer, T. and Cates, R.J. (1999). Service excellence in health care. Journal of the American Medical Association, 282, 1281–3.

Norman, G. (2005). Research in clinical reasoning: past history and current trends. Medical Education, 39, 418–27.

Portney, L.G. and Watkins, M.P. (2008). Foundations of Clinical Research: Applications to Practice. Upper Saddle River, NJ: Prentice Hall.

Robinson, W. (1974). Conscious competency – the mark of a competent instructor. Personnel Journal, 53, 538–9.

Sackett, D.L., Haynes, R.B., and Tugwell, P. (1991). Clinical Epidemiology: A Basic Science for Clinical Medicine. Boston: Little, Brown.

Scally, G. and Donaldson, L.J. (1998). The NHS’s 50th anniversary. Clinical governance and the drive for quality improvement in the new NHS in England. British Medical Journal, 317, 61–5.
