
Revisiting methods for modeling longitudinal and survival data: Framingham Heart Study

Abstract

Background

Statistical methods for modeling longitudinal and time-to-event data have received much attention in medical research and are becoming increasingly useful. In clinical studies, such as those of cancer and AIDS, longitudinal biomarkers are used to monitor disease progression and to predict survival. These longitudinal measures are often missing at failure times and may be prone to measurement error. More importantly, time-dependent survival models that include the raw longitudinal measurements may lead to biased results. In previous studies these two types of data have frequently been analyzed separately, with a mixed effects model for the longitudinal data and a survival model for the event outcome.

Methods

In this paper we compare joint maximum likelihood methods, a two-step approach, and a time-dependent covariate method that link longitudinal data to survival data, with emphasis on using longitudinal measures to predict survival. We apply a Bayesian semi-parametric joint method and a maximum likelihood joint method that maximize the joint likelihood of the time-to-event and longitudinal measures. We also implement the two-step approach, which estimates the random effects separately, and a classic time-dependent covariate model. We use simulation studies to assess bias, accuracy, and coverage probabilities for the estimates of the link parameter that connects the longitudinal measures to survival times.

Results

Simulation results demonstrate that the two-step approach performed best at estimating the link parameter when variability in the longitudinal measure is low, but is somewhat biased downwards when the variability is high. The Bayesian semi-parametric and maximum likelihood joint methods yield higher estimates of the link parameter under both low and high variability in the longitudinal measure. The time-dependent covariate method consistently underestimated the link parameter. We illustrate these methods using data from the Framingham Heart Study, in which lipid measurements and myocardial infarction data were collected over a period of 26 years.

Conclusions

Traditional methods for modeling longitudinal and survival data, such as the time-dependent covariate method, which use the observed longitudinal data, tend to provide estimates that are biased downwards. The two-step approach and joint models provide better estimates, although a comparison of these methods may depend on the underlying residual variance.


Background

Statistical methods for modeling longitudinal and time-to-event data have received much attention recently in medical research and are becoming increasingly useful. A common objective in this research is to characterize the relationship between the longitudinal response and the time-to-event [1, 2]. Typical settings include HIV studies in which baseline characteristics are recorded and immunological measures such as CD4+ lymphocyte counts or viral load are measured repeatedly to assess patients’ health until HIV conversion [3]. We consider data from the Framingham Heart Study in which lipid measurements and myocardial infarction (MI) data were collected over a period of 26 years. The Framingham Heart Study (http://www.framinghamheartstudy.org/) is a well-known longitudinal study that identifies potential risk factors for the development of cardiovascular disease. In this study high-density lipoprotein (HDL), low density lipoprotein (LDL) and triglycerides (TG) were measured at generally comparable time intervals over the 26 years. Time to MI was also recorded for each participant, although some subjects were censored by the end of the study period or due to death from other causes. We assess methods that characterize associations between the longitudinal lipid measures and time to MI, with an emphasis on precise estimation of the parameters linking longitudinal risk factors with time to MI.

There is extensive literature and a wide range of statistical packages for jointly modeling longitudinal and survival data. Some recent work includes those of Brown and Ibrahim [4]; Zeng and Cai [5]; Tseng, Hsieh, and Wang [6]; Ye, Lin, and Taylor [7]; Ibrahim, Chu, and Chen [8]; Rizopoulos [9], among others. Tsiatis and Davidian [2] provide a comprehensive overview of earlier articles addressing a number of methods for jointly modeling longitudinal and survival data. These articles include work on joint models by Robins and Tsiatis [10]; DeGruttola and Tu [11]; Tsiatis, DeGruttola, and Wulfsohn [12]; LaValley and Degruttola [13]; Faucett and Thomas [14]; Wulfsohn and Tsiatis [15]; Wang and Taylor [16]; Xu and Zeger [17]. Sweeting and Thompson [18] provide a comparison of a shared random effects model with a naïve time-dependent covariate model and a two-stage joint model for modelling the association between the longitudinal process and the time-to-event outcome. They conducted a simulation study to contrast these three approaches in their ability to estimate the true association between a longitudinal process and the event hazard. In their simulations they assumed a constant baseline hazard model to simulate a relatively rare event with a modest correlation between the longitudinal and survival process [18]. Our paper builds upon the Sweeting and Thompson paper and implements a new data generation scheme, assuming a Weibull distribution for the survival process and considers a wide range of scenarios for the event rates and the residual variances.

Our main objectives in this paper are to (i) evaluate a Bayesian semi-parametric joint model (BSJM) and a maximum likelihood joint model approach (MLA) that link longitudinal trajectories to survival and (ii) compare these joint likelihood methods with the Two-Step Approach (TSA) and the Cox Time Dependent Covariate Model (TDCM) [4, 12, 15, 19]. Among these methods, the BSJM and MLA maximize the joint likelihood of the longitudinal and survival data. In the TSA, random effects are estimated separately in a first stage, and the predicted longitudinal measures are then substituted into a second-stage survival analysis. The argument in favor of the joint model has been its efficient use of the data, as the survival information goes into modeling the longitudinal process and vice versa. The TDCM implements a time-varying covariate approach in which the observed longitudinal measures are entered directly into the survival model.

Methods

In this section, we describe the methods for jointly modeling longitudinal data and survival data using a joint likelihood. In such modeling, the main focus may be on the longitudinal component, the survival component, or both, depending on the objectives of the studies. When the focus is on one aspect, the other component is then secondary; so its parameters may be viewed as nuisance parameters [20]. Our goal is to characterize the relation between the time-to-event (primary) outcome and the longitudinal measures (secondary), adjusting for covariates in the model.

Joint likelihood model

We consider a joint likelihood model similar to that of Brown and Ibrahim [4], which links the longitudinal trajectories of each subject to the survival data. The longitudinal responses, Yi, are linked to the time-to-event model using a Cox proportional hazards model [21]. For each individual i, we let Si be the survival time and Ci the censoring time; due to censoring we observe Ti = min(Si, Ci). Let δi = I(Si ≤ Ci) denote the event indicator:

$$ \delta_i=\begin{cases}1 & \text{if } S_i\le C_i\\ 0 & \text{if } S_i>C_i\end{cases} $$

The joint likelihood for subject i can be constructed as the product of the marginal density of the longitudinal measures and the conditional density of the survival data given the longitudinal measures.

$$ f\left(Y_i,t_i,\delta_i\right)=f\left(Y_i\right)\, f\left(t_i,\delta_i\mid Y_i\right) $$

We begin with the likelihood model for the longitudinal measures and implement a random effects approach that models the longitudinal measures with possible measurement error. Each subject has mi measures denoted by Yij, j = 1, 2, …, mi, where Yij represents the observed outcome for the ith subject at the jth time point. We denote by Yij* the true unobserved value, such that:

$$ Y_{ij}\left(t_{ij}\right)=Y_i^{\ast}\left(t_{ij}\right)+\epsilon_{ij},\quad \epsilon_{ij}\sim N\left(0,\sigma^2\right) $$
(1)

Yij* is known as the trajectory function; throughout this paper we assume that Yij* = Yi*(tij). The trajectory can be modeled in a linear [4] or quadratic [12] form; splines and other time series forms can also be used to capture the trajectory of the longitudinal measures, but the trade-off is the added complexity and harder interpretation of the model. These longitudinal measures are often missing at failure times and may be prone to measurement error. Including the raw longitudinal measurements in the analysis may lead to bias if the measures are related to the censoring process [1].

In this paper we use a linear mixed effects model (LME), which allows individual or subject-specific inference following the approach of Laird and Ware [22], to fit data from longitudinal response processes.

$$ E\left(Y_{ij}\right)=Y_i^{\ast}\left(t_{ij}\right)=U_{i1}+U_{i2}\, t_{ij} $$
(2)

Here Ui1 and Ui2 are random effects representing the subject-specific intercepts and slopes, and are usually assumed to be multivariate normally distributed. The variable tij represents the time of the jth measurement for the ith subject during the follow-up period.
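For concreteness, model (2) can be fit with standard mixed-model software. Below is a minimal R sketch using the nlme package; the data frame long and its columns id, y and time are hypothetical names.

```r
library(nlme)

# Minimal sketch of model (2); 'long' is a hypothetical data frame with
# columns id (subject), y (longitudinal measure) and time (time from baseline)
fit_lme <- lme(y ~ time, random = ~ time | id, data = long)

# Subject-specific intercepts and slopes (U_i1, U_i2): fixed effects plus
# the predicted (empirical Bayes) random effects, one row per subject
head(coef(fit_lme))
```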

For the survival model we consider the Cox model, which links the time-to-event data and the longitudinal trajectories through the hazard function. For each individual, i = 1, 2, …, n, we let Si and Ci denote the survival (event) time and the censoring time, respectively. We assume that the censoring process is random, or non-informative.

For individual i, ti denotes the failure time, which may be right censored. The Cox model that links the time-to-event outcome and the longitudinal trajectories through the hazard function can be written as:

$$ h_i(t)=h_0(t)\exp\left\{Y_i^{\ast}(t)\gamma+X_i^T\alpha\right\} $$
(3)

The parameter γ is a scalar (link) parameter that links the predicted longitudinal trajectories to the hazard function; α is a vector of unknown parameters for the time-independent covariates XiT; h0(t) is the baseline hazard function. From (3) one can construct a Cox partial likelihood [21], from which statistical inferences could be derived if Yi* were observed:

$$ L_p^{\ast}=\prod_{i=1}^n\left\{\frac{\exp\left(Y_i^{\ast}\left(t_i\right)\gamma+X_i^T\alpha\right)}{\sum_{k=1}^n \exp\left(Y_k^{\ast}\left(t_i\right)\gamma+X_k^T\alpha\right)I\left(t_k\ge t_i\right)}\right\}^{\delta_i} $$
(4)

In (4) two assumptions are made: (i) survival times are independent and without ties; (ii) the longitudinal measures are available at each event time for all individuals. Assumption (ii) is often not true, as covariate measurement times may not coincide with event times, leading to missing data in time-dependent covariates in the survival model. Such missingness may be assumed ignorable, or missing at random (MAR), where missingness does not depend on the unobserved values [23]. The last-value-carried-forward (LVCF) method has been widely used to impute the missing covariates, but may lead to biased estimates [1]. The hazard function for the survival component (3), given the longitudinal trajectory function, yields:

$$ f\left(t_i,\delta_i\right)\propto f\left(t_i\right)^{\delta_i}S\left(t_i\right)^{1-\delta_i}=h\left(t_i\right)^{\delta_i}S\left(t_i\right),\quad \text{where } S(t)=\exp\left(-\int_0^t h(u)\,du\right) $$
$$ f\left(t_i,\delta_i\mid Y_i^{\ast},X_i\right)=h_0\left(t_i\right)^{\delta_i}\exp\left\{\delta_i\left(Y_i^{\ast}\left(t_i\right)\gamma+X_i^T\alpha\right)\right\}\exp\left\{-\int_0^{t_i}h_0(u)\exp\left[Y_i^{\ast}(u)\gamma+X_i^T\alpha\right]du\right\} $$
(5)

Statistical inference based on the above likelihood function is potentially computationally intensive. A non-adaptive Gauss-Kronrod rule can be employed to calculate the integral numerically [24]. The maximum likelihood parameter estimates can be obtained using the Expectation-Maximization (EM) algorithm with Newton-Raphson approximation [25], or a Markov chain Monte Carlo (MCMC) approach can be implemented in a Bayesian framework [14]. Tsiatis and Davidian [2] took a different approach, proposing a conditional score method that is also efficient and yields consistent, asymptotically normal estimators.
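As an illustration of the numerical step, the cumulative hazard integral inside (5) can be evaluated for a single subject with R's QUADPACK-based integrate routine [24]; the baseline hazard and all parameter values in this sketch are hypothetical.

```r
# Minimal sketch: evaluate H(t) = integral_0^t h0(u) exp(gamma*(U1 + U2*u) + xa) du
# for one subject; the baseline hazard and parameter values are hypothetical
h0  <- function(u) 0.1 * 1.5 * u^0.5       # Weibull baseline: lambda*rho*u^(rho-1)
haz <- function(u, gamma = 0.5, U1 = 0.2, U2 = 0.05, xa = 0.3) {
  h0(u) * exp(gamma * (U1 + U2 * u) + xa)  # subject-specific hazard at time u
}
H10 <- integrate(haz, lower = 0, upper = 10)$value  # cumulative hazard up to t = 10
exp(-H10)                                           # survival probability S(10)
```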

Bayesian semi-parametric joint model (BSJM)

Faucett and Thomas [14] applied a Bayesian approach to estimate the parameters of the longitudinal process (σ2, mean and covariance of Ui1 and Ui2) and the proportional hazard model (λ0, γ and α) in the joint likelihood framework via MCMC. They specified non-informative priors for the parameters to obtain results similar to the maximum likelihood estimates. Wang and Taylor [16] applied a similar approach to model the survival component and used a more flexible approach to the longitudinal part by incorporating a stochastic process into the longitudinal trajectory. Brown and Ibrahim [4] considered a semi-parametric Bayesian joint model in which they relax the distributional assumptions for the longitudinal model using Dirichlet process priors on the random effect parameters.

We apply a Bayesian approach for jointly modeling longitudinal and survival data similar to that of Brown and Ibrahim [4]. For the survival model, we specify normal priors for the parameters γ and α. We use MCMC methods to obtain posterior distributions of the parameters given the data. For the longitudinal component, we consider the LME model, with prior distributions on the mean parameters and the variance components. From (1) and (2) we have that:

$$ Y_{ij}\left(t_{ij}\right)=Y_{ij}^{\ast}\left(t_{ij}\right)+\epsilon_{ij};\quad Y_{ij}^{\ast}=U_{i1}+U_{i2}\, t_{ij};\quad i=1,2,\dots,n;\; j=1,2,\dots,m_i $$
$$ \epsilon_{ij}\sim N\left(0,R_i\right) $$
$$ Y_{ij}\mid U_i\sim N\left(Y_{ij}^{\ast},V_i\right),\quad V_i=Z_iGZ_i^{\prime}+R_i,\quad R_i=\sigma^2 I_{m_i} $$

We model the longitudinal trajectories using a linear growth curve model, with Ui1 and Ui2 representing the random intercepts and slopes. G denotes the covariance matrix of the random effects and Zi is the design matrix of the longitudinal time points. Subject i's contribution to the joint likelihood function can be written as:

$$ f\left(Y_i,t_i,\delta_i\right)=h_0\left(t_i\right)^{\delta_i}\exp\left\{\delta_i\left(Y_i^{\ast}\left(t_i\right)\gamma+X_i^T\alpha\right)\right\}\exp\left\{-\int_0^{t_i}h_0(u)\exp\left[Y_i^{\ast}(u)\gamma+X_i^T\alpha\right]du\right\}\times\frac{1}{\left(2\pi\sigma^2\right)^{m_i/2}}\exp\left\{-\frac{1}{2\sigma^2}\sum_{j=1}^{m_i}\left(Y_{ij}-Y_{ij}^{\ast}\right)^2\right\} $$
(6)

A typical approach assumes a constant baseline hazard and normal prior distributions for the random effects and the unknown regression parameters (α). The variance-covariance matrix (V) is assigned a Wishart prior through the precision matrix P:

$$ P=V^{-1}\sim \text{Wishart}\left(Q^{-1},v\right) $$

and

$$ \sigma^2\sim \text{IG}\left(a,b\right) $$

In the Wishart distribution, Q denotes the scale matrix and v the degrees of freedom; IG denotes the inverse gamma distribution with shape parameter a and scale parameter b.

We employed R and WinBUGS to obtain parameter estimates and credible intervals from the posterior distribution of the Bayesian modeling using standard Gibbs sampling MCMC methods [26, 27]. Once the model, data and initial values are specified in WinBUGS, the parameters can be monitored until convergence is attained. At the end of the MCMC process we obtain plots and diagnostic statistics to assess convergence in the parameters. One shortcoming of the BSJM is the computing time involved in estimating the parameters.

Maximum likelihood approach (MLA)

The maximum likelihood approach for jointly modeling longitudinal and survival data was described by Rizopoulos [9]. This method, employing the same general approach as Wulfsohn and Tsiatis [15], implements a shared parameter model for the joint modeling of longitudinal responses and time-to-event data.

The joint distribution for the longitudinal and survival model can be written as:

$$ f\left(Y_i,T_i,\delta_i\right)=\int f\left(Y_i\mid U_i\right)\left\{h_i\left(T_i\mid U_i\right)^{\delta_i}S\left(T_i\mid U_i\right)\right\}f\left(U_i\right)\,dU_i $$
(7)

where S(Ti | Ui) and hi(Ti | Ui) represent the survival function and the hazard function, respectively, conditional on the random effects, and f(Yi, Ti, δi) is the joint density. The distributional form in (7) assumes that, given the random effects, the longitudinal measures are independent of the time-to-event outcome and independent of each other. The true unobserved values of the longitudinal measures, Yi*(t), are associated with the event outcome Ti through the hazard function:

$$ h_i\left(t\mid Y_i(t)\right)=h_0(t)\exp\left\{Y_i^{\ast}(t)\gamma+X_i^T\alpha\right\} $$

Parameter estimates from (7) can be obtained using the JM package in R. The package fits a shared parameter model for the joint modeling of normally distributed longitudinal responses and event times under the likelihood approach. Maximum likelihood estimation for joint models is based on maximizing the log-likelihood corresponding to the joint distribution of the time-to-event and longitudinal outcomes [9]. The maximization is challenging because the integral in the survival function has no analytic solution. Following Rizopoulos, we implemented a Weibull model using Gauss-Hermite integration to approximate the integral. In the estimation a hybrid optimization approach is employed that starts with EM and then continues with direct maximization: the EM algorithm runs for a fixed number of iterations and, if convergence is not achieved, switches to a quasi-Newton algorithm until convergence is attained.
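In practice, a fit of this kind can be obtained with a few calls. The sketch below uses the jointModel interface of the JM package with a Weibull baseline and adaptive Gauss-Hermite integration, as described above; the data frames long and surv.id and their column names are hypothetical.

```r
library(nlme)
library(survival)
library(JM)

# 'long' holds the repeated measures (id, y, time); 'surv.id' has one row per
# subject (id, stime, delta, age, sex); both are hypothetical data frames
fit_lme <- lme(y ~ time, random = ~ time | id, data = long)
fit_cox <- coxph(Surv(stime, delta) ~ age + sex, data = surv.id, x = TRUE)

# Weibull baseline hazard with adaptive Gauss-Hermite integration
fit_jm <- jointModel(fit_lme, fit_cox, timeVar = "time",
                     method = "weibull-PH-aGH")
summary(fit_jm)   # the association ("Assoct") coefficient is the link gamma
```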

Two step approach (TSA)

In the TSA, the parameters of the longitudinal process are estimated separately, and the estimated random effects are substituted directly into the survival model. This approach was first implemented by Tsiatis et al. [12], in which a linear mixed effects model is fit to the longitudinal measures and the fitted values are inserted into the Cox proportional hazards model in the second stage as time-dependent covariates. Ye, Lin, and Taylor [7] proposed two versions of the TSA, called risk set regression calibration (RRC) and ordinary regression calibration (ORC). In the first approach the LME model is fit using the observed longitudinal data only among subjects with the event; this approach is useful when the longitudinal trajectories of subjects who experienced the event may differ from those who did not.

In the second approach the LME model is fit using observed longitudinal data from all subjects. In the first step the longitudinal process is estimated using the LME model in (1) and (2), and estimates of the random effects are used to obtain predicted values of the longitudinal measures at event times. In the second step the predicted longitudinal measures are used in the Cox model (3) to estimate the hazard for the event; the variance estimates for the parameters are obtained from the observed information of the partial likelihood function. In (3), Xi represents the time-independent covariates and Yi*(t) represents the predicted value of the longitudinal measure at event time t. The link parameter (γ) in this approach can be interpreted as the association between the longitudinal measure at the event time and survival. Estimation and inference for the hazard model (3) can be performed using the partial likelihood theory proposed by Cox [21].
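A minimal R sketch of this two-step procedure follows; it assumes hypothetical data frames long (id, y, time) and surv (id, stime, delta, age, sex) and uses the counting-process form of coxph so that the predicted trajectory is evaluated at each event time.

```r
library(nlme)
library(survival)

# Step 1: fit the LME and extract subject-specific intercepts and slopes
fit_lme <- lme(y ~ time, random = ~ time | id, data = long)
re <- coef(fit_lme)                  # one row per subject: (Intercept), time

# Step 2: split follow-up at the observed event times and attach the
# predicted longitudinal value at the end of each (start, stop] interval
cuts <- sort(unique(surv$stime[surv$delta == 1]))
cp <- survSplit(Surv(stime, delta) ~ ., data = surv, cut = cuts,
                start = "tstart")
idx <- match(cp$id, rownames(re))
cp$yhat <- re[idx, "(Intercept)"] + re[idx, "time"] * cp$stime

fit_tsa <- coxph(Surv(tstart, stime, delta) ~ yhat + age + sex, data = cp)
summary(fit_tsa)   # the yhat coefficient estimates the link parameter gamma
```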

The main advantage of this approach is that it is simple and can be implemented using existing statistical packages. Tsiatis et al. [12] argue that a repeated measures random effects model for the covariate process is superior to naive methods in which one maximizes the partial likelihood of the Cox model using the observed covariate values. Ye, Lin, and Taylor [7] point to two main disadvantages of a simple TSA: 1) it may provide biased estimates, especially when the longitudinal process and the survival process are strongly associated; and 2) it does not carry the uncertainty of estimation from the first stage into the second stage, possibly leading to under-estimation of the standard errors. We evaluate several scenarios to assess these claims using simulation studies.

Time dependent covariate modeling (TDCM)

A time-dependent explanatory variable is one whose value for a given subject may change over the period that the subject is observed [28]. The TDCM employs the observed longitudinal measures to predict an event. Therneau and Grambsch [19] consider a well-known example of the TDCM using the Stanford Heart Transplant Program. Data for a subject are presented as multiple observations, each of which applies to an interval of observation. A proportional hazards model is often used to analyze covariate information that changes over time; the hazard may be thought of as proportional to the instantaneous probability of an event at a particular time [21].

We consider a sample of size n consisting of {Ti, δi, Yi(t), 0 ≤ t ≤ Ti}, i = 1, 2, …, n, where Ti is the time-to-event for the ith subject and δi is the event indicator. The vector Yi(t) = [Yi1(t), Yi2(t), …, Yig(t)]T is the set of observed longitudinal measures, where mi ≤ g is the number of time intervals for the ith subject and g is the maximum number of intervals.

$$ \delta_i=\begin{cases}1 & \text{if } T_i\le C_i\\ 0 & \text{if } T_i>C_i\end{cases} $$

The hazard for this model at time t can be written as:

$$ h(t)=h_0(t)\exp\left(\beta Y_i(t)+X_i^T\alpha\right)=h_0(t)\exp\left(\sum_{k=1}^{m_i}\beta_k Y_{ik}(t)+X_i^T\alpha\right) $$
(8)

The vector Yi(t) = [Yi1(t), …, Yimi(t)]T is a set of covariates, where mi is the number of longitudinal measures for the ith subject. We define t1 < t2 < t3 < … < tD as the set of ordered event times and Yik(ti) as the time-dependent covariate associated with the individual whose failure time is ti. The risk set R(ti) at time ti is the set of all individuals still under study just prior to ti. The partial likelihood based on the hazard function specified in (8) can be written as:

$$ L\left(\alpha,\beta\right)=\prod_{i=1}^{D}\left\{\frac{\exp\left(\sum_{k=1}^{m_i}\beta_k Y_{ik}\left(t_i\right)+X_i^T\alpha\right)}{\sum_{l\in R\left(t_i\right)}\exp\left[\sum_{k=1}^{m_i}\beta_k Y_{lk}\left(t_i\right)+X_l^T\alpha\right]}\right\} $$
(9)

In most applications, βk = 0 for intervals that are not under consideration. The estimates can be obtained by maximizing the likelihood specified in (9).
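The TDCM reduces to a standard time-dependent Cox fit on interval-structured data. A minimal sketch follows, assuming a hypothetical data frame visits with one row per exam interval in counting-process form.

```r
library(survival)

# 'visits' is a hypothetical data frame with one row per exam interval:
# id, tstart, tstop, delta (event in the interval), y_obs (observed
# longitudinal measure over the interval), age, sex
fit_tdcm <- coxph(Surv(tstart, tstop, delta) ~ y_obs + age + sex,
                  data = visits)
summary(fit_tdcm)   # the y_obs coefficient is the TDCM link estimate
```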

Simulations

In this section we carry out a series of simulations to compare the performance of the four methods described above for modeling longitudinal and survival data. Performance was assessed with Type I error, bias, accuracy, and coverage probabilities for the link parameter. We simulated longitudinal and survival data to resemble data from the Framingham Heart Study. In Table 1 we highlight the simulation model and the required parameters (residual error, random effect means for the longitudinal process, covariance of random effects, and coefficients for Age and Sex) for simulating the longitudinal and survival data. The longitudinal trajectories were generated from a linear model adjusting for the age of participants at baseline. The survival time was generated to depend on the longitudinal measures and a set of covariates (Age at baseline and Sex).

Table 1 Simulation Model and Parameters

In our previous paper on generating survival times with time-varying covariates [29] we provide a detailed algorithm for generating survival times for time-varying Cox Exponential and Weibull models using Lambert's W function. The simulation requires the specification of the longitudinal measures and the distribution of the survival data. The longitudinal measures can be obtained from a mixed effects model by introducing the random effects as shown in (1) and (2); we focus on a simple linear mixed effects model (LME), which allows individual or subject-specific inference [22]. For the survival data, two independent Weibull distributions were simulated: (i) the survival times that would be observed if the follow-up had been sufficiently long to reach the event and (ii) the censoring mechanism. The survival distribution was generated to depend on the longitudinal measures and a set of covariates in accordance with the model in (5). If the survival time is less than or equal to the censoring time, the event is considered observed and the time-to-event equals the survival time; otherwise the event is censored and the time-to-event equals the censoring time [30]. We assume random non-informative right censoring and employ a uniform censoring distribution that allows a maximum follow-up time of 30 years. We simulated 1000 independent multivariate datasets consisting of longitudinal measures, time-to-event outcomes, and additional covariates. We present a general formula (see Table 1) that links the survival time of the Cox model and the random effects of the longitudinal model; survival times were generated from the Weibull distribution by applying the Lambert W function to the longitudinal data [29].
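To sketch the inversion idea, the exponential-baseline case has a closed form; the Weibull case requires Lambert's W function and is detailed in [29]. The function below is a minimal illustration with hypothetical argument names, not the full algorithm of [29].

```r
# Minimal sketch: invert the cumulative hazard of
# h(t) = lambda0 * exp(gamma*(U1 + U2*t) + xalpha) at V ~ U(0,1).
# Exponential baseline only; a Weibull baseline needs Lambert's W [29].
sim_event_time <- function(lambda0, gamma, U1, U2, xalpha) {
  v <- runif(length(U1))
  a <- gamma * U2                          # slope of the log-hazard in t
  b <- lambda0 * exp(gamma * U1 + xalpha)  # hazard at t = 0
  ifelse(abs(a) < 1e-12,
         -log(v) / b,                      # hazard constant in time
         log1p(-a * log(v) / b) / a)       # closed-form inverse of H(t);
                                           # NaN (when a < 0) means no event
}
```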

We applied the following methods to each of 1000 replicates (10,000 for Type I error): (1) the Bayesian semi-parametric joint model (BSJM); (2) the maximum likelihood approach (MLA); (3) the two-step approach (TSA); and (4) the time-dependent covariate model (TDCM). In the BSJM method several criteria were considered in the MCMC runs, including: 1) the number of chains for each run; 2) the correlation between successive draws; and 3) the length of the burn-in period. A total of 101,000 iterations were run with a thinning of 50 and a burn-in of 1000 for 4 chains each, thus providing a sample of 500 iterations per chain. Empirical means and standard deviations for each variable were estimated. The quantiles for each variable are estimated in WinBUGS and used to compute credible intervals. In the Supplemental Material (S5) we present the Bayesian semi-parametric joint model, the initial values, and the exact prior distributions implemented in the simulations. Two shortcomings of the BSJM are the computing time involved in estimating the parameters and the lack of convergence in some of the MCMC runs.
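These MCMC settings translate directly into a call to the bugs function of the R2WinBUGS package; in the sketch below the data list, initial values and model file name are hypothetical placeholders (the exact priors and model code appear in Supplemental Material S5).

```r
library(R2WinBUGS)

# 'bugs_data', 'bugs_inits' and "bsjm_model.txt" are hypothetical placeholders;
# the run settings match those described in the text
fit_bsjm <- bugs(data = bugs_data, inits = bugs_inits,
                 parameters.to.save = c("gamma", "alpha", "sigma2"),
                 model.file = "bsjm_model.txt",
                 n.chains = 4, n.iter = 101000,
                 n.burnin = 1000, n.thin = 50)
print(fit_bsjm)  # posterior means, SDs and quantiles for the monitored nodes
```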

The variable Sex is treated as a fixed covariate at each exam in all methods. Baseline Age is also included in the model; the data structure is a single row per subject in which the longitudinal measures, covariates and the overall survival/censoring time are specified. The statistical analyses were performed using SAS software (version 9.3; SAS Institute, Cary, NC) and the computing environment R (R Development Core Team, 2012). The Bayesian analysis was conducted in R using WinBUGS version 1.4.3 (MRC Biostatistics Unit, Cambridge, UK).

Results

We computed Type I error for the link parameter to assess all four methods (see Table 2), using a sample size of 100 with 10,000 replicates. The methods provide Type I error rates close to the nominal level (0.050) for both the Exponential and Weibull models, with two exceptions: the BSJM shows deflated Type I error for both the Exponential and Weibull models with high censoring rates and elevated Type I error for the Weibull model with low censoring, and the MLA shows elevated Type I error for the Exponential model with high censoring (see Supplemental Figure S1).

Table 2 Type I Error (N = 100, Replicates = 10,000, Link = 0.000)

In Table 3 we present the estimates, standard errors (SEs), coverage probability (CP), bias and mean square error (MSE) for the comparison of the longitudinal effect on survival using the Weibull distribution for n = 100. The MLA and BSJM provide lower standard errors and shorter confidence intervals for the link estimate (see Table 3 and Supplemental Figure S2) compared with the other methods. Scenarios with higher censoring rates show higher standard errors and wider confidence intervals. The results suggest the TSA performs best at estimating the link parameter, with lower bias and higher coverage probability than the other methods; for example, with 10% censoring and γ = 0.00, the TSA has a bias of 0.000 with a CP of 95.2%. The TSA provides larger standard errors for the link estimate, with wider confidence intervals and CP values close to the nominal level of 95%. The TDCM yields a negative bias with low CP values when there is a strong effect of the longitudinal measure on the outcome.

Table 3 Comparison of Longitudinal Effect on Survival (Distribution = Weibull, N = 100, Link = γ)

The performance of the methods in estimating the simulated effects of Age (α1 = 0.050) and Sex (α2 = −0.500) was also assessed (see Supplemental Figure S3). As the effect of the longitudinal measure on survival becomes stronger, the estimated effect of Age becomes weaker for all methods except the TSA. The SEs, CPs and confidence intervals tend to be smaller when n = 1000, as expected. The results for the Sex effect show that all methods provide precise estimates for all scenarios considered (see Supplemental Figure S4). From the simulation results we see that the standard errors and confidence intervals become larger with higher censoring rates. The Sex effect is fairly consistent among the different methods for all simulation scenarios considered.

We also implemented a data generation scheme by Rizopoulos to confirm our results across different settings, comparing the methods using the FHS parameters specified in Table 1. The BSJM was not included in this setting because of the computing time involved in estimating the parameters. We used a sample size of 1000, with censoring set at 90% and a link parameter of 0.500. As shown in Table 4, with a residual variance of the longitudinal trajectories of σ2 = 0.1161, the TSA showed a low bias of 0.006 with a higher coverage probability (0.952) than the other methods; it also provided larger standard errors for the link estimate, with wider confidence intervals. The TDCM provided a negative bias of −0.137 with a low CP of 0.870. The MLA provided the highest estimate of the link parameter, with a positive bias of 0.082 and a coverage probability of 0.882. With this residual variance (σ2 = 0.1161), the pattern in the results matched the findings from our own data generation scheme. When the residual error was increased (σ2 = 0.396), to reflect the errors of the joint model by Wulfsohn and Tsiatis [15], we observed a similar trend, with the TSA providing the least biased estimate and nearly 95% coverage, but larger SEs (see Table 4). The link parameter estimate for the TDCM (0.191) was highly attenuated in this scenario. With the larger residual variance, the TSA estimate of the link parameter (0.452) showed modest downward bias compared to the scenario with lower residual variance. As seen earlier, the MLA provided a larger estimate of the link parameter (0.580) than the other methods.

Table 4 Rizopoulos Data Generation Scheme (N = 1000, Link = 0.500, Slope = 0.250)

Application to the Framingham Heart Study

To illustrate the performance of these methods, we examined data from the Framingham Heart Study (FHS) in which lipid measurements and myocardial infarction (MI) data were collected over a period of 26 years. The FHS is a widely known longitudinal study that seeks to identify common factors contributing to cardiovascular disease (CVD). Since 1948 three generations of participants have been recruited and followed over the years: the Original cohort (recruited in 1948), the Offspring cohort (recruited in 1971) and the Third Generation cohort (recruited in 2002). Among the Offspring participants, high-density lipoprotein (HDL), low density lipoprotein (LDL) and triglycerides (TG) were measured at fairly similar time intervals over a period of 26 years. The time to myocardial infarction was recorded for each participant, although some subjects were censored by the end of the study period or due to death from other causes. We log transformed the TG measures in our analysis to reduce skewness. A total of 2262 subjects with complete data, until event or death, were followed from 1979 to 2005, with data collected at the start of each exam (see Table 5). The mean age at baseline was 43.3 years. Six exams were considered in the analysis, from Exam 2 to Exam 7. The mean triglyceride measures at each exam ranged from 100.49 to 158.70, showing an increasing trend from Exam 2 (1979–1983) to Exam 7 (1998–2002). The mean (SD) follow-up time was 22.80 (5.13) years. The cumulative event rate for the 26-year period was 3.71%, and 51% of participants were female.

Table 5 Framingham Heart Study Data (N = 2262)

We fit the FHS data to the models described in the Methods section and characterize the association between the longitudinal measures and the time-to-event response. We used log TG at each exam for the longitudinal part of the model, assuming a linear trend, and survival time measured from Exam 2 to MI or loss to follow-up (up to 2005). We adjusted for Sex and baseline Age in all models. In Fig. 1 we show the distribution of time-to-event among the 2262 subjects with complete data. The distribution of survival times among subjects with events was fairly uniform, while the distribution for censored subjects was skewed to the left, with most censoring times occurring at the tail end of the distribution (20–26 years).

Fig. 1 Survival Distribution by Event Occurrence

In Table 6 we present the estimates for Age, Sex, and the link parameter (γ) for each method. The link parameter provides a measure of the association between the longitudinal TG measures and the risk of MI: γ is the log hazard ratio for a one unit increase in log TG at time t in the survival model. The link parameter refers to the longitudinal level at time t and does not relate to changes in the longitudinal measure over time. The results show higher estimates of the link parameter for the MLA and BSJM (γ = 0.9764 and 1.0263, respectively). These results are similar to the findings of Wulfsohn and Tsiatis [15], with higher estimates under joint maximization than under the two-step approach. The TDCM shows a lower link estimate than the TSA and the joint likelihood methods. The TSA and the MLA provided higher standard errors for the link parameter than the other methods. Tsiatis et al. [12] argued that the standard error for the link parameter is greater under the joint estimation procedure than under the TSA because the joint procedure incorporates the uncertainty in the estimated growth curve parameters, and thus more variability; we do not, however, see this in these results. The Age effects and standard errors were similar among the methods, with estimates ranging from 0.050 to 0.065. The Sex effect was fairly consistent among the different methods, ranging from −1.025 to −0.999. At the 0.05 level of significance, Age, Sex and log TG were all significantly associated with the risk of myocardial infarction. The results of all methods suggest that repeated measures of triglyceride levels are significantly associated with the risk of myocardial infarction in the Framingham Heart Study cohort.
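On the hazard ratio scale, the MLA estimate, for example, translates to

$$ \widehat{HR}=\exp\left(\widehat{\gamma}\right)=\exp(0.9764)\approx 2.65 $$

that is, an estimated 2.65-fold increase in the instantaneous risk of MI for each one unit increase in log TG at time t (a one unit increase in log TG corresponds to an e-fold, roughly 2.7-fold, increase in TG itself).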

Table 6 Jointly Modeling Longitudinal and Survival Data (FHS Data)

Discussion

In this paper we compared methods for longitudinal and survival data that link longitudinal trajectories to survival. These methods quantify the link parameter as the association between the current level of a longitudinal process and the survival outcome. We analyzed data from the FHS in which triglyceride measurements and myocardial infarction (MI) data were collected over a period of 26 years, and we used a simulation study to assess the performance of these methods with respect to bias, accuracy, and coverage probabilities. We compared the TSA with the MLA, BSJM and TDCM methods using a derivation of the survival time function for modeling time-dependent covariate data.

Based on the simulation studies, the TSA, which uses the predicted longitudinal measures, performed best at estimating the link parameter for moderate residual variance. The joint likelihood methods provided upwardly biased estimates of the link parameter, similar to the findings of Wulfsohn and Tsiatis [15]. The TDCM, which uses the observed longitudinal measures as time-dependent covariates in the survival analysis, underestimated the true parameters. These results are similar to the findings of Sweeting and Thompson [18], who recommended the use of a shared random effects model. In most of our models the Age effect was attenuated, depending on the strength of the association between the longitudinal measures and survival; this was expected, as Age was associated with the longitudinal measures. The estimates for the time-independent covariates (baseline Age and Sex) were unbiased when there was no association between the longitudinal measures and survival (γ = 0.00). Comparison of the methods in the Framingham Heart Study revealed similar patterns.

We implemented a data generation scheme by Rizopoulos in order to confirm our results across the different settings. The results showed similar findings, with the two-step approach performing best at estimating the link parameter connecting the longitudinal measures to the event time. For larger residual errors, we found that the TDCM had attenuated estimates of the link parameter while the MLA provided upwardly biased estimates; the TSA also yielded biased estimates of the link parameter when residual errors were larger. These results are similar to the findings of Dafni and Tsiatis [31], who used several values for the residual error (σ2 = 0.32, 0.62, 1.24). In their simulations with larger measurement error, the bias of the estimates based on the observed longitudinal measures increased dramatically in the positive direction, and they indicated that the two-step model yielded parameter estimates somewhat biased towards the null for larger residual errors. We explored several other scenarios, not presented in this paper, in which we varied the residual error in the longitudinal measures. Our results show that with low residual errors in the longitudinal measures, the TSA provides results similar to time-dependent covariate methods that use the observed longitudinal measures. We interpret this to mean that a small residual error implies low measurement error, so the observed values are comparable to the predicted values.

One limitation of our study is the linearity assumption for the longitudinal measures. The trajectory can be modeled in a linear [4] or quadratic [12] form; splines and other time series forms can also be implemented to better capture the trajectory of the longitudinal measures, but the trade-off is the complexity and interpretability of the model. Further, the creation of the simulated data depends on the linear model for the longitudinal data; exploration of non-linear models for the longitudinal data is a topic of future research. The simulation data generation scheme was also based on the two-step approach, which may yield more precise estimates when the simulated data are analyzed with the two-step model. Another limitation is the number of time points used for the longitudinal measures: with more exam visit time points the LME model becomes computationally intensive. In addition, the use of distributions other than the Exponential and Weibull is important for investigating the characteristics of the Cox proportional hazards model, and empirical distributions are needed to handle flexibly parameterized proportional hazards models [32]. Despite these limitations, this paper strengthens the current knowledge on methods for jointly modeling longitudinal and survival data.

Conclusion

Traditional methods, such as the TDCM, that use the observed data tend to provide estimates biased downwards, towards the null. The TSA and joint models provide better estimates, although a comparison of these methods may depend on the underlying residual variance. Hence, an avenue for future exploration is to evaluate the degree of attenuation in the link parameter as a function of the magnitude of the residual variance. Joint modeling of longitudinal and survival data has also recently received attention in statistical genetics research: in genome-wide association studies and gene expression studies, longitudinal and survival measures are often collected over time, and the development of new methods to handle these high dimensional data is essential. Binary longitudinal measures, multiple survival endpoints and missing data analyses are also important areas for further investigation using methods for jointly modeling longitudinal and survival data.

Availability of data and materials

The data used in this study are available at NIH BioLINCC (https://biolincc.nhlbi.nih.gov/home/). They can also be provided to interested researchers upon written request to the FHS. Requests for FHS data may be made by submitting a proposal through the FHS web-based research application. A catalogue of the FHS data repository may be accessed through the FHS website: www.framinghamheartstudy.org/researchers/description-data/

Abbreviations

BSJM: Bayesian Semi-Parametric Joint Modeling

CP: Coverage Probability

LME: Linear Mixed Effects

MLA: Maximum Likelihood Approach

TDCM: Time Dependent Covariate Modeling

TSA: Two Step Approach

MSE: Mean Square Error

References

  1. Prentice R. Covariate measurement errors and parameter estimation in a failure time regression model. Biometrika. 1982;69:331–42.

  2. Tsiatis A, Davidian M. Joint modeling of longitudinal and time-to-event data: an overview. Stat Sin. 2004;14(3):809–34.

  3. Ngwa JS, Cabral HJ, Cheng DM, et al. A comparison of time dependent Cox regression, pooled logistic regression and cross sectional pooling with simulations and an application to the Framingham heart study. BMC Med Res Methodol. 2016;16:148.

  4. Brown ER, Ibrahim JG. A Bayesian semiparametric joint hierarchical model for longitudinal and survival data. Biometrics. 2003;59:221–8.

  5. Zeng D, Cai J. Simultaneous modelling of survival and longitudinal data with an application to repeated quality of life measures. Lifetime Data Anal. 2005;11:151–74.

  6. Tseng Y-K, Hsieh F, Wang J-L. Joint modelling of accelerated failure time and longitudinal data. Biometrika. 2005;92:587–603.

  7. Ye W, Lin X, Taylor JM. Semiparametric modeling of longitudinal measurements and time-to-event data: a two-stage regression calibration approach. Biometrics. 2008;64:1238–46.

  8. Ibrahim JG, Chu H, Chen LM. Basic concepts and methods for joint models of longitudinal and survival data. J Clin Oncol. 2010;28:2796–801.

  9. Rizopoulos D. JM: an R package for the joint modelling of longitudinal and time-to-event data. J Stat Softw. 2010;35(9):1–33.

  10. Robins J, Tsiatis AA. Semiparametric estimation of an accelerated failure time model with time-dependent covariates. Biometrika. 1992;79:311–9.

  11. De Gruttola V, Tu XM. Modelling progression of CD4-lymphocyte count and its relationship to survival time. Biometrics. 1994;50(4):1003–14.

  12. Tsiatis A, De Gruttola V, Wulfsohn M. Modeling the relationship of survival to longitudinal data measured with error. Applications to survival and CD4 counts in patients with AIDS. J Am Stat Assoc. 1995;90:27–37.

  13. LaValley MP, De Gruttola V. Models for empirical Bayes estimators of longitudinal CD4 counts. Stat Med. 1996;15:2289–305.

  14. Faucett CL, Thomas DC. Simultaneously modelling censored survival data and repeatedly measured covariates: a Gibbs sampling approach. Stat Med. 1996;15:1663–85.

  15. Wulfsohn MS, Tsiatis AA. A joint model for survival and longitudinal data measured with error. Biometrics. 1997;53(1):330–9.

  16. Wang Y, Taylor JMG. Jointly modeling longitudinal and event time data with application to acquired immunodeficiency syndrome. J Am Stat Assoc. 2001;96:895–905.

  17. Xu J, Zeger SL. Joint analysis of longitudinal data comprising repeated measures and times to events. J R Stat Soc Ser C (Applied Statistics). 2001;50:375–87.

  18. Sweeting MJ, Thompson SG. Joint modelling of longitudinal and time-to-event data with application to predicting abdominal aortic aneurysm growth and rupture. Biom J. 2011;53:750–63.

  19. Therneau TM, Grambsch PM. Modeling survival data: extending the Cox model. Springer Science & Business Media; 2013.

  20. Wu L. Mixed effects models for complex data. 1st ed. New York: Chapman and Hall/CRC; 2009. p. 431. https://0-doi-org.brum.beds.ac.uk/10.1201/9781420074086.

  21. Cox DR. Regression models and life-tables. J R Stat Soc Ser B. 1972;34:187–220.

  22. Laird NM, Ware JH. Random-effects models for longitudinal data. Biometrics. 1982;38(4):963–74.

  23. Little RJA, Rubin DB. Statistical analysis with missing data. Vol. 793. Wiley; 2019.

  24. Piessens R, de Doncker-Kapenga E, Überhuber CW, Kahaner DK. QUADPACK: a subroutine package for automatic integration. Springer Science & Business Media; 2012.

  25. Herzet C, Wautelet X, Ramon V, Vandendorpe L. Iterative synchronization: EM algorithm versus Newton-Raphson method. In: Proceedings of the 2006 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE; 2006.

  26. Venables WN, Smith DM, the R Development Core Team. An introduction to R. Vienna: R Foundation for Statistical Computing; 2006.

  27. Lunn DJ, Thomas A, Best N, Spiegelhalter D. WinBUGS: a Bayesian modelling framework: concepts, structure, and extensibility. Stat Comput. 2000;10:325–37.

  28. Cox DR, Oakes D. Analysis of survival data. Vol. 21. Taylor & Francis; 1984.

  29. Ngwa JS, Cabral HJ, Cheng DM, Gagnon DR, LaValley MP, Cupples LA. Generating survival times with time-varying covariates using the Lambert W function. Commun Stat Simul Comput. 2019. https://0-doi-org.brum.beds.ac.uk/10.1080/03610918.2019.1648822.

  30. Burton A, Altman DG, Royston P, Holder RL. The design of simulation studies in medical statistics. Stat Med. 2006;25:4279–92.

  31. Dafni UG, Tsiatis AA. Evaluating surrogate markers of clinical outcome when measured with error. Biometrics. 1998;54(4):1445–62.

  32. Bender R, Augustin T, Blettner M. Generating survival times to simulate Cox proportional hazards models. Stat Med. 2005;24:1713–23.


Acknowledgements

We thank Dr. Timothy Heeren and Dr. Michael Pencina for their input in the simulation analysis.

Funding

This work was supported by the National Heart, Lung and Blood Institute’s Framingham Heart Study (Contracts No. N01-HC-25195 and HHSN268201500001I) of the National Institutes of Health and Boston University School of Medicine. A portion of this research was conducted using Boston University Linux Cluster for Genetic Analysis (LinGA) funded by the National Center for Research Resources (NIH NCRR). The funders had no role in the design, data collection, and interpretation of this study, and the content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

Author information


Contributions

JSN and LAC conceived the study. The data were cleaned by JSN and LAC. JSN, HJC, DMC, DRG, MPL and LAC contributed to the design of the simulation scenarios implemented in the study. The analysis was carried out by JSN, and the manuscript was written by JSN. JSN, HJC, DMC, DRG, MPL and LAC contributed to reviewing and revising the final manuscript. All authors participated in the review and approval of the final manuscript.

Corresponding authors

Correspondence to Julius S. Ngwa or L. Adrienne Cupples.

Ethics declarations

Ethics approval and consent to participate

The research protocols of the FHS are reviewed annually by the Institutional Review Board of the Boston University Medical Center and by the Observational Studies Monitoring Board of the National Heart, Lung and Blood Institute. Since 1971, written consent has been obtained from participants before each examination. Information about the content of the Framingham Heart Study research examinations is presented to the participants at each examination cycle in the text of the corresponding consent form and in a discussion with a trained admitting coordinator at the beginning of the scheduled appointment. Information from every completed consent form is coded and recorded in a database. Questions regarding the ethical conduct of research are presented by FHS investigators to the FHS Ethics Advisory Board, which reviews, discusses, and makes recommendations regarding these questions.

Consent for publication

The manuscript does not contain any individual person’s data.

Competing interests

There were no competing interests in the conduct of this study.


Supplementary Information

Additional file 1. S1: Type I Errors for Link (Exponential and Weibull Distribution, N = 100). S2: Estimates and Confidence Intervals for Link (Weibull Distribution, N = 100). S3: Estimates and Confidence Intervals for Age (Weibull Distribution, N = 100). S4: Estimates and Confidence Intervals for Sex (Weibull Distribution, N = 100). S5: Bayesian Semi-Parametric Joint Modeling Exact Prior Distributions.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article


Cite this article

Ngwa, J.S., Cabral, H.J., Cheng, D.M. et al. Revisiting methods for modeling longitudinal and survival data: Framingham Heart Study. BMC Med Res Methodol 21, 29 (2021). https://0-doi-org.brum.beds.ac.uk/10.1186/s12874-021-01207-y
