Checking Fine and Gray subdistribution hazards model with cumulative sums of residuals


Abstract

Recently, Fine and Gray (J Am Stat Assoc 94:496–509, 1999) proposed a semi-parametric proportional regression model for the subdistribution hazard function, which has been used extensively for analyzing competing risks data. However, failure of model adequacy could lead to severe bias in parameter estimation, and only limited work has been done on checking the model assumptions. In this paper, we present a class of analytical methods and graphical approaches for checking the assumptions of Fine and Gray’s model. The proposed goodness-of-fit test procedures are based on cumulative sums of residuals, which validate the model in three aspects: (1) proportionality of the hazard ratio, (2) the linear functional form and (3) the link function. For each assumption test, we provide a \(p\)-value and a diagnostic plot against the null hypothesis, both obtained using a simulation-based approach. We also consider an omnibus test for overall evaluation against any model misspecification. The proposed tests perform well in simulation studies and are illustrated with two real data examples.


References

  • Andersen PK, Borgan Ø, Gill RD, Keiding N (1993) Statistical models based on counting processes. Springer-Verlag, New York

  • Dickson E, Grambsch P, Fleming T, Fisher LD, Langworthy A (1989) Prognosis in primary biliary cirrhosis: model for decision making. Hepatology 10:1–7

  • Fine JP, Gray RJ (1999) A proportional hazards model for the subdistribution of a competing risk. J Am Stat Assoc 94:496–509

  • Fleming TR, Harrington DP (1991) Counting processes and survival analysis. Wiley, New York

  • Kim H (2007) Cumulative incidence in competing risks data and competing risks regression analysis. Clin Cancer Res 13:559–565

  • Lau B, Cole S, Gange S (2009) Competing risk regression models for epidemiologic data. Am J Epidemiol 170:244–256

  • Lin DY, Wei LJ, Ying Z (1993) Checking the Cox model with cumulative sums of martingale-based residuals. Biometrika 80:557–572

  • Perme M, Andersen P (2008) Checking hazard regression models using pseudo-observations. Stat Med 27(25):5309–5328

  • Scheike TH, Zhang MJ (2008) Flexible competing risks regression modelling and goodness-of-fit. Lifetime Data Anal 14:464–483

  • Scheike TH, Zhang MJ, Gerds T (2008) Predicting cumulative incidence probability by direct binomial regression. Biometrika 95:205–220

  • Scrucca L, Santucci A, Aversa F (2007) Competing risk analysis using R: an easy guide for clinicians. Bone Marrow Transpl 40(4):381–387

  • Weisdorf D, Eapen M, Ruggeri A, Zhang M, Zhong X, Brunstein C, Ustun C, Rocha V, Gluckman E (2013) Alternative donor hematopoietic transplantation for patients older than 50 years with AML in first complete remission: unrelated donor and umbilical cord blood transplantation outcomes. Blood 122:302

  • Wolbers M, Koller M, Witteman J, Steyerberg E (2009) Prognostic models with competing risks: methods and application to coronary risk prediction. Epidemiology 20(4):555–561

  • Zhou B, Fine J, Laird G (2013) Goodness-of-fit test for proportional subdistribution hazards model. Stat Med 32(22):3804–3811

Author information

Corresponding author

Correspondence to Mei-Jie Zhang.

Appendix

Consider the following partial sums of residuals

$$\begin{aligned} B(t,x) = \sum _{i=1}^{n}\int _0^t f(x,\varvec{Z}_i,\varvec{v},p)w_i(u,\hat{G}_c)d\hat{M}^1_i(u), \end{aligned}$$

where \(f(x,\varvec{Z}_i,\varvec{v},p)=(\varvec{v}^\mathsf T \varvec{Z}_i)^p 1\!\!1\left( \varvec{v}^\mathsf T \varvec{Z}_i\le x\right) \). Here \(\varvec{v}\) is a vector with the same dimension as the covariates \(\varvec{Z}\), and \(p=0\) or \(1\). Under the null hypothesis that the FG model is valid,

$$\begin{aligned}&B(t,x)=\sum _{i=1}^{n}\int _0^t f(x,\varvec{Z}_i,\varvec{v},p)w_i(u,\hat{G}_c)\left\{ dN^1_i(u) -Y^1_i(u)\exp (\hat{\varvec{\beta }}^\mathsf T \varvec{Z}_i) d\hat{\varLambda }_{10}^*(u) \right\} \nonumber \\&=\sum _{i=1}^{n}\int _0^t f(x,\varvec{Z}_i,\varvec{v},p) w_i(u,\hat{G}_c)dM^1_i(u) \nonumber \\&\ \!-\!\sum _{i=1}^{n}\!\int _0^tf(x,\varvec{Z}_i,\varvec{v},p) w_i(u,\hat{G}_c)Y^1_i(u)\left\{ \exp (\hat{\varvec{\beta }}^\mathsf T \varvec{Z}_i) d\hat{\varLambda }_{10}^*(u)\!-\!\exp (\varvec{\beta }_0^\mathsf T \varvec{Z}_i)d\varLambda _{10}^*(u)\right\} \nonumber \\&= \sum _{i=1}^{n}\int _0^t f(x,\varvec{Z}_i,\varvec{v},p) w_i(u,\hat{G}_c)dM^1_i(u) \end{aligned}$$
(7)
$$\begin{aligned}&\ -\sum _{i=1}^{n}\int _0^t f(x,\varvec{Z}_i,\varvec{v},p) w_i(u,\hat{G}_c)Y^1_i(u)\exp (\varvec{\beta }_0^\mathsf T \varvec{Z}_i) \varvec{Z}_i^\mathsf T d\hat{\varLambda }_{10}^*(u) \left( \hat{\varvec{\beta }} -\varvec{\beta }_0\right) \end{aligned}$$
(8)
$$\begin{aligned}&\ -\sum _{i=1}^{n}\int _0^t f(x,\varvec{Z}_i,\varvec{v},p) w_i(u,\hat{G}_c)Y^1_i(u)\exp (\varvec{\beta }_0^\mathsf T \varvec{Z}_i) \left\{ d\hat{\varLambda }_{10}^*(u) - d\varLambda _{10}^*(u)\right\} \\&\ + o_p(1). \nonumber \end{aligned}$$
(9)

Taking the Taylor expansion of \(\varvec{U}(\hat{\varvec{\beta }},t)\) at \(\varvec{\beta }_0\), we obtain

$$\begin{aligned} n^{1/2}(\hat{\varvec{\beta }} -\varvec{\beta }_0) = \varvec{\varOmega }^{-1}\{n^{-1/2}\varvec{U}(\varvec{\beta }_0)\}+o_p(1), \end{aligned}$$

where \(\varvec{\varOmega }=\lim _{n\rightarrow \infty }\varvec{I}(\varvec{\beta }_0)/n\) and asymptotically \(\varvec{U}(\varvec{\beta }_0)\) can be expressed as a sum of \(n\) independent and identically distributed random variables, i.e. \(n^{-1/2}\varvec{U}(\varvec{\beta }_0)=n^{-1/2}\sum _{i=1}^n(\varvec{\eta _i}+\varvec{\psi _i})\,{+}\,o_p(1)\) (for explicit expressions of \(\varvec{\eta _i}\) and \(\varvec{\psi _i}\), see Fine and Gray (1999)). Since \(\varvec{\eta _i}\) contributes the majority of the variability, we call \(\varvec{\eta _i}\) the major term and \(\varvec{\psi _i}\) the minor term. Both \(\varvec{\eta _i}\) and \(\varvec{\psi _i}\) are zero-mean Gaussian processes. Therefore

$$\begin{aligned} n^{1/2}(\hat{\varvec{\beta }} -\varvec{\beta }_0)&= \varvec{\varOmega }^{-1}\left\{ n^{-1/2}\sum _{i=1}^n(\varvec{\eta _i}+\varvec{\psi _i})\right\} +o_p(1). \end{aligned}$$
(10)

Recall that

$$\begin{aligned} \hat{\varLambda }^{*}_{10}(t)=\sum _{i=1}^n \int _0^{t}\frac{w_i(u,\hat{G}_c)dN^1_i(u)}{S_0(\hat{\varvec{\beta }},u)}, \end{aligned}$$

therefore

$$\begin{aligned} n^{1/2}\left\{ \hat{\varLambda }_{10}^*(t)-\varLambda _{10}^*(t)\right\}&= n^{1/2}\sum _{i=1}^n\int _0^t\left\{ \frac{1}{S_0(\hat{\varvec{\beta }},u)}-\frac{1}{S_0(\varvec{\beta }_0,u)}\right\} w_i(u,\hat{G}_c)dN^1_i(u) \end{aligned}$$
(11)
$$\begin{aligned}&+n^{1/2}\left\{ \sum _{i=1}^n\int _0^t\frac{1}{S_0(\varvec{\beta }_0,u)}w_i(u,\hat{G}_c)dN^1_i(u)-\varLambda _{10}^*(t)\right\} . \end{aligned}$$
(12)

Furthermore, for (11), taking the Taylor expansion of \(1/S_0(\hat{\varvec{\beta }},u)\) at \(\varvec{\beta }_0\), we obtain

$$\begin{aligned} (11)=&-\sum _{i=1}^n\int _0^t\frac{\varvec{S}_1^\mathsf T (\varvec{\beta }_0,u)}{\{S_0(\varvec{\beta }_0,u)\}^2}w_i(u,\hat{G}_c)dN^1_i(u)n^{1/2}(\hat{\varvec{\beta }} -\varvec{\beta }_0)+o_p(1). \end{aligned}$$
(13)

For (12),

$$\begin{aligned} (12)&= n^{1/2}\sum _{i=1}^n\int _0^t\frac{w_i(u,\hat{G}_c)dM^1_i(u)}{S_0(\varvec{\beta }_0,u)}\\&=n^{1/2}\sum _{i=1}^n\!\int _0^t\frac{w_i(u,G_c)dM^1_i(u)}{S_0(\varvec{\beta }_0,u)} \!+\!n^{1/2}\!\!\sum _{i=1}^n\!\int _0^t\frac{\{w_i(u,\hat{G}_c)\!-\!w_i(u,G_c)\}dM^1_i(u)}{S_0(\varvec{\beta }_0,u)}\\&=n^{1/2}\sum _{i=1}^n\int _0^t\frac{w_i(u,G_c)dM^1_i(u)}{S_0(\varvec{\beta }_0,u)}\\&\quad -\,n^{1/2} \sum _{i=1}^n\int _0^{\infty }\frac{Q(u,t)}{\sum _{l=1}^n 1\!\!1(X_l\ge u)}dM_i^c(u)+o_p(1), \end{aligned}$$

where

$$\begin{aligned} Q(u,t)&= \sum _{j=1}^n\int _0^{t}\frac{1\!\!1(X_j\le u\le s)}{S_0(\varvec{\beta }_0,s)}w_j(s,\hat{G}_c)dM^1_j(s),\\ M_i^c(t)&= 1\!\!1(X_i\le t, \Delta _i=0)-\int _0^t1\!\!1(X_i\ge u)d\hat{\varLambda }^c(u),\\ \hat{\varLambda }^c(t)&= \int _0^t\frac{d\sum _{i=1}^n1\!\!1(X_i\le u, \Delta _i=0)}{\sum _{i=1}^n1\!\!1(X_i\ge u)}. \end{aligned}$$

Plugging (11) and (12) into (9), we have

$$\begin{aligned}&(9) = \nonumber \\&-\sum _{i=1}^{n}\int _0^t f(x,\varvec{Z}_i,\varvec{v},p) \frac{w_i(u,\hat{G}_c)Y^1_i(u) \exp (\varvec{\beta }_0^\mathsf T \varvec{Z}_i)}{ S_0(\varvec{\beta }_0,u)} \sum _{l=1}^nw_l(u,\hat{G}_c)dM^1_l(u) \end{aligned}$$
(14)
$$\begin{aligned}&+\sum _{i=1}^{n}\int _0^t f(x,\varvec{Z}_i,\varvec{v},p) w_i(u,\hat{G}_c) Y^1_i(u)\exp (\varvec{\beta }_0^\mathsf T \varvec{Z}_i) \frac{\varvec{S}_1^\mathsf T (\varvec{\beta }_0,u)}{S_0(\varvec{\beta }_0,u)} d\varLambda _{10}^*(u) \left( \hat{\varvec{\beta }}-\varvec{\beta }_0\right) \end{aligned}$$
(15)
$$\begin{aligned}&-\sum _{i=1}^{n}\int _0^{\infty } f(x,\varvec{Z}_i,\varvec{v},p) w_i(u,\hat{G}_c) Y^1_i(u)\exp (\varvec{\beta }_0^\mathsf T \varvec{Z}_i)\frac{\sum _{j=1}^nQ(u,t)dM_j^c(u)}{\sum _{l=1}^n1\!\!1(X_l\ge u)} \\&+o_p(1). \nonumber \end{aligned}$$
(16)

Assume the following asymptotic regularity conditions:

$$\begin{aligned}&n^{-1}\varvec{S}_k(\varvec{\beta },t)\mathop {\rightarrow }\limits ^{p}\varvec{s}_k(\varvec{\beta },t), \ \ Q(u,t)\mathop {\rightarrow }\limits ^{p}q(u,t),\\&\quad n^{-1}\sum _{l=1}^n1\!\!1(X_l\ge u)\mathop {\rightarrow }\limits ^{p} y(u) \quad \text{as } n\rightarrow \infty . \end{aligned}$$

Exchanging the order of summation in (14) and combining with (7), we have, as \(n\rightarrow \infty \),

$$\begin{aligned} (7)+(14) \!=\! n^{-1}\sum _{i=1}^{n}\int _0^t \left\{ f(x,\varvec{Z}_i,\varvec{v},p)-g(\varvec{\beta }_0,u,x)\right\} w_i(u,G_c)dM^1_i(u) + o_p(1), \end{aligned}$$

where

$$\begin{aligned} g(\varvec{\beta }_0,u,x)=\frac{1}{s_0(\varvec{\beta }_0,u)} \sum _{l=1}^n f(x,\varvec{Z}_l,\varvec{v},p)w_l(u,G_c) Y^1_l(u) \exp (\varvec{\beta }_0^\mathsf T \varvec{Z}_l). \end{aligned}$$

Combining (8) and (15), we have, as \(n\rightarrow \infty \),

$$\begin{aligned}&(8)+(15)= \nonumber \\&-\sum _{i=1}^{n}\int _0^t f(x,\varvec{Z}_i,\varvec{v},p) w_i(u,G_c) Y^1_i(u) \exp (\varvec{\beta }_0^\mathsf T \varvec{Z}_i) \left\{ \varvec{Z}_i\! -\! \frac{\varvec{s}_1(\varvec{\beta }_0,u)}{s_0(\varvec{\beta }_0,u)} \right\} ^\mathsf T \\&d\varLambda _{10}^*(u) \left( \hat{\varvec{\beta }}-\varvec{\beta }_0\right) + o_p(1)\\&= n^{-1}\sum _{i=1}^n \left\{ \varvec{C}^\mathsf T (\varvec{\beta }_0,t,x)\right\} \varvec{\varOmega }^{-1}(\varvec{\eta _i}+\varvec{\psi _i}) + o_p(1), \end{aligned}$$

where

$$\begin{aligned} \varvec{C}(\varvec{\beta }_0,t,x) = -\sum _{i=1}^{n}\int _0^t f(x,\varvec{Z}_i,\varvec{v},p) w_i(u,G_c) Y^1_i(u)\\ \exp (\varvec{\beta }_0^\mathsf T \varvec{Z}_i) \left\{ \varvec{Z}_i-\frac{\varvec{s}_1(\varvec{\beta }_0,u)}{s_0(\varvec{\beta }_0,u)}\right\} d\varLambda _{10}^*(u). \end{aligned}$$

Exchanging the order of summation in (16), we have, as \(n\rightarrow \infty \),

$$\begin{aligned} (16)\!=\!n^{-1}\sum _{i=1}^{n}\int _0^{\infty } \frac{q(u,t)}{y(u)}v(\varvec{\beta }_0,u,x)dM^c_i(u)\!+\!o_p(1) =n^{-1}\sum _{i=1}^{n} q_i^{(f)}(t,x) \!+\!o_p(1) \end{aligned}$$

where

$$\begin{aligned} v(\varvec{\beta }_0,u,x)=\sum _{j=1}^{n}f(x,\varvec{Z}_j,\varvec{v},p) w_j(u,\hat{G}_c) Y^1_j(u)\exp (\varvec{\beta }_0^\mathsf T \varvec{Z}_j). \end{aligned}$$

So

$$\begin{aligned} B(t,x)=n^{-1} \sum _{i=1}^n \left\{ W_{i}(t,x)\right\} +o_p(1), \end{aligned}$$

where

$$\begin{aligned} W_{i}(t,x)&=\int _0^t \left\{ f(x,\varvec{Z}_i,\varvec{v},p)-g(\varvec{\beta }_0,u,x)\right\} w_i(u,{G}_c)dM^1_i(u)\nonumber \\&+\varvec{C}^\mathsf{T }(\varvec{\beta }_0,t,x) \varvec{\varOmega }^{-1} (\varvec{\eta _i}+\varvec{\psi _i}) - q_i^{(f)}(t,x), \end{aligned}$$
(17)

which can be consistently estimated by the plug-in estimators.
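
The null distribution of \(B(t,x)\) is analytically intractable, but it can be approximated by a simulation-based approach in the spirit of Lin et al. (1993): conditional on the data, the estimated i.i.d. terms \(\hat{W}_i\) are perturbed by independent standard normal multipliers to generate realizations of the limiting process. The following is a minimal NumPy sketch of such multiplier resampling, not the authors' implementation; the array W_hat (plug-in estimates of \(W_i(t,x)\) evaluated on a grid) is a hypothetical input assumed to be available from a fitted Fine–Gray model.

```python
import numpy as np

def multiplier_resampling_test(W_hat, B_obs=None, n_sim=1000, seed=0):
    """Sup-type goodness-of-fit test for a cumulative-residual process,
    calibrated by Gaussian multiplier resampling.

    W_hat : (n, K) array of plug-in estimates of W_i evaluated on a grid of
            K (time, covariate) points -- hypothetical input.
    B_obs : (K,) observed process; if None, the i.i.d. representation
            B ~ n^{-1} sum_i W_i is used as a stand-in.
    Returns the observed sup statistic and a resampling p-value.
    """
    rng = np.random.default_rng(seed)
    n, _ = W_hat.shape
    if B_obs is None:
        B_obs = W_hat.sum(axis=0) / n        # plug-in version of B(t, x)
    sup_obs = np.abs(B_obs).max()
    sup_star = np.empty(n_sim)
    for b in range(n_sim):
        g = rng.standard_normal(n)           # independent N(0,1) multipliers
        B_star = (g @ W_hat) / n             # perturbed realization of B(t, x)
        sup_star[b] = np.abs(B_star).max()
    p_value = float(np.mean(sup_star >= sup_obs))
    return sup_obs, p_value
```

The resampled curves B_star can also be plotted together with the observed process for the graphical checks described in the text, while the sup statistic provides the corresponding \(p\)-value.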

1.1 Testing the proportional subdistribution hazards assumption

To check the proportional subdistribution hazards assumption, we consider the score process \(U_j(\hat{\varvec{\beta }},t)\) for each covariate, which can be written as

$$\begin{aligned} U_j(\hat{\varvec{\beta }},t) = \sum _{i=1}^{n}\int _0^t Z_{ij} w_i(u,\hat{G}_c)d\hat{M}^1_i(u), \, j=1,\ldots ,m. \end{aligned}$$

It is a special case of the general form \(B(t,x)\) with \(x=\infty \), \(p=1\), and \(\varvec{v}\) having \(1\) in the \(j\)th element and \(0\) elsewhere. Under the null hypothesis,

$$\begin{aligned} U_j(\hat{\varvec{\beta }},t)&= n^{-1}\sum _{i=1}^{n}\left[ \int _0^t \left\{ Z_{ij}-\frac{s_{1j}(\varvec{\beta }_0,u)}{s_0(\varvec{\beta }_0,u)}\right\} w_i(u,{G}_c) dM^1_i(u)\right. \\&+\, \left. \varvec{C}_j^\mathsf{T }(t)\varvec{\varOmega }^{-1}(\varvec{\eta }_i + \varvec{\psi }_i) - q_i^{(f)}(t,x) \right] +o_p(1), \end{aligned}$$

where \(s_{1j}(\varvec{\beta }_0,u)\) is the \(j\)th element of \(\varvec{s}_1(\varvec{\beta }_0,u)\) and

$$\begin{aligned} \varvec{C}_j(t) = -\sum _{i=1}^{n}\int _0^tZ_{ij}w_i(u,{G}_c)Y^1_i(u)\exp (\varvec{\beta }_0^\mathsf{T }\varvec{Z}_i)\left\{ \varvec{Z}_i-\frac{\varvec{s}_1(\varvec{\beta }_0,u)}{s_0(\varvec{\beta }_0,u)} \right\} d\varLambda _{10}^*(u). \end{aligned}$$

In practice, we standardize the process by multiplying it by \(I_{jj}^{-1}(\hat{\varvec{\beta }})\), and denote \(B^{(p)}_j(t)=I_{jj}^{-1}(\hat{\varvec{\beta }})U_j(\hat{\varvec{\beta }},t)\) in the text.
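
To make the construction concrete, the following sketch accumulates \(U_j(\hat{\varvec{\beta }},t)\) over an ordered grid of event times and standardizes it. It is an illustration of the formula above under stated assumptions, not the authors' code; Z_j, w_dM (the weighted residual increments on the grid) and I_jj are hypothetical inputs taken from the fitted model.

```python
import numpy as np

def standardized_score_process(Z_j, w_dM, I_jj):
    """Standardized score process B_j^(p)(t) = I_jj^{-1} U_j(beta_hat, t).

    Z_j  : (n,) values of the j-th covariate.
    w_dM : (n, T) weighted residual increments on an ordered grid of T
           event times (hypothetical plug-in input).
    I_jj : (j, j) diagonal element of the information matrix I(beta_hat).
    """
    # U_j(beta_hat, t_k) = sum_i Z_ij * (cumulative residual of subject i up to t_k)
    U_j = Z_j @ np.cumsum(w_dM, axis=1)      # shape (T,)
    return U_j / I_jj
```

The resulting process is plotted against time and calibrated with the multiplier resampling sketched above.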

1.2 Testing the linear functional form

To test the linear functional form for the \(j\)th covariate, we consider

$$\begin{aligned} B^{(f)}_j(x) = \sum _{i=1}^{n}\int _0^{\infty } 1\!\!1\left\{ Z_{ij} \le x\right\} w_i(u,\hat{G}_c) d\hat{M}^1_i(u), \,j=1,\ldots ,m \end{aligned}$$

which is a special case of the general form \(B(t,x)\) when \(t=\infty \), \(p=0\), and \(\varvec{v}\) has \(1\) in the \(j\)th element and \(0\) elsewhere. Under the null hypothesis,

$$\begin{aligned} B^{(f)}_j(x) =&\ n^{-1}\sum _{i=1}^n \left[ \int _0^{\infty } \left\{ 1\!\!1(Z_{ij}\le x) - g_j(u,x)\right\} w_i(u,{G}_c)dM_i(u)\right. \\&\left. +\, \varvec{C}_j^\mathsf{T }(x) \varvec{\varOmega }^{-1}(\varvec{\eta }_i + \varvec{\psi }_i) - q_i^{(f)}(t,x)\right] + o_p(1), \end{aligned}$$

where

$$\begin{aligned} g_j(u,x)&= \frac{1}{s_0(\varvec{\beta }_0,u)}\sum _{l=1}^n 1\!\!1(Z_{lj} \le x) w_l(u,{G}_c) Y^1_l(u)\exp (\varvec{\beta }_0^\mathsf{T } \varvec{Z}_l),\\ \varvec{C}_j(x)&= -\sum _{l=1}^{n} \int _0^{\infty } 1\!\!1(Z_{lj} \le x) w_l(u,{G}_c) Y^1_l(u)\\&\exp (\varvec{\beta }_0^\mathsf{T } \varvec{Z}_l) \left\{ \varvec{Z}_l - \frac{\varvec{s}_1(\varvec{\beta }_0,u)}{s_0(\varvec{\beta }_0,u)}\right\} d\varLambda _{10}^*(u). \end{aligned}$$
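
Numerically, since \(t=\infty \), the process \(B^{(f)}_j(x)\) is simply the cumulative sum, over subjects ordered by \(Z_{ij}\), of each subject's total weighted residual. A hedged sketch follows; the per-subject totals resid, i.e. \(\int _0^{\infty } w_i(u,\hat{G}_c)d\hat{M}^1_i(u)\), are hypothetical inputs assumed to be available from the fitted model.

```python
import numpy as np

def functional_form_process(Z_j, resid):
    """B_j^(f)(x) evaluated at the observed covariate values.

    Z_j   : (n,) values of the j-th covariate.
    resid : (n,) total weighted residual of each subject (hypothetical
            plug-in quantities from the fitted Fine-Gray model).
    Returns the sorted covariate values and the cumulative-residual process.
    """
    order = np.argsort(Z_j)
    x_grid = Z_j[order]
    B_f = np.cumsum(resid[order])    # sum of residuals of subjects with Z_ij <= x
    return x_grid, B_f
```

Plotting B_f against x_grid together with resampled null curves gives the graphical functional-form check, and \(\max |B^{(f)}_j|\) combined with the multiplier resampling above yields a \(p\)-value.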

1.3 Testing the link function

To test the link function, we consider

$$\begin{aligned} B^{(l)}(x) = \sum _{i=1}^{n}\int _0^{\infty } 1\!\!1\left\{ \hat{\varvec{\beta }}^\mathsf{T } \varvec{Z}_{i} \le x\right\} w_i(u,\hat{G}_c) d\hat{M}^1_i(u). \end{aligned}$$

In this case, \(t=\infty \), \(p=0\) and \(\varvec{v}=\hat{\varvec{\beta }}\). Under the null hypothesis, the test statistic has exactly the same form as the statistic for the linear functional form, except that the indicator function is replaced by \(1\!\!1\left\{ \hat{\varvec{\beta }}^\mathsf{T } \varvec{Z}_{i} \le x\right\} \). Note that if there is only one covariate, checking the link function is equivalent to checking the functional form of that covariate.
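
Because only the ordering variable changes, the same cumulative-sum construction can be reused for the link-function check; a small sketch with the same hypothetical inputs (Z, beta_hat, resid):

```python
import numpy as np

def link_function_process(Z, beta_hat, resid):
    """B^(l)(x): cumulative sum of total weighted residuals, with subjects
    ordered by the estimated linear predictor (hypothetical inputs)."""
    lp = Z @ beta_hat                # (n,) linear predictors beta_hat' Z_i
    order = np.argsort(lp)
    return lp[order], np.cumsum(resid[order])
```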

1.4 Omnibus test

Here we consider

$$\begin{aligned} \varvec{B}^{(o)}(t,\varvec{z}) = \sum _{i=1}^{n}\int _0^t 1\!\!1\left\{ \varvec{Z}_{i} \le \varvec{z} \right\} w_i(u,\hat{G}_c)d\hat{M}^1_i(u). \end{aligned}$$

When the \(j\)th covariate is of interest for testing, the \(j\)th element of \(\varvec{B}^{(o)}(t,\varvec{z})\) is the statistic we use,

$$\begin{aligned} B^{(o)}_j(t,x) = \sum _{i=1}^{n}\int _0^t 1\!\!1\left\{ Z_{ij} \le x \right\} w_i(u,\hat{G}_c)d\hat{M}^1_i(u), \end{aligned}$$

which is a special case of the general form \(B(t,x)\) when \(p=0\) and \(\varvec{v}\) has \(1\) in the \(j\)th element and \(0\) elsewhere. The omnibus test can also be viewed as recording each linear functional form test over the time span. Under the null hypothesis,

$$\begin{aligned} B^{(o)}_j(t,x)&= n^{-1}\sum _{i=1}^{n} \left[ \int _0^t \{1\!\!1\left( Z_{ij} \le x\right) - g_j(u,x)\} w_i(u,{G}_c)dM_i(u)\right. \\&\left. + \varvec{C}_j^\mathsf{T }(t,x) \varvec{\varOmega }^{-1}(\varvec{\eta }_i+\varvec{\psi }_i) - q_i^{(f)}(t,x)\right] +o_p(1), \end{aligned}$$

where

$$\begin{aligned} g_j(u,x)&= \frac{1}{s_0(\varvec{\beta }_0,u)}\sum _{l=1}^n 1\!\!1\left( Z_{lj} \le x\right) w_l(u,{G}_c)Y^1_l(u) \exp (\varvec{\beta }_0^\mathsf{T } \varvec{Z}_l),\\ \varvec{C}_j(t,x)&= -\sum _{l=1}^{n} \int _0^{t} 1\!\!1\left( Z_{lj} \le x \right) w_l(u,{G}_c) Y^1_l(u)\\&\quad \exp (\varvec{\beta }_0^\mathsf{T } \varvec{Z}_l) \left\{ \varvec{Z}_l - \frac{\varvec{s}_1(\varvec{\beta }_0,u)}{s_0(\varvec{\beta }_0,u)}\right\} d\varLambda _{10}^*(u). \end{aligned}$$
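
For the omnibus check, the process is indexed jointly by time and the covariate value, and a natural summary is the supremum over both. A sketch with hypothetical plug-in inputs (Z_j and the weighted residual increments w_dM on a time grid, as before):

```python
import numpy as np

def omnibus_process(Z_j, w_dM):
    """B_j^(o)(t, x) on the grid of observed covariate values and event times.

    Z_j  : (n,) values of the j-th covariate.
    w_dM : (n, T) weighted residual increments on an ordered time grid
           (hypothetical plug-in quantities).
    Returns an (n, T) array whose (k, r) entry is B_j^(o)(t_r, x_(k+1)),
    where x_(k+1) is the (k+1)-th order statistic of Z_j.
    """
    order = np.argsort(Z_j)
    cum_time = np.cumsum(w_dM[order], axis=1)  # subject residuals up to t_r
    B_o = np.cumsum(cum_time, axis=0)          # add subjects with Z_ij <= x
    return B_o
```

The omnibus statistic is then \(\sup _{t,x}|B^{(o)}_j(t,x)|\), calibrated with the multiplier resampling sketched earlier.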


Cite this article

Li, J., Scheike, T.H. & Zhang, MJ. Checking Fine and Gray subdistribution hazards model with cumulative sums of residuals. Lifetime Data Anal 21, 197–217 (2015). https://doi.org/10.1007/s10985-014-9313-9
