
Overview

We consider one- and two-sided hypothesis testing using a group sequential design with a possibly non-constant treatment effect. This can be useful in situations such as an assumed non-proportional hazards model or the use of weighted logrank tests for a time-to-event endpoint. Asymptotic distributional assumptions for this document have been laid out in the vignette Non-Proportional Effect Size in Group Sequential Design. In general, we assume \(K\ge 1\) analyses with statistical information \(\mathcal{I}_k\) and information fraction \(t_k=\mathcal{I}_k/\mathcal{I}_K\) at analysis \(k\), \(1\le k\le K\). We denote the null hypothesis \(H_{0}\): \(\theta(t)=0\) and an alternate hypothesis \(H_1\): \(\theta(t)=\theta_1(t)\) for \(t> 0\), where \(t\) represents the information fraction for a study. While a study is planned to stop at information fraction \(t=1\), we define \(\theta(t)\) for \(t>0\) since a trial can overrun its planned statistical information at the final analysis. As before, we use a shorthand notation in which \(\theta\) represents \(\theta()\), \(\theta=0\) represents \(\theta(t)\equiv 0\) for all \(t\), and \(\theta_1\) represents \(\theta_1(t_k)\), the effect size at analysis \(k\), \(1\le k\le K\).

For our purposes, \(H_0\) will represent no treatment difference, but it could represent a non-inferiority hypothesis. Recall that we assume \(K\) analyses and bounds \(-\infty \le a_k< b_k\le \infty\) for \(1\le k < K\) and \(-\infty \le a_K\le b_K<\infty\). We denote the probability of crossing the upper boundary at analysis \(k\) without previously crossing a bound by

\[\alpha_{k}(\theta)=P_{\theta}(\{Z_{k}\geq b_{k}\}\cap_{j=1}^{k-1}\{a_{j}\le Z_{j}< b_{j}\}),\] \(k=1,2,\ldots,K.\) The total probability of crossing an upper bound prior to crossing a lower bound is denoted by

\[\alpha(\theta)\equiv\sum_{k=1}^K\alpha_k(\theta).\] We denote the probability of crossing a lower bound at analysis \(k\) without previously crossing any bound by

\[\beta_{k}(\theta)=P_{\theta}(\{Z_{k}< a_{k}\}\cap_{j=1}^{k-1}\{ a_{j}\le Z_{j}< b_{j}\}).\]
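
As a concrete illustration, these crossing probabilities can be computed from the canonical asymptotic joint distribution of \(Z_1,\ldots,Z_K\), which has \(E(Z_k)=\theta(t_k)\sqrt{\mathcal{I}_k}\) and \(\hbox{Cov}(Z_j,Z_k)=\sqrt{t_j/t_k}\) for \(j\le k\). The minimal sketch below uses mvtnorm::pmvnorm() for this; the helper names (upper_crossing(), lower_crossing()) and the numerical bounds are illustrative assumptions, not functions or defaults from this package.

```r
# Illustrative sketch (not package code): boundary crossing probabilities
# from the canonical joint distribution of (Z_1, ..., Z_K) with
# E(Z_k) = theta(t_k) * sqrt(I_k) and Cov(Z_j, Z_k) = sqrt(t_j / t_k), j <= k.
library(mvtnorm)

# alpha_k(theta): cross the upper bound at analysis k without crossing an earlier bound
upper_crossing <- function(k, a, b, theta, info) {
  t <- info / info[length(info)]                    # information fractions
  corr <- outer(t, t, function(x, y) sqrt(pmin(x, y) / pmax(x, y)))
  mean_z <- theta * sqrt(info)
  if (k == 1) return(pnorm(b[1], mean = mean_z[1], lower.tail = FALSE))
  pmvnorm(lower = c(a[1:(k - 1)], b[k]),
          upper = c(b[1:(k - 1)], Inf),
          mean  = mean_z[1:k],
          sigma = corr[1:k, 1:k])[1]
}

# beta_k(theta): cross the lower bound at analysis k without crossing an earlier bound
lower_crossing <- function(k, a, b, theta, info) {
  t <- info / info[length(info)]
  corr <- outer(t, t, function(x, y) sqrt(pmin(x, y) / pmax(x, y)))
  mean_z <- theta * sqrt(info)
  if (k == 1) return(pnorm(a[1], mean = mean_z[1]))
  pmvnorm(lower = c(a[1:(k - 1)], -Inf),
          upper = c(b[1:(k - 1)], a[k]),
          mean  = mean_z[1:k],
          sigma = corr[1:k, 1:k])[1]
}

# Example with arbitrary bounds and information at K = 3 analyses, under H0
info <- c(10, 20, 30)
b <- c(3.47, 2.45, 2.00)                            # efficacy bounds
a <- c(-1, 0, 2.00)                                 # futility bounds
sum(sapply(1:3, upper_crossing, a = a, b = b, theta = rep(0, 3), info = info))  # alpha(0)
```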

Efficacy bounds \(b_k\), \(1\le k\le K\), for a group sequential design will be derived to control Type I error at some level \(\alpha=\alpha(0)\).

Lower bounds \(a_k\), \(1\le k\le K\), may be used to control boundary crossing probabilities under the null hypothesis (2-sided testing), the alternate hypothesis, or some other hypothesis (futility testing).

Thus, we may consider up to 3 values of \(\theta(t)\):

  • under the null hypothesis \(\theta_0(t)=0\) for computing efficacy bounds,
  • under a value \(\theta_1(t)\) for computing lower bounds, and
  • under a value \(\theta_a(t)\) for computing sample size or power.

We refer to the information under these 3 assumptions as \(\mathcal{I}^{(0)}(t)\), \(\mathcal{I}^{(1)}(t)\), and \(\mathcal{I}^{(a)}(t)\), respectively. Often we will assume \(\mathcal{I}(t)=\mathcal{I}^{(0)}(t)=\mathcal{I}^{(1)}(t)=\mathcal{I}^{(a)}(t).\)

We note that information may differ under different values of \(\theta(t)\). For fixed designs, sample size computations commonly use different variances under the null and alternate hypotheses.

Two-sided testing and design

We denote an alternative \(H_{1}\): \(\theta(t)=\theta_1(t)\); we will always assume \(H_1\) for power calculations and sometimes will use \(H_1\) for controlling lower boundary \(a_k\) crossing probabilities. A value of \(\theta(t)>0\) will reflect a positive benefit. We will not restrict the alternate hypothesis to \(\theta_1(t)>0\) for all \(t\). The value of \(\theta(t)\) will be referred to as the (standardized) treatment effect at information fraction \(t\).

We assume there is interest in stopping early if there is good evidence to reject one hypothesis in favor of the other.

If \(a_k= -\infty\) at analysis \(k\) for some \(1\le k\le K\), then the alternate hypothesis cannot be rejected at analysis \(k\); i.e., there is no futility bound at analysis \(k\). For \(k=1,2,\ldots,K\), the trial is stopped at analysis \(k\) to reject \(H_0\) if \(a_j<Z_j< b_j\), \(j=1,2,\dots,k-1\), and \(Z_k\geq b_k\). If the trial continues until stage \(k\) without crossing a bound and \(Z_k< a_k\), then \(H_1\) is rejected in favor of \(H_0\), \(k=1,2,\ldots,K\). Note that if \(a_K< b_K\) there is the possibility of completing the trial without rejecting \(H_0\) or \(H_1\).
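
The stopping rule above is straightforward to state as code. The sketch below applies it to a vector of observed test statistics and reports the analysis at which a bound is crossed, if any; the function name gs_decision() is an illustrative assumption, not part of the package.

```r
# Illustrative sketch (not package code): apply the stopping rule to observed
# test statistics z_1, ..., z_k given lower bounds a and upper bounds b.
gs_decision <- function(z, a, b) {
  for (k in seq_along(z)) {
    if (z[k] >= b[k]) return(list(analysis = k, decision = "reject H0 (efficacy)"))
    if (z[k] < a[k])  return(list(analysis = k, decision = "reject H1 (futility)"))
  }
  # No bound crossed: continue if interim, otherwise neither hypothesis is rejected
  list(analysis = length(z), decision = "no bound crossed")
}

gs_decision(z = c(1.2, 2.6), a = c(-1, 0, 2.00), b = c(3.47, 2.45, 2.00))
```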

Haybittle-Peto and spending bounds

The recursive algorithm of the previous section allows computation of both spending bounds and Haybittle-Peto bounds. For a Haybittle-Peto efficacy bound, one would normally set \(b_k=\Phi^{-1}(1-\epsilon)\) for \(k=1,2,\ldots,K-1\) and some small \(\epsilon>0\) such as \(\epsilon= 0.001\) which yields \(b_k=3.09\). While the original proposal was to use \(b_K=\Phi^{-1}(1-\alpha)\) at the final analysis, to fully control one-sided Type I error at level \(\alpha\) we suggest computing the final bound \(b_K\) using the above algorithm so that \(\alpha(0)=\alpha\).
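
A minimal sketch of this suggestion, reusing the illustrative upper_crossing() helper defined earlier: fix the interim bounds at \(\Phi^{-1}(1-0.001)=3.09\) and solve for the final bound so that the total upper boundary crossing probability under \(H_0\) equals \(\alpha\). The root finder here stands in for the recursive algorithm, and the information values are arbitrary.

```r
# Illustrative sketch: Haybittle-Peto interim bounds with the final bound
# solved so that alpha(0) = alpha; non-binding futility (a_k = -Inf).
alpha <- 0.025
info <- c(10, 20, 30)
K <- length(info)
b_interim <- rep(qnorm(1 - 0.001), K - 1)     # 3.09 at each interim analysis

total_alpha <- function(b_final) {
  b <- c(b_interim, b_final)
  sum(sapply(1:K, upper_crossing,
             a = rep(-Inf, K), b = b, theta = rep(0, K), info = info))
}

# Root finding stands in for the recursive algorithm described above
b_K <- uniroot(function(x) total_alpha(x) - alpha, c(1, 4))$root
b_K                                           # slightly above qnorm(1 - alpha) = 1.96
```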

Bounds computed with spending \(\alpha_k(0)\) at analysis \(k\) can be computed by using equation (9) for \(b_1\); for \(k=2,\ldots,K\), the algorithm of the previous section is then used. As noted by Jennison and Turnbull (2000), when determined under the null hypothesis, \(b_1,\ldots,b_K\) depend only on \(t_k\) and \(\alpha_k(0)\), with no dependence on \(\mathcal{I}_k\), \(k=1,\ldots,K\). When computing bounds based on \(\beta_k(\theta)\), \(k=1,\ldots,K\), where some \(\theta(t_k)\neq 0\), there is an additional dependency: \(a_k\) depends not only on \(t_k\) and \(b_k\), \(k=1,\ldots,K\), but also on the final total information \(\mathcal{I}_K\). Thus, a spending bound under something other than the null hypothesis needs to be recomputed each time \(\mathcal{I}_K\) changes, whereas it only needs to be computed once when \(\theta(t_k)=0\), \(k=1,\ldots,K\).
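
Under the null hypothesis, such a spending bound can also be obtained by one-dimensional root finding at each analysis, solving for \(b_k\) so that the incremental upper crossing probability equals \(\alpha_k(0)\). The sketch below, again reusing the illustrative upper_crossing() helper with arbitrary spending values, is a stand-in for the recursive integration algorithm referenced above.

```r
# Illustrative sketch: derive bounds from a spending sequence alpha_k(0)
# under H0 by solving for b_k one analysis at a time (non-binding lower bounds).
spend <- c(0.005, 0.010, 0.010)               # arbitrary alpha spent at each analysis
info <- c(10, 20, 30)
K <- length(info)
b <- rep(NA_real_, K)

for (k in 1:K) {
  b[k] <- uniroot(function(x) {
    bb <- b; bb[k] <- x
    upper_crossing(k, a = rep(-Inf, K), b = bb,
                   theta = rep(0, K), info = info) - spend[k]
  }, interval = c(0, 6))$root
}
b   # under H0, depends only on the t_k and alpha_k(0), not on I_k
```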

Bounds based on boundary families

Assume constants \(b_1^*,\ldots,b_K^*\) and a total targeted one-sided Type I error \(\alpha\). We wish to find \(C_u\) as a function of \(t_1,\ldots,t_K\) such that if \(b_k=C_ub_k^*\) then \(\alpha(0)=\alpha.\) Thus, the problem is to solve for \(C_u\). If \(a_k\), \(k=1,2,\ldots,K\), are fixed, then this is a simple root-finding problem. Since one normally uses non-binding efficacy bounds, it will normally be the case that \(a_k=-\infty\), \(k=1,\ldots,K\), for this problem.
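
A minimal sketch of this root-finding problem, again with the illustrative upper_crossing() helper and arbitrary boundary family constants \(b_k^*\):

```r
# Illustrative sketch: scale a boundary family b_k_star by C_u so that
# alpha(0) = alpha, with non-binding lower bounds (a_k = -Inf).
b_star <- c(2, 1.5, 1)                        # arbitrary boundary family constants
alpha <- 0.025
info <- c(10, 20, 30)
K <- length(info)

alpha_total <- function(Cu) {
  sum(sapply(1:K, upper_crossing, a = rep(-Inf, K), b = Cu * b_star,
             theta = rep(0, K), info = info))
}

Cu <- uniroot(function(x) alpha_total(x) - alpha, c(0.5, 5))$root
Cu * b_star                                   # efficacy bounds b_1, ..., b_K
```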

Now we assume constants \(a_k^*\) and wish to find \(C_l\) such that if \(a_k=C_la_k^*+\theta(t_k)\sqrt{\mathcal{I}_k}\) for \(k=1,\ldots,K\) then \(\beta(\theta)=\beta\). If we use the constant upper bounds from the previous paragraph, finding \(C_l\) is a simple root-finding problem.
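
Continuing the sketch above, \(C_l\) can be found the same way using the lower boundary crossing probabilities; the effect sizes \(\theta(t_k)\) and family constants \(a_k^*\) below are arbitrary illustrations.

```r
# Illustrative sketch, continuing from above: scale a_k_star by C_l so that
# the total lower boundary crossing probability under theta equals beta.
a_star <- c(-2, -1.5, -1)                     # arbitrary lower boundary family constants
beta <- 0.1
theta <- c(0.1, 0.2, 0.25)                    # arbitrary standardized effects theta(t_k)
b <- Cu * b_star                              # efficacy bounds from the previous sketch

beta_total <- function(Cl) {
  a <- Cl * a_star + theta * sqrt(info)
  sum(sapply(1:K, lower_crossing, a = a, b = b, theta = theta, info = info))
}

Cl <- uniroot(function(x) beta_total(x) - beta, c(0.1, 5))$root
Cl * a_star + theta * sqrt(info)              # futility bounds a_1, ..., a_K
```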

For 2-sided symmetric bounds with \(a_k=-b_k\), \(k=1,\ldots,K\), we only need to solve for \(C_u\) and again use simple root finding.
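
A sketch of the symmetric case under the same illustrative setup: by symmetry under \(H_0\), the total two-sided error is twice the upper boundary crossing probability when \(a_k=-b_k\), so a single root-finding call in \(C_u\) suffices.

```r
# Illustrative sketch: symmetric two-sided bounds a_k = -b_k; by symmetry
# under H0 the total two-sided error is twice the upper crossing probability.
two_sided_error <- function(Cu) {
  b <- Cu * b_star
  2 * sum(sapply(1:K, upper_crossing, a = -b, b = b,
                 theta = rep(0, K), info = info))
}

Cu_sym <- uniroot(function(x) two_sided_error(x) - 0.05, c(0.5, 5))$root
Cu_sym * b_star                               # symmetric bounds +/- b_k
```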

At this point, we do not solve for this type of bound for asymmetric upper and lower bounds.

Sample size

For sample size, we assume \(t_k\) and \(\theta(t_k)\), \(k=1,\ldots,K\), are fixed. We assume \(\beta(\theta)\) is decreasing as \(\mathcal{I}_K\) increases. This will automatically be the case when \(\theta(t_k)>0\), \(k=1,\ldots,K\), and in many other cases. Thus, the required information is found by searching for the value of \(\mathcal{I}_K\) for which \(\alpha(\theta)\) equals the targeted power.
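
A minimal sketch of this search, reusing the illustrative upper_crossing() helper with fixed information fractions, fixed \(\theta(t_k)\), and arbitrary efficacy bounds (futility bounds are ignored here for simplicity):

```r
# Illustrative sketch: search for the total information I_K giving the
# targeted power, with fixed information fractions t_k and effects theta(t_k).
t_frac <- c(1 / 3, 2 / 3, 1)                  # fixed information fractions
theta <- c(0.1, 0.2, 0.25)                    # fixed standardized effects theta(t_k)
b <- c(3.71, 2.51, 1.99)                      # arbitrary efficacy bounds
target_power <- 0.9

power_at <- function(info_K) {
  info <- t_frac * info_K
  sum(sapply(1:3, upper_crossing,
             a = rep(-Inf, 3), b = b, theta = theta, info = info))
}

info_K <- uniroot(function(x) power_at(x) - target_power, c(1, 5000))$root
info_K                                        # total information I_K for 90% power
```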

References

Jennison, Christopher, and Bruce W. Turnbull. 2000. Group Sequential Methods with Applications to Clinical Trials. Boca Raton, FL: Chapman & Hall/CRC.