4 Designs for single experimental therapies including randomisation


Sarah Brown


The designs included in this chapter incorporate randomisation to a control arm. In some, randomisation is intended to support a formally powered statistical comparison between the experimental and control arms; in others, it primarily provides a calibration arm, with no formally powered statistical comparison. The distinction between these approaches is noted for each design listed.


4.1 One-stage designs


4.1.1 Binary outcome measure


Herson and Carter (1986)



  • One-stage, binary outcome
  • No formally powered statistical comparison between arms
  • Requires programming

Herson and Carter consider the inclusion of a randomised calibration group in single-stage phase II trials with a binary endpoint, in order to reduce the risk of false-negative decision-making. Patients are randomised between current standard treatment (the calibration group) and the treatment under investigation. Results in the calibration group are intended largely to assess the credibility of the outcome in the experimental arm, rather than for formal comparative purposes. Decision criteria are based primarily on the experimental arm results; however, outcomes in the calibration arm are also considered, to check the initial assumptions made regarding the current standard treatment. The trial therefore essentially constitutes two separate designs, one for the experimental arm and one for the calibration arm. Because the control arm results are also assessed, the overall sample size of the trial may be between three and five times that of a non-calibrated design. An example is provided; however, the design will require programming.


Thall and Simon (1990)



  • One-stage, binary outcome
  • No formally powered statistical comparison between arms
  • Requires programming

Thall and Simon outline a design that incorporates historical control data, including their variability, into the design of the trial. The proportion of patients randomised to a control arm depends on the amount of historical control data available, the degree of both inter-study and intra-study variability, and the overall sample size of the phase II study being planned (following formulae provided). Including a sample of patients randomised to a control arm maximises the precision, relative to control, of the response rate estimate in the experimental arm at the end of the trial. Sample size is determined iteratively, and the design would need to be programmed to allow implementation.


Stone et al. (2007b)



  • One-stage, binary outcome
  • Formally powered statistical comparison between arms
  • Standard software available

Stone et al. discuss the use of progressive disease rate at a given time point (as well as overall progression-free survival) as an outcome measure in randomised phase II trials of cytostatic agents. A formal comparison between the experimental and control treatments is performed for superiority; however, larger type I error rates than would be used in phase III are incorporated, and large treatment effects are targeted. The use of relaxed type I errors and large targeted treatment effects contributes to sample sizes smaller than those of phase III trials, which may therefore be deemed more realistic for phase II.
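
As a rough illustration of why a relaxed type I error and a large targeted effect shrink the required sample size, the following Python sketch applies the standard normal-approximation formula for comparing two proportions. The progressive disease rates, one-sided alpha and power shown are assumptions chosen for illustration, not values taken from Stone et al.

from statistics import NormalDist
from math import ceil

def n_per_arm(p_control: float, p_experimental: float,
              alpha_one_sided: float, power: float) -> int:
    """Normal-approximation sample size per arm for a one-sided
    comparison of two proportions (unpooled variance)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha_one_sided)
    z_beta = NormalDist().inv_cdf(power)
    variance = (p_control * (1 - p_control)
                + p_experimental * (1 - p_experimental))
    delta = p_experimental - p_control
    return ceil((z_alpha + z_beta) ** 2 * variance / delta ** 2)

# Assumed inputs: progressive disease rate at the landmark time of 50%
# on control versus a targeted 30% on the experimental arm, one-sided
# alpha = 0.10 and 80% power.
print(n_per_arm(0.50, 0.30, alpha_one_sided=0.10, power=0.80))

Under these assumed inputs the formula gives roughly 50 patients per arm; tightening the type I error to phase III levels, or targeting a smaller treatment effect, increases this substantially.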


4.1.2 Continuous outcome measure


Thall and Simon (1990)



  • One-stage, continuous outcome
  • No formally powered statistical comparison between arms
  • Requires programming

Thall and Simon outline a design that incorporates historical control data, including their variability, into the design of the trial. The proportion of patients randomised to a control arm depends on the amount of historical control data available, the degree of both inter-study and intra-study variability, and the overall sample size of the phase II study being planned (following formulae provided). Including a sample of patients randomised to a control arm maximises the precision, relative to control, of the outcome estimate in the experimental arm at the end of the trial. Sample size is determined iteratively, and the design would need to be programmed to allow implementation.


Chen and Beckman (2009)



  • One-stage, continuous outcome
  • Formally powered statistical comparison between arms
  • Programming code provided

Chen and Beckman describe an approach to randomised phase II trial design that incorporates optimal error rates. Optimal type I and II errors are identified by means of an efficiency score function based on the initially proposed error rates and the ratio of sample sizes between phases II and III. Sample size calculation is then performed using standard phase III-type approaches with the identified optimal type I and II errors. Formal comparison with the control arm is incorporated. The design considers the cost efficiency of the phase II and III trials, on the basis of the ratio of sample sizes between phases II and III and the a priori probability of success of the investigational treatment. An R program to identify optimal designs is provided in the appendix of the manuscript.
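
The efficiency-score search itself is implemented in the authors' R program; once optimal error rates have been identified, the phase II sample size follows from the usual two-sample calculation. Below is a minimal Python sketch of that final step only, with the 'optimal' one-sided alpha, power and standardised effect size chosen purely as assumptions for illustration.

from statistics import NormalDist
from math import ceil

def n_per_arm_continuous(delta: float, sigma: float,
                         alpha_one_sided: float, power: float) -> int:
    """Standard two-sample normal-approximation sample size per arm
    for a continuous outcome with common standard deviation sigma."""
    z_alpha = NormalDist().inv_cdf(1 - alpha_one_sided)
    z_beta = NormalDist().inv_cdf(power)
    return ceil(2 * (sigma ** 2) * (z_alpha + z_beta) ** 2 / delta ** 2)

# Assumed example: the efficiency-score search has returned an optimal
# one-sided alpha of 0.15 and power of 0.85, and a standardised effect
# size of 0.5 (delta / sigma) is targeted.
print(n_per_arm_continuous(delta=0.5, sigma=1.0,
                           alpha_one_sided=0.15, power=0.85))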


4.1.3 Multinomial outcome measure


No references identified.


4.1.4 Time-to-event outcome measure


Simon et al. (2001)



  • One-stage, time-to-event outcome
  • Formally powered statistical comparison between arms
  • Standard software available

Simon and colleagues propose what is termed a randomised ‘phase 2.5’ trial design, incorporating intermediate outcome measures such as progression-free survival. The design takes the approach of a phase III trial, with a formally powered statistical comparison against the control arm for superiority. It incorporates a relaxed significance level, large targeted treatment effects and intermediate outcome measures, resulting in more pragmatic and feasible sample sizes than would be required in a phase III trial. The design is straightforward, following the methodology of phase III trials; however, it should only be used where large treatment differences are realistic and should not be seen as a way to eliminate phase III testing.
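
To illustrate the scale of a ‘phase 2.5’ calculation, the sketch below uses the standard Schoenfeld approximation for the number of events required by a log-rank comparison with 1:1 randomisation. The hazard ratio, one-sided alpha and power are assumed values for illustration, not figures from Simon et al.

from statistics import NormalDist
from math import ceil, log

def events_required(hazard_ratio: float,
                    alpha_one_sided: float, power: float) -> int:
    """Schoenfeld approximation: total events needed for a log-rank
    comparison with 1:1 randomisation."""
    z_alpha = NormalDist().inv_cdf(1 - alpha_one_sided)
    z_beta = NormalDist().inv_cdf(power)
    return ceil(4 * (z_alpha + z_beta) ** 2 / log(hazard_ratio) ** 2)

# Assumed illustration: a large targeted effect (hazard ratio 0.6 for
# progression-free survival), one-sided alpha = 0.10 and 80% power.
print(events_required(hazard_ratio=0.60,
                      alpha_one_sided=0.10, power=0.80))

Under these assumptions roughly 70 events are required; a conventional phase III design targeting a more modest hazard ratio at conventional error rates would need several times as many.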


Stone et al. (2007b)



  • One-stage, time-to-event outcome
  • Formally powered statistical comparison between arms
  • Standard software available

Stone et al. discuss the use of progressive disease rate at a given time point, as well as overall progression-free survival, as an outcome measure in randomised phase II trials of cytostatic agents. A formal comparison between the experimental and control treatments is performed for superiority; however, larger type I error rates than would be used in phase III are incorporated, and large treatment effects are targeted. The use of relaxed type I errors and large targeted treatment effects contributes to sample sizes smaller than those of phase III trials, which may therefore be deemed more realistic for phase II. In the setting of time-to-event outcomes this mirrors the ‘phase 2.5’ designs of Simon et al. (2001) described above.


Chen and Beckman (2009)



  • One-stage, time-to-event outcome
  • Formally powered statistical comparison between arms
  • Programming code provided

Chen and Beckman describe an approach to randomised phase II trial design that incorporates optimal error rates. Optimal type I and II errors are identified by means of an efficiency score function based on the initially proposed error rates and the ratio of sample sizes between phases II and III. Sample size calculation is then performed using standard phase III-type approaches with the identified optimal type I and II errors. Formal comparison with the control arm is incorporated. The design considers the cost efficiency of the phase II and III trials, on the basis of the ratio of sample sizes between phases II and III and the a priori probability of success of the investigational treatment. An R program to identify optimal designs is provided in the appendix of the manuscript.


4.1.5 Ratio of times to progression


No references identified.


4.2 Two-stage designs


4.2.1 Binary outcome measure


Whitehead et al. (2009)



  • Two-stage, binary outcome
  • Formally powered statistical comparison between arms
  • Requires programming
  • Early termination for activity or lack of activity

Whitehead and colleagues outline a randomised controlled two-stage design for normally distributed outcome measures that may be extended to the setting of binary and ordinal outcomes. The design allows early termination for activity or lack of activity and incorporates formal comparison between the experimental and control arms. At the interim assessment, which takes place after approximately half the total number of patients have been recruited, sample size re-estimation may be incorporated if necessary. The methodology employs approximations to the normal distribution, since sample sizes are generally large enough to justify this. No software is detailed as being available to identify designs; however, programming is noted as being possible in SAS, and sufficient detail is provided to allow implementation. Simulation is also required to evaluate potential designs.
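
Candidate designs of this kind are typically evaluated by simulation. The Python sketch below is a generic illustration only: it simulates a two-stage randomised comparison of response rates using normal-approximation z statistics, with early stopping for activity or lack of activity at the interim analysis. The stage sizes, stopping boundaries and response rates are assumed for illustration, they are not Whitehead et al.'s recommended values, and the sample size re-estimation step is omitted.

import random

def z_statistic(x_e, n_e, x_c, n_c):
    """Normal-approximation z statistic for the difference in response
    proportions (experimental minus control), using a pooled variance."""
    p_pool = (x_e + x_c) / (n_e + n_c)
    if p_pool in (0.0, 1.0):
        return 0.0
    se = (p_pool * (1 - p_pool) * (1 / n_e + 1 / n_c)) ** 0.5
    return (x_e / n_e - x_c / n_c) / se

def prob_declare_active(p_c, p_e, n1, n2, lower, upper, final,
                        n_sims=20_000, seed=1):
    """Estimate, by simulation, the probability of declaring the
    experimental arm active: stop early for lack of activity if the
    interim z falls below `lower`, stop early for activity if it
    exceeds `upper`, otherwise recruit n2 further patients per arm
    and compare the final z against `final`."""
    rng = random.Random(seed)
    positive = 0
    for _ in range(n_sims):
        x_c1 = sum(rng.random() < p_c for _ in range(n1))
        x_e1 = sum(rng.random() < p_e for _ in range(n1))
        z1 = z_statistic(x_e1, n1, x_c1, n1)
        if z1 < lower:
            continue                      # early stop: lack of activity
        if z1 > upper:
            positive += 1                 # early stop: activity
            continue
        x_c = x_c1 + sum(rng.random() < p_c for _ in range(n2))
        x_e = x_e1 + sum(rng.random() < p_e for _ in range(n2))
        if z_statistic(x_e, n1 + n2, x_c, n1 + n2) > final:
            positive += 1
    return positive / n_sims

# Assumed design: 30 + 30 patients per arm, interim boundaries of
# 0.0 (lack of activity) and 2.5 (activity), final boundary 1.64.
print("type I error:", prob_declare_active(0.20, 0.20, 30, 30, 0.0, 2.5, 1.64))
print("power:       ", prob_declare_active(0.20, 0.40, 30, 30, 0.0, 2.5, 1.64))

Running the simulation under the null (equal response rates) estimates the type I error, and under the targeted difference it estimates the power; boundaries would then be adjusted until both operating characteristics are acceptable.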


Jung (2008)



  • Two-stage, binary outcome
  • Formally powered statistical comparison between arms
  • Programs noted as being available from author
  • Early termination for lack of activity

Jung proposes a randomised controlled extension to Simon’s optimal and minimax designs (Simon 1989) in the context of a binary outcome measure (e.g. response). The experimental arm is formally compared with the control arm and declared worthy of further investigation only if there are sufficiently more responders in the experimental arm. Extensive tables are provided, and programs to identify designs not included in tables are noted as being available upon request from the author. Extensions to the design include unequal allocation, strict type I and II error control and randomisation to more than one experimental arm.
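
A decision rule of this type, based on the difference in the numbers of responders between arms, can be evaluated exactly by enumerating the binomial outcomes in each arm. The Python sketch below does this for a hypothetical two-stage design; the stage sizes and boundaries are illustrative assumptions and are not taken from Jung's tables.

from math import comb

def binom_pmf(k, n, p):
    """Binomial probability mass function."""
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

def prob_positive(p_c, p_e, n1, n2, a1, a):
    """Exact probability that a two-stage randomised design, with
    per-arm stage sizes n1 and n2, continues when the stage 1 responder
    difference (experimental minus control) is at least a1 and is
    declared positive when the cumulative difference is at least a."""
    total = 0.0
    for xe1 in range(n1 + 1):
        for xc1 in range(n1 + 1):
            if xe1 - xc1 < a1:
                continue          # stopped at stage 1 for lack of activity
            p_stage1 = binom_pmf(xe1, n1, p_e) * binom_pmf(xc1, n1, p_c)
            for xe2 in range(n2 + 1):
                for xc2 in range(n2 + 1):
                    if (xe1 + xe2) - (xc1 + xc2) >= a:
                        total += (p_stage1
                                  * binom_pmf(xe2, n2, p_e)
                                  * binom_pmf(xc2, n2, p_c))
    return total

# Hypothetical design: 20 + 20 patients per arm, continue if at least 1
# more responder on the experimental arm at stage 1, declare positive if
# at least 5 more responders overall.
print("type I error:", prob_positive(0.20, 0.20, 20, 20, a1=1, a=5))
print("power:       ", prob_positive(0.20, 0.40, 20, 20, a1=1, a=5))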


Jung and George (2009)



  • Two-stage, binary outcome
  • Formally powered statistical comparison between arms
  • Requires minimal programming
  • Early termination for lack of efficacy
