```
library(rpact)
packageVersion("rpact")
```

# Designing Group Sequential Trials with a Binary Endpoint with rpact

# Sample size calculation for a superiority trial with two groups without interim analyses

The **sample size** for a trial with binary endpoints can be calculated using the function `getSampleSizeRates()`. This function is fully documented in its help page (`?getSampleSizeRates`), so we only provide some examples below.

First, load the rpact package (the code chunk at the top of this document) and check the installed version:

`[1] '3.5.1'`

To get the **direction** of the effects right, note that in rpact the **index “2” in an argument name always refers to the control group, “1” to the intervention group, and treatment effects compare treatment versus control**. Specifically, for binary endpoints, the probabilities of an event in the control group and intervention group are given by the arguments `pi2` and `pi1`, respectively. The default treatment effect is the absolute risk difference `pi1 - pi2`, but the relative risk scale `pi1/pi2` is also supported if the argument `riskRatio` is set to `TRUE`.
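As a small base-R illustration of the two effect scales (using the probabilities from the example below; plain arithmetic, not an rpact call):

```r
pi1 <- 0.4   # assumed event probability in the intervention group
pi2 <- 0.25  # assumed event probability in the control group

# default effect scale: absolute risk difference (null value thetaH0 = 0)
absRiskDiff <- pi1 - pi2

# with riskRatio = TRUE: relative risk scale (null value thetaH0 = 1)
relRisk <- pi1 / pi2

round(c(absRiskDiff = absRiskDiff, relRisk = relRisk), 2)
```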

```
# Example of a standard trial:
# - probability 25% in control (pi2 = 0.25) vs. 40% (pi1 = 0.4) in intervention
# - one-sided test (sided = 1)
# - Type I error 0.025 (alpha = 0.025) and power 80% (beta = 0.2)
sampleSizeResult <- getSampleSizeRates(
  pi2 = 0.25, pi1 = 0.4,
  sided = 1, alpha = 0.025, beta = 0.2
)
kable(sampleSizeResult)
```

**Design plan parameters and output for rates**

**Design parameters**

- *Critical values*: 1.960
- *Significance level*: 0.0250
- *Type II error rate*: 0.2000
- *Test*: one-sided

**User defined parameters**

- *Assumed treatment rate*: 0.400
- *Assumed control rate*: 0.250

**Default parameters**

- *Risk ratio*: FALSE
- *Theta H0*: 0
- *Normal approximation*: TRUE
- *Treatment groups*: 2
- *Planned allocation ratio*: 1

**Sample size and output**

- *Direction upper*: TRUE
- *Number of subjects fixed*: 303.7
- *Number of subjects fixed (1)*: 151.9
- *Number of subjects fixed (2)*: 151.9
- *Critical values (treatment effect scale)*: 0.103

**Legend**

*(i)*: values of treatment arm i

As per the output above, the required **total sample size** is 304 and the critical value corresponds to a minimal detectable difference (on the absolute risk difference scale, the default) of approximately 0.103. Note that this critical value on the treatment effect scale assumes that the observed rate in the control group (group 2) equals the assumed pi2 = 0.25.
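As a plausibility check, the fixed-design sample size can be reproduced in base R with the usual normal-approximation formula for a two-sample test of rates (pooled variance under H0, 1:1 allocation). This is a sketch of the standard textbook calculation, not rpact's internal code:

```r
pi1 <- 0.4; pi2 <- 0.25     # assumed event probabilities
alpha <- 0.025; beta <- 0.2 # one-sided type I error and type II error
piBar <- (pi1 + pi2) / 2    # pooled rate under H0 (1:1 allocation)

# per-group sample size: pooled variance under H0, unpooled under H1
numerator <- qnorm(1 - alpha) * sqrt(2 * piBar * (1 - piBar)) +
  qnorm(1 - beta) * sqrt(pi1 * (1 - pi1) + pi2 * (1 - pi2))
nPerGroup <- (numerator / (pi1 - pi2))^2

round(2 * nPerGroup, 1)  # total sample size, matching the rpact output of 303.7
```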

A useful summary is provided with the generic `summary()` function:

`kable(summary(sampleSizeResult))`

**Sample size calculation for a binary endpoint**

Fixed sample analysis, significance level 2.5% (one-sided). The results were calculated for a two-sample test for rates (normal approximation), H0: pi(1) - pi(2) = 0, H1: treatment rate pi(1) = 0.4, control rate pi(2) = 0.25, power 80%.

Stage | Fixed |
---|---|
Efficacy boundary (z-value scale) | 1.960 |
Number of subjects | 303.7 |
One-sided local significance level | 0.0250 |
Efficacy boundary (t) | 0.103 |

Legend:

*(t)*: treatment effect scale

You can change the randomization allocation between the treatment groups using `allocationRatioPlanned`:

```
# Example: Extension of standard trial
# - 2(intervention):1(control) randomization (allocationRatioPlanned = 2)
kable(summary(getSampleSizeRates(
pi2 = 0.25, pi1 = 0.4,
sided = 1, alpha = 0.025, beta = 0.2,
allocationRatioPlanned = 2
)))
```

**Sample size calculation for a binary endpoint**

Fixed sample analysis, significance level 2.5% (one-sided). The results were calculated for a two-sample test for rates (normal approximation), H0: pi(1) - pi(2) = 0, H1: treatment rate pi(1) = 0.4, control rate pi(2) = 0.25, planned allocation ratio = 2, power 80%.

Stage | Fixed |
---|---|
Efficacy boundary (z-value scale) | 1.960 |
Number of subjects | 346.3 |
One-sided local significance level | 0.0250 |
Efficacy boundary (t) | 0.104 |

Legend:

*(t)*: treatment effect scale

`allocationRatioPlanned = 0` can be specified to obtain the optimum allocation ratio, i.e., the one minimizing the overall sample size (here, the optimum sample size is only slightly smaller than the sample size with equal allocation, so practically this makes no difference):

```
# Example: Extension of standard trial
# optimum randomization ratio
kable(summary(getSampleSizeRates(
pi2 = 0.25, pi1 = 0.4,
sided = 1, alpha = 0.025, beta = 0.2,
allocationRatioPlanned = 0
)))
```

**Sample size calculation for a binary endpoint**

Fixed sample analysis, significance level 2.5% (one-sided). The results were calculated for a two-sample test for rates (normal approximation), H0: pi(1) - pi(2) = 0, H1: treatment rate pi(1) = 0.4, control rate pi(2) = 0.25, optimum planned allocation ratio = 0.953, power 80%.

Stage | Fixed |
---|---|
Efficacy boundary (z-value scale) | 1.960 |
Number of subjects | 303.6 |
One-sided local significance level | 0.0250 |
Efficacy boundary (t) | 0.103 |

Legend:

*(t)*: treatment effect scale
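The dependence of the total sample size on the allocation ratio can be sketched in base R by extending the pooled-variance normal-approximation formula to unequal group sizes (a textbook approximation, not rpact's internal code; `r` denotes the ratio n1/n2):

```r
pi1 <- 0.4; pi2 <- 0.25
alpha <- 0.025; beta <- 0.2

# total sample size as a function of the allocation ratio r = n1/n2
totalSampleSize <- function(r) {
  piBar <- (r * pi1 + pi2) / (r + 1)  # pooled rate under H0
  numerator <- qnorm(1 - alpha) * sqrt((1 + 1 / r) * piBar * (1 - piBar)) +
    qnorm(1 - beta) * sqrt(pi1 * (1 - pi1) / r + pi2 * (1 - pi2))
  n2 <- (numerator / (pi1 - pi2))^2
  (1 + r) * n2
}

round(totalSampleSize(2), 1)                 # 2:1 allocation, close to 346.3
opt <- optimize(totalSampleSize, c(0.2, 5))  # minimum near r = 0.953
round(c(ratio = opt$minimum, total = opt$objective), 3)
```

The flat minimum around r = 1 explains why the optimum allocation is of little practical relevance here.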

**Power** at a given sample size can be calculated using the function `getPowerRates()`. This function has the same arguments as `getSampleSizeRates()`, except that the maximum total sample size must be provided (`maxNumberOfSubjects`) and the Type II error `beta` is no longer needed. For one-sided tests, the direction of the test is also required: the default `directionUpper = TRUE` indicates that under the alternative the probability in the intervention group `pi1` is larger than the probability in the control group `pi2` (`directionUpper = FALSE` specifies the other direction):

```
# Example: Calculate power for a simple trial with total sample size 304
# as in the example above in case of pi2 = 0.25 (control) and
# pi1 = 0.37 (intervention)
powerResult <- getPowerRates(
  pi2 = 0.25, pi1 = 0.37,
  maxNumberOfSubjects = 304, sided = 1, alpha = 0.025
)
kable(powerResult)
```

**Design plan parameters and output for rates**

**Design parameters**

- *Critical values*: 1.960
- *Significance level*: 0.0250
- *Test*: one-sided

**User defined parameters**

- *Assumed treatment rate*: 0.370
- *Assumed control rate*: 0.250
- *Maximum number of subjects*: 304

**Default parameters**

- *Risk ratio*: FALSE
- *Theta H0*: 0
- *Normal approximation*: TRUE
- *Treatment groups*: 2
- *Planned allocation ratio*: 1
- *Direction upper*: TRUE

**Power and output**

- *Effect*: 0.12
- *Overall reject*: 0.6196
- *Number of subjects fixed*: 304
- *Number of subjects fixed (1)*: 152
- *Number of subjects fixed (2)*: 152
- *Critical values (treatment effect scale)*: 0.103

**Legend**

*(i)*: values of treatment arm i

The calculated **power** is provided in the output as **“Overall reject”** and is 0.620 for the example.
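This value can again be checked in base R with the pooled-variance normal approximation (a sketch of the textbook calculation, not rpact's internal code):

```r
pi1 <- 0.37; pi2 <- 0.25; alpha <- 0.025
nPerGroup <- 304 / 2
piBar <- (pi1 + pi2) / 2  # pooled rate under H0 (1:1 allocation)

# standardized distance of the rejection boundary under the alternative
zBeta <- (abs(pi1 - pi2) * sqrt(nPerGroup) -
  qnorm(1 - alpha) * sqrt(2 * piBar * (1 - piBar))) /
  sqrt(pi1 * (1 - pi1) + pi2 * (1 - pi2))

pnorm(zBeta)  # approximate power, close to the rpact value of 0.6196
```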

The `summary()` command produces the output

`kable(summary(powerResult))`

**Power calculation for a binary endpoint**

Fixed sample analysis, significance level 2.5% (one-sided). The results were calculated for a two-sample test for rates (normal approximation), H0: pi(1) - pi(2) = 0, power directed towards larger values, H1: treatment rate pi(1) = 0.37, control rate pi(2) = 0.25, number of subjects = 304.

Stage | Fixed |
---|---|
Efficacy boundary (z-value scale) | 1.960 |
Power | 0.6196 |
Number of subjects | 304.0 |
One-sided local significance level | 0.0250 |
Efficacy boundary (t) | 0.103 |

Legend:

*(t)*: treatment effect scale

The `getPowerRates()` function (as well as `getSampleSizeRates()`) can also be called with a vector argument for the probability `pi1` in the intervention group. This is illustrated below via a plot of power depending on this probability. For examples of all available plots, see the R Markdown document “How to create admirable plots with rpact”.

```
# Example: Calculate power for simple design (with sample size 304 as above)
# for probabilities in intervention ranging from 0.3 to 0.5
powerResult <- getPowerRates(
  pi2 = 0.25, pi1 = seq(0.3, 0.5, by = 0.01),
  maxNumberOfSubjects = 304, sided = 1, alpha = 0.025
)
# one of several possible plots, this one plotting true effect size vs power
plot(powerResult, type = 7)
```
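The same power curve can be sketched in base R by vectorizing the pooled-variance normal approximation over `pi1` (a plausibility check, not rpact's internal code):

```r
pi2 <- 0.25; alpha <- 0.025; nPerGroup <- 152
pi1 <- seq(0.3, 0.5, by = 0.01)

# approximate power of the two-sample test for rates at each pi1
piBar <- (pi1 + pi2) / 2
power <- pnorm((abs(pi1 - pi2) * sqrt(nPerGroup) -
  qnorm(1 - alpha) * sqrt(2 * piBar * (1 - piBar))) /
  sqrt(pi1 * (1 - pi1) + pi2 * (1 - pi2)))

plot(pi1, power, type = "l", xlab = "pi1", ylab = "approximate power")
```

The curve increases monotonically in `pi1` and passes through roughly 80% power at `pi1 = 0.4`, consistent with the sample size calculation above.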

# Sample size calculation for a non-inferiority trial with two groups without interim analyses

Sample size calculation proceeds in the same fashion as for superiority trials, except that the roles of the null and the alternative hypothesis are reversed. I.e., in this case, the non-inferiority margin \(\Delta\) corresponds to the treatment effect under the null hypothesis (`thetaH0`) which one aims to reject. Testing in non-inferiority trials is always one-sided.

```
# Example: Sample size for a non-inferiority trial
# Assume pi(control) = pi(intervention) = 0.2
# Test H0: pi1 - pi2 = 0.1 (risk increase in intervention >= Delta = 0.1)
# vs. H1: pi1 - pi2 < 0.1
sampleSizeNoninf <- getSampleSizeRates(
  pi2 = 0.2, pi1 = 0.2,
  thetaH0 = 0.1, sided = 1, alpha = 0.025, beta = 0.2
)
kable(sampleSizeNoninf)
```

**Design plan parameters and output for rates**

**Design parameters**

- *Critical values*: 1.960
- *Significance level*: 0.0250
- *Type II error rate*: 0.2000
- *Test*: one-sided

**User defined parameters**

- *Theta H0*: 0.1
- *Assumed treatment rate*: 0.200

**Default parameters**

- *Risk ratio*: FALSE
- *Normal approximation*: TRUE
- *Assumed control rate*: 0.200
- *Treatment groups*: 2
- *Planned allocation ratio*: 1

**Sample size and output**

- *Direction upper*: FALSE
- *Number of subjects fixed*: 508.4
- *Number of subjects fixed (1)*: 254.2
- *Number of subjects fixed (2)*: 254.2
- *Critical values (treatment effect scale)*: 0.0285

**Legend**

*(i)*: values of treatment arm i

`kable(summary(sampleSizeNoninf))`

**Sample size calculation for a binary endpoint**

Fixed sample analysis, significance level 2.5% (one-sided). The results were calculated for a two-sample test for rates (normal approximation), H0: pi(1) - pi(2) = 0.1, H1: treatment rate pi(1) = 0.2, control rate pi(2) = 0.2, power 80%.

Stage | Fixed |
---|---|
Efficacy boundary (z-value scale) | 1.960 |
Number of subjects | 508.4 |
One-sided local significance level | 0.0250 |
Efficacy boundary (t) | 0.028 |

Legend:

*(t)*: treatment effect scale

# Sample size calculation for a single arm trial without interim analyses

The function `getSampleSizeRates()` allows setting the number of `groups` (which is 2 by default) to 1 for the design of single-arm trials. The probability under the null hypothesis can be specified with the argument `thetaH0`, and the specific alternative hypothesis used for the sample size calculation with the argument `pi1`. The sample size calculation can be based either on a normal approximation (`normalApproximation = TRUE`, the default) or on exact binomial probabilities (`normalApproximation = FALSE`).

```
# Example: Sample size for a single arm trial which tests
# H0: pi = 0.1 vs. H1: pi = 0.25
# (use conservative exact binomial calculation)
sampleSizeResults <- getSampleSizeRates(
  groups = 1, thetaH0 = 0.1, pi1 = 0.25,
  normalApproximation = FALSE, sided = 1, alpha = 0.025, beta = 0.2
)
kable(summary(sampleSizeResults))
```

**Sample size calculation for a binary endpoint**

Fixed sample analysis, significance level 2.5% (one-sided). The results were calculated for a one-sample test for rates (exact test), H0: pi = 0.1, H1: treatment rate pi = 0.25, power 80%.

Stage | Fixed |
---|---|
Efficacy boundary (z-value scale) | 1.960 |
Number of subjects | 53.0 |
One-sided local significance level | 0.0250 |
Efficacy boundary (t) | 0.181 |

Legend:

*(t)*: treatment effect scale
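The exact calculation can be sanity-checked in base R, assuming the usual exact one-sample test that rejects when the number of responders reaches the smallest k with P(X ≥ k | pi = 0.1) ≤ 0.025 (a sketch, not rpact's internal code). With n = 53, the power comes out at roughly 80%:

```r
# actual size and power of the exact binomial test for a given n
exactBinomialTest <- function(n, piH0, piH1, alpha = 0.025) {
  k <- qbinom(1 - alpha, n, piH0) + 1  # smallest rejection boundary
  c(
    size = 1 - pbinom(k - 1, n, piH0),  # actual type I error, <= alpha
    power = 1 - pbinom(k - 1, n, piH1)  # power under the alternative
  )
}

res <- exactBinomialTest(53, piH0 = 0.1, piH1 = 0.25)
round(res, 4)
```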

# Sample size calculation for group sequential designs

Sample size calculation for a group sequential trial is performed in **two steps**:

1. **Define the (abstract) group sequential design** using the function `getDesignGroupSequential()`. For details regarding this step, see the vignette “Defining group sequential boundaries with rpact”.
2. **Calculate the sample size** for the binary endpoint by feeding the abstract design into the function `getSampleSizeRates()`. Note that the power 1 - beta needs to be defined in the design function, not in `getSampleSizeRates()`.

In general, rpact supports both one-sided and two-sided group sequential designs. However, if futility boundaries are specified, only one-sided tests are permitted.

R code for a simple example is provided below:

```
# Example: Group-sequential design with O'Brien & Fleming type alpha-spending and
# one interim at 60% information
design <- getDesignGroupSequential(
  sided = 1, alpha = 0.025, beta = 0.2,
  informationRates = c(0.6, 1), typeOfDesign = "asOF"
)
# Sample size calculation assuming event probabilities are 25% in control
# (pi2 = 0.25) vs 40% (pi1 = 0.4) in intervention
sampleSizeResultGS <- getSampleSizeRates(design, pi2 = 0.25, pi1 = 0.4)
# Standard rpact output (sample size object only, not design object)
kable(sampleSizeResultGS)
```

**Design plan parameters and output for rates**

**Design parameters**

- *Information rates*: 0.600, 1.000
- *Critical values*: 2.669, 1.981
- *Futility bounds (binding)*: -Inf
- *Cumulative alpha spending*: 0.003808, 0.025000
- *Local one-sided significance levels*: 0.003808, 0.023798
- *Significance level*: 0.0250
- *Type II error rate*: 0.2000
- *Test*: one-sided

**User defined parameters**

- *Assumed treatment rate*: 0.400
- *Assumed control rate*: 0.250

**Default parameters**

- *Risk ratio*: FALSE
- *Theta H0*: 0
- *Normal approximation*: TRUE
- *Treatment groups*: 2
- *Planned allocation ratio*: 1

**Sample size and output**

- *Direction upper*: TRUE
- *Maximum number of subjects*: 306.3
- *Maximum number of subjects (1)*: 153.2
- *Maximum number of subjects (2)*: 153.2
- *Number of subjects [1]*: 183.8
- *Number of subjects [2]*: 306.3
- *Reject per stage [1]*: 0.3123
- *Reject per stage [2]*: 0.4877
- *Early stop*: 0.3123
- *Expected number of subjects under H0*: 305.9
- *Expected number of subjects under H0/H1*: 299.3
- *Expected number of subjects under H1*: 268.1
- *Critical values (treatment effect scale) [1]*: 0.187
- *Critical values (treatment effect scale) [2]*: 0.104

**Legend**

- *(i)*: values of treatment arm i
- *[k]*: values at stage k

The `summary()` command produces the output

`kable(summary(sampleSizeResultGS))`

**Sample size calculation for a binary endpoint**

Sequential analysis with a maximum of 2 looks (group sequential design), overall significance level 2.5% (one-sided). The results were calculated for a two-sample test for rates (normal approximation), H0: pi(1) - pi(2) = 0, H1: treatment rate pi(1) = 0.4, control rate pi(2) = 0.25, power 80%.

Stage | 1 | 2 |
---|---|---|
Information rate | 60% | 100% |
Efficacy boundary (z-value scale) | 2.669 | 1.981 |
Overall power | 0.3123 | 0.8000 |
Number of subjects | 183.8 | 306.3 |
Expected number of subjects under H1 | 268.1 | |
Cumulative alpha spent | 0.0038 | 0.0250 |
One-sided local significance level | 0.0038 | 0.0238 |
Efficacy boundary (t) | 0.187 | 0.104 |
Exit probability for efficacy (under H0) | 0.0038 | |
Exit probability for efficacy (under H1) | 0.3123 | |

Legend:

*(t)*: treatment effect scale
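The expected sample sizes in the output can be understood from the stage-wise stopping probabilities: with a single interim, the expected number of subjects equals the stage-1 sample size plus the probability of continuing times the additional stage-2 subjects. A quick base-R check against the (rounded) values above:

```r
n1 <- 183.8    # cumulative sample size at the interim (from the output)
nMax <- 306.3  # maximum sample size (from the output)
stopH1 <- 0.3123  # probability of stopping early under H1
stopH0 <- 0.0038  # probability of stopping early under H0

expectedH1 <- n1 + (1 - stopH1) * (nMax - n1)  # close to 268.1
expectedH0 <- n1 + (1 - stopH0) * (nMax - n1)  # close to 305.9
round(c(expectedH1, expectedH0), 1)
```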

System: rpact 3.5.1, R version 4.3.2 (2023-10-31 ucrt), platform: x86_64-w64-mingw32

To cite R in publications use:

R Core Team (2023). *R: A Language and Environment for Statistical Computing*. R Foundation for Statistical Computing, Vienna, Austria. https://www.R-project.org/.

To cite package ‘rpact’ in publications use:

Wassmer G, Pahlke F (2024). *rpact: Confirmatory Adaptive Clinical Trial Design and Analysis*. R package version 3.5.1, https://www.rpact.com, https://github.com/rpact-com/rpact, https://rpact-com.github.io/rpact/, https://www.rpact.org.