# XLSTAT-Power Analysis

XLSTAT-Power Analysis is an Excel add-in developed to provide XLSTAT users with a powerful solution for computing and controlling the power of statistical tests. Calculating the power or the type II error (also called the beta risk) of a test is a key step for anyone who wants to set up an experiment in order to confront a hypothesis with reality.

All XLSTAT-Power Analysis functions have been intensively tested against other software to guarantee users fully reliable results, and to allow you to integrate this software into your Six Sigma business improvement process.

## Features

Statistical Power for:

- Mean comparison
- Variance comparison
- Proportion comparison
- Correlation comparison
- ANOVA / ANCOVA / Repeated measures
- Linear regression
- Logistic regression
- Cox Model
- Sample size for clinical trials

## Demo version

A trial version of XLSTAT-Power Analysis is included in the main XLSTAT download.

## Prices and ordering

These analyses are included in the XLSTAT-Biomed, XLSTAT-Quality and XLSTAT-Premium packages.

# DETAILED DESCRIPTIONS

# Statistical Power for mean comparison


### Statistical Power analysis for the comparison of means

Several tests are available to compare means; XLSTAT offers the t and z tests. XLSTAT-Power allows you to estimate the power of these tests and to calculate the number of observations required to obtain sufficient power.

When testing a hypothesis using a statistical test, there are several decisions to take:

- The null hypothesis H_{0} and the alternative hypothesis H_{a}.
- The statistical test to use.
- The type I error, also known as alpha. It occurs when one rejects the null hypothesis when it is true. It is set a priori for each test and is typically 5%.

The type II error or beta is less studied but is of great importance. In fact, it represents the probability that one does not reject the null hypothesis when it is false. We cannot fix it upfront, but based on other parameters of the model we can try to minimize it. The power of a test is calculated as 1-beta and represents the probability that we reject the null hypothesis when it is false.

We therefore wish to maximize the power of the test. The XLSTAT-Power module calculates the power (and beta) when the other parameters are known. For a given power, it also allows you to calculate the sample size necessary to reach that power.

The statistical power calculations are usually done before the experiment is conducted. The main application of power calculations is to estimate the number of observations necessary to properly conduct an experiment.

XLSTAT allows to compare:

- A mean to a constant (with z and t-tests)
- Two means associated with paired samples (with z and t-tests)
- Two means associated with independent samples (with z and t-tests)

We use the t-test when the variance of the population is estimated and the z-test when it is known. In each case, the parameters will be different and will be shown in the dialog box.

### How is the Statistical Power calculated

The power of a test is usually obtained by using the associated non-central distribution. Thus, the non-central Student distribution is used for the t-test, and the normal distribution for the z-test.

#### Statistical Power for a T-test or Z-test for one sample

The power of this test is obtained using the non-central Student distribution with non-centrality parameter (NCP):

NCP = |(X - X_{0}) / SD| * √N

With X_{0} the theoretical mean and SD the standard deviation.

The part (X - X_{0}) / SD is called the effect size.
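
As a minimal sketch (not XLSTAT's implementation), the z-test version of this power calculation can be written in a few lines of Python; the function names are ours, and the two-sided 5% critical value 1.96 is used by default:

```python
from math import erf, sqrt

def norm_cdf(x):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def ztest_power_one_sample(effect_size, n, z_alpha=1.959964):
    """Two-sided one-sample z-test power; effect_size = (X - X0) / SD."""
    ncp = abs(effect_size) * sqrt(n)  # NCP = |(X - X0)/SD| * sqrt(N)
    # Reject H0 if |Z| > z_alpha; under H_a the statistic is centered on the NCP
    return (1.0 - norm_cdf(z_alpha - ncp)) + norm_cdf(-z_alpha - ncp)
```

For a moderate effect (d = 0.5) and 32 observations, this gives a power of roughly 0.81, in line with standard power tables.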

#### T-test or Z-test for two paired samples

The same formula as for the one sample case applies, but the standard deviation is calculated differently, we have:

NCP = |(X_{1} - X_{2}) / SD_{Diff}| * √N

With SD_{Diff} = √(SD_{1}² + SD_{2}² – 2 corr SD_{1}SD_{2}), where corr is the correlation between the two samples.

The part (X_{1} - X_{2}) / SD_{Diff} is the effect size.

#### T-test or Z-test for two independent samples

In the case of two independent samples, the standard deviation is calculated differently and we use the harmonic mean of the number of observations.

### Calculating sample size using the statistical power of a test

To calculate the number of observations required, XLSTAT uses an algorithm that searches for the root of a function. It is called the Van Wijngaarden-Dekker-Brent algorithm (Brent, 1973). This algorithm is adapted to the case where the derivatives of the function are not known. It tries to find the root of:

power (N) - expected_power

We then obtain the size N such that the test has a power as close as possible to the desired power.
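
A sketch of this sample-size search, using the two-sided z-test power and plain bisection in place of the Brent refinement (Brent's method combines bisection with faster interpolation steps; all function names here are ours):

```python
from math import erf, sqrt

def power(n, d=0.5, z_alpha=1.959964):
    """Two-sided z-test power at sample size n for effect size d."""
    phi = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))
    return 1.0 - phi(z_alpha - abs(d) * sqrt(n)) + phi(-z_alpha - abs(d) * sqrt(n))

def required_n(target=0.80, lo=2.0, hi=1e6):
    """Root of power(N) - target by bisection; power(N) is increasing in N."""
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if power(mid) - target < 0.0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

n_star = required_n(0.80)   # fractional root of power(N) - 0.80
n_obs = int(n_star) + 1     # round up to whole observations
```

For d = 0.5 and a target power of 0.80, the root falls just above 31, so 32 observations are required.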

### Effect size in power calculation

This concept, developed by Cohen (1988), is very important in power calculations. The effect size is a quantity that allows calculating the power of a test without entering any model parameters, and it indicates whether the effect to be tested is weak or strong.

In the context of comparisons of means, the conventions of magnitude of the effect size are:

- d=0.2, the effect is small.
- d=0.5, the effect is moderate.
- d=0.8, the effect is strong.

XLSTAT-Power allows you to enter the effect size directly.

# Statistical Power to compare variances

### Statistical Power analysis for the comparison of variances in XLSTAT

XLSTAT-Pro includes several tests to compare variances. XLSTAT-Power can calculate the power or the number of observations required for a test based on Fisher's F distribution to compare variances.

When testing a hypothesis using a statistical test, there are several decisions to take:

- The null hypothesis H_{0} and the alternative hypothesis H_{a}.
- The statistical test to use.
- The type I error, also known as alpha. It occurs when one rejects the null hypothesis when it is true. It is set a priori for each test and is typically 5%.

The type II error or beta is less studied but is of great importance. In fact, it represents the probability that one does not reject the null hypothesis when it is false. We cannot fix it upfront, but based on other parameters of the model we can try to minimize it. The power of a test is calculated as 1-beta and represents the probability that we reject the null hypothesis when it is false.

We therefore wish to maximize the power of the test. The XLSTAT-Power module calculates the power (and beta) when the other parameters are known. For a given power, it also allows you to calculate the sample size necessary to reach that power.

The statistical power calculations are usually done before the experiment is conducted. The main application of power calculations is to estimate the number of observations necessary to properly conduct an experiment.

XLSTAT allows you to compare two variances.

### Calculation for the Statistical Power analysis for the comparison of variances

The power of a test is usually obtained by using the associated non-central distribution. In that case, we use the F distribution.

Several hypotheses can be tested, but the most common are the following (two-tailed):

- H_{0}: The difference between the variances is equal to 0.
- H_{a}: The difference between the variances is different from 0.

The power computation gives the proportion of experiments that reject the null hypothesis. The calculation is done using the F distribution, with the ratio of the variances as parameter and the sample sizes minus one as degrees of freedom.
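
A sketch of this computation using SciPy's F distribution (the function name and the explicit two-sided convention are ours, not necessarily XLSTAT's exact formulation):

```python
from scipy.stats import f

def variance_ratio_power(ratio, n1, n2, alpha=0.05):
    """Two-sided power of the F-test of H0: var1 = var2, for a true ratio var1/var2."""
    df1, df2 = n1 - 1, n2 - 1                  # sample sizes minus one
    f_lo = f.ppf(alpha / 2.0, df1, df2)        # lower critical value
    f_hi = f.ppf(1.0 - alpha / 2.0, df1, df2)  # upper critical value
    # Under H_a, the observed ratio s1^2/s2^2 behaves as ratio * F(df1, df2)
    return f.cdf(f_lo / ratio, df1, df2) + f.sf(f_hi / ratio, df1, df2)
```

When the true ratio is 1, the power reduces to alpha, which is a useful sanity check on the implementation.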

### Calculating sample size using the statistical power of a test

To calculate the number of observations required, XLSTAT uses an algorithm that searches for the root of a function. It is called the Van Wijngaarden-Dekker-Brent algorithm (Brent, 1973). This algorithm is adapted to the case where the derivatives of the function are not known. It tries to find the root of:

power (N) - expected_power

We then obtain the size N such that the test has a power as close as possible to the desired power.

### Effect size in power calculation

This concept, developed by Cohen (1988), is very important in power calculations. The effect size is a quantity that allows calculating the power of a test without entering any model parameters, and it indicates whether the effect to be tested is weak or strong.

In the context of comparisons of means, the conventions of magnitude of the effect size are:

- d=0.2, the effect is small.
- d=0.5, the effect is moderate.
- d=0.8, the effect is strong.

XLSTAT-Power allows you to enter the effect size directly.

# Statistical Power for proportion comparison

### Statistical Power analysis for the comparison of proportions in XLSTAT

XLSTAT-Pro includes parametric and nonparametric tests to compare proportions, such as the z-test, the chi-square test, the sign test and the McNemar test. XLSTAT-Power can calculate the power or the number of observations necessary for these tests using either exact methods or approximations.

When testing a hypothesis using a statistical test, there are several decisions to take:

- The null hypothesis H_{0} and the alternative hypothesis H_{a}.
- The statistical test to use.
- The type I error, also known as alpha. It occurs when one rejects the null hypothesis when it is true. It is set a priori for each test and is typically 5%.

The type II error or beta is less studied but is of great importance. In fact, it represents the probability that one does not reject the null hypothesis when it is false. We cannot fix it up front, but based on other parameters of the model we can try to minimize it. The power of a test is calculated as 1-beta and represents the probability that we reject the null hypothesis when it is false.

We therefore wish to maximize the power of the test. The XLSTAT-Power module calculates the power (and beta) when the other parameters are known. For a given power, it also allows you to calculate the sample size necessary to reach that power.

The statistical power calculations are usually done before the experiment is conducted. The main application of power calculations is to estimate the number of observations necessary to properly conduct an experiment.

XLSTAT allows you to compare:

- A proportion to a test proportion (z-test with different approximations).
- Two proportions (z-test with different approximations).
- Proportions in a contingency table (chi-square test).
- Proportions in a nonparametric way (the sign test and the McNemar test)

### Calculations for the Statistical Power of tests comparing proportions

The power of a test is usually obtained by using the associated non-central distribution. For this specific case we will use an approximation in order to compute the power.

#### Comparing a proportion to a test proportion

The alternative hypothesis in this case is: H_{a}: p_{1} – p_{0} ≠ 0

Various approximations are possible:

- Approximation using the normal distribution: we use normal distributions with means p_{0} and p_{1} and standard deviations √(p_{0}(1 - p_{0}) / N) and √(p_{1}(1 - p_{1}) / N).
- Exact calculation using the binomial distribution with parameters (N, p_{0}) and (N, p_{1}).
- Approximation using the beta distribution with parameters ((N-1)p_{0}; (N-1)(1-p_{0})) and ((N-1)p_{1}; (N-1)(1-p_{1})).
- Approximation using the arcsin method: this approximation is based on the arcsin transformation of the proportions, H_{(p0)} and H_{(p1)}. The power is obtained using the normal distribution: Z_{p} = √N (H_{(p0)} - H_{(p1)}) – Z_{req}, with Z_{req} being the alpha-quantile of the normal distribution.
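
The arcsin approximation can be sketched in plain Python as follows (function names are ours; z_req defaults to the two-sided 5% quantile, and H(p) = 2 arcsin(√p) is the usual variance-stabilising transform):

```python
from math import asin, sqrt, erf

def norm_cdf(x):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def arcsine_power(p0, p1, n, z_req=1.959964):
    """One-proportion test power via the arcsine transform."""
    h = lambda p: 2.0 * asin(sqrt(p))            # variance-stabilising transform
    z_p = sqrt(n) * abs(h(p1) - h(p0)) - z_req   # Z_p = sqrt(N)(H(p1) - H(p0)) - Z_req
    return norm_cdf(z_p)
```

For example, testing p0 = 0.5 against a true proportion of 0.65 with N = 100 gives a power of about 0.86.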

#### Comparing two proportions

The alternative hypothesis in this case is: H_{a}: p_{1} – p_{2} ≠ 0

Various approximations are possible:

- Approximation using the arcsin method: this approximation is based on the arcsin transformation of the proportions, H_{(p1)} and H_{(p2)}. The power is obtained using the normal distribution: Z_{p} = √N (H_{(p1)} - H_{(p2)}) – Z_{req}, with Z_{req} being the alpha-quantile of the normal distribution.
- Approximation using the normal distribution: we use normal distributions with means p_{1} and p_{2} and standard deviations √(p_{1}(1 - p_{1}) / N) and √(p_{2}(1 - p_{2}) / N).

#### Chi-square test

To calculate the power of the chi-square test in the case of a 2 × 2 contingency table, we use the non-central chi-square distribution with the value of the chi-square as non-centrality parameter.

The test therefore seeks to determine whether two groups of observations have the same behavior based on a binary variable.

We have:

|          | Group 1   | Group 2   |
|----------|-----------|-----------|
| Positive | p_{1}     | p_{2}     |
| Negative | 1 - p_{1} | 1 - p_{2} |

p_{1}, N_{1} and N_{2} have to be entered in the dialog box (p_{2} can be deduced from the other parameters because the test has only one degree of freedom).
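
A sketch with SciPy's non-central chi-square distribution; note that the pooled-proportion formula used here for the theoretical chi-square value (the NCP) is one common convention and an assumption on our part, not necessarily the exact formula XLSTAT applies:

```python
from scipy.stats import chi2, ncx2

def chi2_2x2_power(p1, p2, n1, n2, alpha=0.05):
    """Power of the 2x2 chi-square test, with the theoretical chi-square as NCP."""
    p_bar = (n1 * p1 + n2 * p2) / (n1 + n2)  # pooled proportion under H0 (assumption)
    ncp = (p1 - p2) ** 2 * n1 * n2 / ((n1 + n2) * p_bar * (1.0 - p_bar))
    crit = chi2.ppf(1.0 - alpha, 1)          # critical value, one degree of freedom
    return ncx2.sf(crit, 1, ncp)             # P(reject H0) under the alternative
```

With p1 = 0.5, p2 = 0.7 and 100 observations per group, this gives a power of roughly 0.82.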

#### Sign test

The sign test is used to see if the proportion of cases in each group is equal to 50%. It has the same principle as the one proportion test against a constant, the constant being 0.5. Power is computed using an approximation by the normal distribution or an exact method with the binomial distribution.

We must therefore enter the sample size and the proportion in one group p_{1} (the other proportion is such that p_{2}=1-p_{1}).

#### McNemar test

The McNemar test on paired proportions is a specific case of testing a proportion against a constant. Indeed, one can represent the problem with the following table:

|          | Group 1 | Group 2 |
|----------|---------|---------|
| Positive | PP      | PN      |
| Negative | NP      | NN      |

We have PP + NN + PN + NP = 1. We want to try to see the effect of a treatment; we are therefore interested in NP and PN. The other values are not significant.

The test inputs are: Proportion1= NP and Proportion 2 = PN, with necessarily P1+P2 < 1.

The effect is calculated only on a percentage NP + PN of the sample. The proportion of individuals that change from negative to positive is calculated as NP / (NP + PN). We therefore compare this figure to a value of 50% to see whether more individuals go from positive to negative than from negative to positive.

This test is well suited for medical applications.

### Calculating sample size using the statistical power of a test

To calculate the number of observations required, XLSTAT uses an algorithm that searches for the root of a function. It is called the Van Wijngaarden-Dekker-Brent algorithm (Brent, 1973). This algorithm is adapted to the case where the derivatives of the function are not known. It tries to find the root of:

power (N) - expected_power

We then obtain the size N such that the test has a power as close as possible to the desired power.

# Statistical Power for comparing correlations

### Statistical Power of correlation comparison tests

XLSTAT-Pro offers a test to compare correlations. XLSTAT-Power can calculate the power or the number of observations necessary for this test.

When testing a hypothesis using a statistical test, there are several decisions to take:

- The null hypothesis H_{0} and the alternative hypothesis H_{a}.
- The statistical test to use.

The type II error or beta is less studied but is of great importance. In fact, it represents the probability that one does not reject the null hypothesis when it is false. We cannot fix it up front, but based on other parameters of the model we can try to minimize it. The power of a test is calculated as 1-beta and represents the probability that we reject the null hypothesis when it is false.

We therefore wish to maximize the power of the test. The XLSTAT-Power module calculates the power (and beta) when the other parameters are known. For a given power, it also allows you to calculate the sample size necessary to reach that power.

The statistical power calculations are usually done before the experiment is conducted. The main application of power calculations is to estimate the number of observations necessary to properly conduct an experiment.

XLSTAT allows you to compare:

- One correlation to 0.
- One correlation to a constant.
- Two correlations.

### Calculations for the Statistical Power of tests comparing correlations

The power of a test is usually obtained by using the associated non-central distribution. For this specific case we will use an approximation in order to compute the power.

#### Statistical Power for comparing one correlation to 0

The alternative hypothesis in this case is: H_{a}: r ≠ 0

The method used is an exact method based on the non-central Student distribution.

The non-centrality parameter used is the following: NCP = √(r² / (1 - r²)) * √N

The part r²/(1-r²) is called effect size.
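
A sketch of this method with SciPy's non-central t distribution, using the NCP formula above and N - 2 degrees of freedom for the correlation t-test (function names are ours):

```python
from math import sqrt
from scipy.stats import nct, t as t_dist

def corr_power(r, n, alpha=0.05):
    """Two-sided power for H0: r = 0, using the non-central Student distribution."""
    df = n - 2                                    # degrees of freedom of the t-test
    ncp = sqrt(r * r / (1.0 - r * r)) * sqrt(n)   # NCP = sqrt(r^2/(1-r^2)) * sqrt(N)
    t_crit = t_dist.ppf(1.0 - alpha / 2.0, df)    # two-sided critical value
    # Probability that |T'| exceeds the critical value under the alternative
    return nct.sf(t_crit, df, ncp) + nct.cdf(-t_crit, df, ncp)
```

For r = 0.3 and N = 84 this gives a power close to the conventional 0.80 benchmark.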

#### Statistical Power for comparing one correlation to a constant

The alternative hypothesis in this case is: H_{a}: r ≠ r_{0}.

The power calculation is done using an approximation by the normal distribution. We use the Fisher Z-transformation: Z_{r} = ½ log[(1+r)/(1-r)].

The effect size is: Q = |Z_{r} - Z_{r0}|.

The power is then found using the area under the curve of the normal distribution to the left of Z_{p}: Z_{p} = Q * √(N - 3) - Z_{req}

where Z_{req} is the quantile of the normal distribution for alpha.
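
A sketch of this Fisher-transform approximation in plain Python (function names are ours; z_req defaults to the two-sided 5% quantile):

```python
from math import log, sqrt, erf

def norm_cdf(x):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def fisher_z(r):
    """Fisher Z-transformation: Zr = 0.5 * ln((1 + r) / (1 - r))."""
    return 0.5 * log((1.0 + r) / (1.0 - r))

def corr_vs_constant_power(r, r0, n, z_req=1.959964):
    """Power for H0: r = r0 via the Fisher transform normal approximation."""
    q = abs(fisher_z(r) - fisher_z(r0))       # effect size Q
    return norm_cdf(q * sqrt(n - 3) - z_req)  # Z_p = Q * sqrt(N - 3) - Z_req
```

For example, testing r0 = 0.2 when the true correlation is 0.5, with N = 100, gives a power of about 0.93.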

#### Statistical Power for comparing two correlations

The alternative hypothesis in this case is: H_{a}: r_{1} – r_{2} ≠ 0.

The power calculation is done using an approximation by the normal distribution. We use the Fisher Z-transformation: Z_{r} = ½ log[(1+r)/(1-r)].

The effect size is: Q = |Z_{r1} - Z_{r2}|.

The power is then found using the area under the curve of the normal distribution to the left of Z_{p}: Z_{p} = Q * √((N’ – 3) / 2) - Z_{req}

where Z_{req} is the quantile of the normal distribution for alpha and N’ = [2*(N_{1} - 3)*(N_{2} - 3)]/[N_{1} + N_{2} - 6] + 3.

### Calculating sample size for a correlation comparison test

As for the other tests, XLSTAT uses the Van Wijngaarden-Dekker-Brent algorithm (Brent, 1973) to find the root of:

power (N) - expected_power

We then obtain the size N such that the test has a power as close as possible to the desired power.

### Effect size for correlation comparison tests

This concept, developed by Cohen (1988), is very important in power calculations. The effect size is a quantity that allows calculating the power of a test without entering any model parameters, and it indicates whether the effect to be tested is weak or strong.

In the context of comparisons of correlations, the conventions of magnitude of the effect size are:

- Q=0.1, the effect is small.
- Q=0.3, the effect is moderate.
- Q=0.5, the effect is strong.

XLSTAT-Power allows you to enter the effect size directly.

# Linear regression


### Statistical Power for linear regression

XLSTAT-Pro offers a tool to apply a linear regression model. XLSTAT-Power estimates the power or calculates the necessary number of observations associated with variations of R² in the framework of a linear regression.

When testing a hypothesis using a statistical test, there are several decisions to take:

- The null hypothesis H_{0} and the alternative hypothesis H_{a}.
- The statistical test to use.

The type II error or beta is less studied but is of great importance. In fact, it represents the probability that one does not reject the null hypothesis when it is false. We cannot fix it up front, but based on other parameters of the model we can try to minimize it. The power of a test is calculated as 1-beta and represents the probability that we reject the null hypothesis when it is false.

We therefore wish to maximize the power of the test. The XLSTAT-Power module calculates the power (and beta) when the other parameters are known. For a given power, it also allows you to calculate the sample size necessary to reach that power.

The statistical power calculations are usually done before the experiment is conducted. The main application of power calculations is to estimate the number of observations necessary to properly conduct an experiment.

XLSTAT allows you to compare:

- R² value to 0.
- Increase in R² value when new predictors are added to the model to 0.

This means testing the following hypotheses:

- H_{0}: R² is equal to 0 / H_{a}: R² is different from 0.
- H_{0}: The increase in R² is equal to 0 / H_{a}: The increase in R² is different from 0.

### Effect size for the variation of R² in linear regression

This concept, developed by Cohen (1988), is very important in power calculations. The effect size is a quantity that allows calculating the power of a test without entering any model parameters, and it indicates whether the effect to be tested is weak or strong.

In the context of a linear regression, conventions of magnitude of the effect size are:

- f²=0.02, the effect is small.
- f²=0.15, the effect is moderate.
- f²=0.35, the effect is strong.

XLSTAT-Power allows you to enter the effect size directly, but also lets you enter parameters of the model from which the effect size is calculated. We detail the calculations below:

- Using variances: we can use the variances of the model to define the size of the effect. With var_{Explained} being the variance explained by the explanatory variables that we wish to test and var_{Error} being the variance of the error (residual variance), we have: f² = var_{Explained} / var_{Error}.
- Using the R² (in the case H_{0}: R² = 0): we enter the estimated squared multiple correlation (called ρ²) to define the size of the effect. We have: f² = ρ² / (1 - ρ²).
- Using the partial R² (in the case H_{0}: increase in R² = 0): we enter the partial R², that is, the expected difference in R² when adding predictors to the model. We have: f² = R_{part}² / (1 - R_{part}²).
- Using the correlations between predictors (in the case H_{0}: R² = 0): one selects a vector containing the correlations between the explanatory variables and the dependent variable, Corr_{Y}, and a square matrix containing the correlations between the explanatory variables, Corr_{X}. We have: f² = Corr_{Y}^{T} Corr_{X}^{-1} Corr_{Y} / (1 - Corr_{Y}^{T} Corr_{X}^{-1} Corr_{Y}).

Once the effect size is defined, power and necessary sample size can be computed.

### Calculations of the Statistical Power for changes in R² in linear regression

The power of a test is usually obtained by using the associated non-central distribution. For this specific case we use the non-central F (Fisher) distribution to compute the power.

The power of this test is obtained using the non-central F distribution, where DF1, the numerator degrees of freedom, is the number of tested variables; DF2, the denominator degrees of freedom, is the sample size minus the total number of explanatory variables included in the model minus one; and the non-centrality parameter is NCP = f²N.
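
These degrees of freedom and the NCP = f²N convention can be sketched with SciPy's non-central F distribution (the function and parameter names are ours):

```python
from scipy.stats import f as f_dist, ncf

def regression_power(f2, n, n_tested, n_total, alpha=0.05):
    """Power for a change in R2 in linear regression via the non-central F."""
    df1 = n_tested             # number of tested predictors
    df2 = n - n_total - 1      # N minus all predictors in the model minus one
    ncp = f2 * n               # non-centrality parameter NCP = f^2 * N
    crit = f_dist.ppf(1.0 - alpha, df1, df2)  # central F critical value
    return ncf.sf(crit, df1, df2, ncp)        # P(F' > crit) under the alternative
```

For a moderate effect (f² = 0.15) with 3 predictors and N = 100, the power is well above 0.80.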

### Calculating sample size for changes in R² in linear regression

As for the other tests, XLSTAT uses the Van Wijngaarden-Dekker-Brent algorithm (Brent, 1973) to find the root of:

power (N) - expected_power

We then obtain the size N such that the test has a power as close as possible to the desired power.

# Statistical Power for ANOVA / ANCOVA / Repeated measures ANOVA

### Statistical Power for ANOVA, ANCOVA and Repeated measures ANOVA

XLSTAT-Pro offers tools to apply analysis of variance (ANOVA), repeated measures analysis of variance and analysis of covariance (ANCOVA). XLSTAT-Power estimates the power or calculates the necessary number of observations associated with these models.

When testing a hypothesis using a statistical test, there are several decisions to take:

- The null hypothesis H_{0} and the alternative hypothesis H_{a}.
- The statistical test to use.

The type II error or beta is less studied but is of great importance. In fact, it represents the probability that one does not reject the null hypothesis when it is false. We cannot fix it up front, but based on other parameters of the model we can try to minimize it. The power of a test is calculated as 1-beta and represents the probability that we reject the null hypothesis when it is false.

We therefore wish to maximize the power of the test. The XLSTAT-Power module calculates the power (and beta) when the other parameters are known. For a given power, it also allows you to calculate the sample size necessary to reach that power.

The statistical power calculations are usually done before the experiment is conducted. The main application of power calculations is to estimate the number of observations necessary to properly conduct an experiment.

XLSTAT can therefore test:

- In the case of a one-way ANOVA, an ANOVA with several fixed factors and interactions, or an ANCOVA:
  - H_{0}: The means of the groups of the tested factor are equal.
  - H_{a}: At least one of the means is different from another.
- In the case of repeated measures ANOVA, for a within-subjects factor:
  - H_{0}: The means of the groups of the within-subjects factor are equal.
  - H_{a}: At least one of the means is different from another.
- In the case of repeated measures ANOVA, for a between-subjects factor:
  - H_{0}: The means of the groups of the between-subjects factor are equal.
  - H_{a}: At least one of the means is different from another.
- In the case of repeated measures ANOVA, for an interaction between a within-subjects factor and a between-subjects factor:
  - H_{0}: The means of the groups of the within-between subjects interaction are equal.
  - H_{a}: At least one of the means is different from another.

### Effect size for ANOVA, ANCOVA and Repeated measures ANOVA

This concept, developed by Cohen (1988), is very important in power calculations. The effect size is a quantity that allows calculating the power of a test without entering any model parameters, and it indicates whether the effect to be tested is weak or strong.

In the context of an ANOVA-type model, conventions of magnitude of the effect size are:

- f=0.1, the effect is small.
- f=0.25, the effect is moderate.
- f=0.4, the effect is strong.

XLSTAT-Power allows you to enter the effect size directly, but also lets you enter parameters of the model from which the effect size is calculated. We detail the calculations below:

- Using variances: we can use the variances of the model to define the size of the effect. With var_{explained} being the variance explained by the explanatory factors that we wish to test and var_{error} being the variance of the error (residual variance), we have: f = √(var_{explained} / var_{error}).
- Using the direct approach: we enter the estimated value of η², which is the ratio between the variance explained by the studied factor and the total variance of the model. For more details on η², please refer to Cohen (1988, chap. 8.2). We have: f = √(η² / (1 – η²)).
- Using the means of each group (in the case of a one-way ANOVA or a within-subjects repeated measures ANOVA): we select a vector with the mean of each group. Groups of different sizes are possible; in that case you must also select a vector of group sizes (the standard option assumes that all groups have equal size). We have: f = √(Σ_{i}(m_{i} - m)² / k) / SD_{intra}, with m_{i} the mean of group i, m the mean of the means, k the number of groups and SD_{intra} the within-group standard deviation.
- When an ANCOVA is performed, a term has to be added to the model in order to take the quantitative predictors into account. The effect size is then multiplied by √(1 / (1 – ρ²)), where ρ² is the theoretical value of the squared multiple correlation coefficient associated with the quantitative predictors.

Once the effect size is defined, power and necessary sample size can be computed.

### Calculations for the Statistical Power of ANOVA, ANCOVA and Repeated measures ANOVA

The power of a test is usually obtained by using the associated non-central distribution. For this specific case we use the non-central F (Fisher) distribution to compute the power.

We first introduce some notations:

- NbGroup: Number of groups we wish to test.
- N: sample size.
- NumeratorDF: Numerator degrees of freedom for the F distribution (see below for more details).
- NbRep: Number of repetitions (measures) for repeated measures ANOVA.
- ρ: Correlation between measures for repeated measures ANOVA.
- ε: Geisser-Greenhouse non-sphericity correction.
- NbPred: Number of predictors in an ANCOVA model.

For each method, we give the first and second degrees of freedom and the non-centrality parameter:

- One-way ANOVA: DF1 = NbGroup – 1; DF2 = N – NbGroup; NCP = f²N
- ANOVA with fixed effects and interactions: DF1 = NumeratorDF; DF2 = N – NbGroup; NCP = f²N
- Repeated measures ANOVA, within-subjects factor: DF1 = NbRep – 1; DF2 = (N – NbGroup)(NbRep – 1); NCP = f²·N·NbRep·ε / (1 – ρ)
- Repeated measures ANOVA, between-subjects factor: DF1 = NbGroup – 1; DF2 = N – NbGroup; NCP = f²·N·NbRep / [1 + ρ(NbRep – 1)]
- Repeated measures ANOVA, interaction between a within-subjects factor and a between-subjects factor: DF1 = (NbRep – 1)(NbGroup – 1); DF2 = (N – NbGroup)(NbRep – 1); NCP = f²·N·NbRep·ε / (1 – ρ)
- ANCOVA: DF1 = NumeratorDF; DF2 = N – NbGroup – NbPred – 1; NCP = f²N
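
For example, the one-way ANOVA case above can be sketched with SciPy's non-central F distribution (the function name is ours):

```python
from scipy.stats import f as f_dist, ncf

def anova_power(f_effect, n, n_groups, alpha=0.05):
    """One-way ANOVA power: DF1 = NbGroup - 1, DF2 = N - NbGroup, NCP = f^2 * N."""
    df1 = n_groups - 1
    df2 = n - n_groups
    ncp = f_effect ** 2 * n
    crit = f_dist.ppf(1.0 - alpha, df1, df2)  # central F critical value
    return ncf.sf(crit, df1, df2, ncp)        # P(F' > crit) under the alternative
```

With a moderate effect (f = 0.25), 4 groups and a total of 180 observations, this gives a power of roughly 0.80, matching Cohen's tables.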

### Calculating sample size for ANOVA, ANCOVA and Repeated measures ANOVA taking into account the statistical power

As for the other tests, XLSTAT uses the Van Wijngaarden-Dekker-Brent algorithm (Brent, 1973) to find the root of:

power (N) - expected_power

We then obtain the size N such that the test has a power as close as possible to the desired power.

### Numerator degrees of freedom for ANOVA, ANCOVA and Repeated measures ANOVA

In the framework of an ANOVA with fixed factors and interactions, or an ANCOVA, XLSTAT-Power proposes to enter the number of degrees of freedom for the numerator of the non-central F distribution. This is because many different models can be tested, and entering the numerator degrees of freedom is a simple way to test all kinds of models.

Practically, for a fixed factor, the numerator degrees of freedom is equal to the number of groups associated with the factor minus one. When interactions are studied, it is equal to the product of the degrees of freedom associated with each factor included in the interaction.

Suppose we have a 3-factor model, A (2 groups), B (3 groups), C (3 groups), the 3 second-order interactions A*B, A*C and B*C, and one third-order interaction A*B*C. We have 3*3*2 = 18 groups.

To test the main effect A, we have: NbGroups = 18 and NumeratorDF = (2-1) = 1.

To test an interaction, e.g. A*B, we have NbGroups = 18 and NumeratorDF = (2-1)(3-1) = 2. To test the third-order interaction A*B*C, we have NbGroups = 18 and NumeratorDF = (2-1)(3-1)(3-1) = 4.

In the case of an ANCOVA, the calculations will be similar.

# Logistic regression

### Statistical Power for Logistic regression

XLSTAT-Pro offers a tool to apply logistic regression. XLSTAT-Power estimates the power or calculates the necessary number of observations associated with this model.

When testing a hypothesis using a statistical test, there are several decisions to take:

- The null hypothesis H_{0} and the alternative hypothesis H_{a}.
- The statistical test to use.

The statistical power calculations are usually done before the experiment is conducted. The main application of power calculations is to estimate the number of observations necessary to properly conduct an experiment.

In the general framework of the logistic regression model, the goal is to explain and predict the probability P that an event occurs (usually Y=1). P is equal to:

P = exp(β_{0} + β_{1}X_{1} + … + β_{k}X_{k}) / [1 + exp(β_{0} + β_{1}X_{1} + … + β_{k}X_{k}) ]

We have: log(P/(1-P)) = β_{0} + β_{1}X_{1} + … + β_{k}X_{k}
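
The two equations above are equivalent; a minimal sketch of the forward computation in Python (the function name is ours):

```python
from math import exp

def logistic_probability(beta0, betas, xs):
    """P(Y = 1) from the logit: log(P / (1 - P)) = b0 + b1*x1 + ... + bk*xk."""
    eta = beta0 + sum(b * x for b, x in zip(betas, xs))  # linear predictor
    return exp(eta) / (1.0 + exp(eta))                   # inverse logit
```

When the linear predictor is 0, P is exactly 0.5; large positive (negative) predictors push P towards 1 (towards 0).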

The test used in XLSTAT-Power is based on the null hypothesis that the β_{1} coefficient is equal to 0. That means that the X_{1} explanatory variable has no effect on the model.

The hypothesis to be tested is:

- H_{0}: β_{1} = 0
- H_{a}: β_{1} ≠ 0

### Calculation of the statistical power for logistic regression

Power is computed using an approximation which depends on the type of variable.

If X_{1} is quantitative and has a normal distribution, the parameters of the approximation are:

- P_{0} (baseline probability): the probability that Y=1 when all explanatory variables are set to their mean value.
- P_{1} (alternative probability): the probability that Y=1 when X_{1} is equal to one standard deviation above its mean value, all other explanatory variables being at their mean value.
- Odds ratio: the ratio between the probability that Y=1 when X_{1} is one standard deviation above its mean and the probability that Y=1 when X_{1} is at its mean value.
- The R² obtained with a regression between X_{1} and all the other explanatory variables included in the model.

If X_{1} is binary and follows a binomial distribution, the parameters of the approximation are:

- P_{0} (baseline probability): The probability that Y=1 when X_{1} = 0.
- P_{1} (alternative probability): The probability that Y=1 when X_{1} = 1.
- Odds ratio: The ratio between the probability that Y=1 when X_{1} = 1 and the probability that Y=1 when X_{1} = 0.
- The R² obtained with a regression between X_{1} and all the other explanatory variables included in the model.
- The percentage of observations with X_{1} = 1.

Both approximations are based on the normal distribution.
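One published approximation of this kind for a quantitative X_{1} is Hsieh's (1998) normal approximation; the sketch below follows it, but XLSTAT's exact formula may differ:

```python
from math import log, sqrt
from scipy.stats import norm

def logistic_power_quantitative(p0, p1, r2, n, alpha=0.05):
    """Approximate power for testing beta1 = 0 with a normally
    distributed X1 (Hsieh, 1998). p0 = P(Y=1) at the mean of X1,
    p1 = P(Y=1) one standard deviation above the mean, r2 = R^2 of
    X1 regressed on the other explanatory variables."""
    beta_star = log(p1 * (1 - p0) / (p0 * (1 - p1)))  # log odds ratio per SD
    z_alpha = norm.ppf(1 - alpha / 2)
    z = abs(beta_star) * sqrt(n * p0 * (1 - p0) * (1 - r2)) - z_alpha
    return norm.cdf(z)

print(round(logistic_power_quantitative(0.5, 0.6, 0.0, 200), 3))
```

Note how the R² term inflates the required sample size: the larger the correlation of X_{1} with the other covariates, the lower the power at a fixed N.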

### Calculating sample size for logistic regression taking into account the statistical power

To compute the required sample size, XLSTAT searches for the value of N that drives the following difference to zero:

power(N) - expected_power

We then obtain the size N such that the test has a power as close as possible to the desired power.
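The search itself can be sketched as a simple bisection over N on any monotone power curve; the z-test power function below is only a stand-in for whichever test is being sized:

```python
from math import sqrt
from scipy.stats import norm

def z_test_power(n, effect=0.3, alpha=0.05):
    """Power of a two-sided one-sample z-test (stand-in power curve;
    effect is the standardized mean difference)."""
    return norm.cdf(effect * sqrt(n) - norm.ppf(1 - alpha / 2))

def required_n(power_fn, target=0.9, n_max=100000):
    """Smallest N with power_fn(N) >= target, i.e. the N whose power
    is as close as possible to (and at least) the desired power."""
    lo, hi = 2, n_max
    while lo < hi:
        mid = (lo + hi) // 2
        if power_fn(mid) >= target:
            hi = mid
        else:
            lo = mid + 1
    return lo

print(required_n(z_test_power))  # → 117
```

Because N is an integer, the achieved power usually slightly exceeds the requested power rather than matching it exactly.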

# Cox Model

### Statistical Power for Cox model

XLSTAT-Life offers a tool to apply the proportional hazards ratio Cox regression model. XLSTAT-Power estimates the power or calculates the necessary number of observations associated with this model.

When testing a hypothesis using a statistical test, there are several decisions to take:

- The null hypothesis H_{0} and the alternative hypothesis H_{a}.
- The statistical test to use.

The statistical power calculations are usually done before the experiment is conducted. The main application of power calculations is to estimate the number of observations necessary to properly conduct an experiment.

The Cox model is based on the hazard function, which is the probability that an individual will experience an event (for example, death) within a small time interval, given that the individual has survived up to the beginning of the interval. It can therefore be interpreted as the risk of dying at time t. The hazard function (denoted by λ(t,X)) can be estimated using the following equation:

λ(t,X) = λ_{0}(t) exp(β_{1}X_{1} + … + β_{p}X_{p})

The first term depends only on time and the second one depends on X. We are only interested in the second term. If all β_{i} are equal to zero, then there is no hazard factor. The goal of the Cox model is to focus on the relations between the β_{i}s and the hazard function.

The test used in XLSTAT-Power is based on the null hypothesis that the β_{1} coefficient is equal to 0. That means that the X_{1} covariate is not a hazard factor.

The hypothesis to be tested is:

- H_{0}: β_{1} = 0
- H_{a}: β_{1} ≠ 0

Power is computed using an approximation which depends on the normal distribution. Other parameters used in this approximation are: the event rate, which is the proportion of uncensored individuals, the standard deviation of X_{1}, the expected value of β_{1} known as B(log(hazard ratio)) and the R² obtained with the regression between X_{1} and the other covariates included in the Cox model.
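These ingredients match the normal approximation of Hsieh and Lavori (2000), sketched below; XLSTAT's exact implementation may differ:

```python
from math import sqrt
from scipy.stats import norm

def cox_power(n, event_rate, sd_x1, B, r2, alpha=0.05):
    """Approximate power for testing beta1 = 0 in a Cox model
    (Hsieh & Lavori, 2000). event_rate = proportion of uncensored
    individuals, sd_x1 = standard deviation of X1, B = expected
    log hazard ratio, r2 = R^2 of X1 on the other covariates."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z = abs(B) * sd_x1 * sqrt(n * event_rate * (1 - r2)) - z_alpha
    return norm.cdf(z)

# Hazard ratio of 2 (B = ln 2), 80% of subjects uncensored, standardized X1
print(round(cox_power(50, 0.8, 1.0, 0.693, 0.0), 3))
```

Only uncensored observations contribute information, which is why the event rate multiplies N in the approximation.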

### Calculating sample size for the Cox model taking into account the statistical power

To compute the required sample size, XLSTAT searches for the value of N that drives the following difference to zero:

power(N) - expected_power

We then obtain the size N such that the test has a power as close as possible to the desired power.

### Calculating B for the Cox model

The B(log(hazard ratio)) is an estimation of the coefficient β_{1} of the following equation:

log(λ(t|X) / λ_{0}(t)) = β_{1}X_{1} + … + β_{p}X_{p}

β_{1} is the change in the logarithm of the hazard ratio when X_{1} is incremented by one unit (all other explanatory variables remaining constant). We can use the hazard ratio instead of the log. For a hazard ratio of 2, we will have B = ln(2) = 0.693.
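The conversion is a one-line calculation:

```python
from math import log

def B_from_hazard_ratio(hazard_ratio):
    """B = log(hazard ratio): the Cox coefficient implied by a ratio."""
    return log(hazard_ratio)

print(round(B_from_hazard_ratio(2), 3))  # → 0.693
```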

# Sample size for clinical trials

**View a tutorial**

XLSTAT-Power enables you to compute the necessary sample size for a clinical trial.

Three types of trials can be studied:

- **Equivalence trials:** An equivalence trial is one where you want to demonstrate that a new treatment is no better or worse than an existing treatment.
- **Superiority trials:** A superiority trial is one where you want to demonstrate that one treatment is better than another.
- **Non-inferiority trials:** A non-inferiority trial is one where you want to show that a new treatment is not worse than an existing treatment.

These tests can be applied to a binary outcome or a continuous outcome.

When testing a hypothesis using a statistical test, there are several decisions to take:

- The null hypothesis H_{0} and the alternative hypothesis H_{a}.
- The statistical test to use.

The type II error or beta is less studied but is of great importance. In fact, it represents the probability that one does not reject the null hypothesis when it is false. We cannot fix it upfront, but based on other parameters of the model we can try to minimize it. The power of a test is calculated as 1-beta and represents the probability that we reject the null hypothesis when it is false.

We therefore wish to maximize the power of the test. The XLSTAT-Power module calculates the power (and beta) when the other parameters are known. For a given power, it also allows you to calculate the sample size necessary to reach that power. The usual target power is 0.9, though it can differ depending on the trial.

The sample size requirements or the statistical power calculations are usually done before the experiment is conducted. The main application of power calculations is to estimate the number of observations necessary to properly conduct an experiment.
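As one concrete example of such a calculation, a standard textbook approximation for a superiority trial on a binary outcome (not necessarily XLSTAT's exact formula) is:

```python
from math import ceil
from scipy.stats import norm

def n_per_group_superiority(p1, p2, alpha=0.05, power=0.9):
    """Per-group sample size for a two-sided z-test comparing two
    proportions (classic normal approximation, shown only to
    illustrate the kind of calculation involved)."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# 60% vs 40% response rates, alpha = 5%, power = 90%
print(n_per_group_superiority(0.6, 0.4))  # → 127
```

Halving the difference to detect roughly quadruples the required sample size, which is why realistic effect estimates matter when planning a trial.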