7.4 Eta-squared
Time to go over a new effect size measure! For ANOVAs, we use eta-squared (\(\eta^2\)).
7.4.1 Eta-squared and variance
At the start of this module, we introduced how ANOVA partitions total variance into between-subjects and within-subjects variance:
\[ Variance_{total} = Variance_{between} + Variance_{within} \]
This partitioning gives us a simple but important way of calculating effect sizes for ANOVAs. If we're interested in the effect of our IV (group, which is the between-subjects variance), we simply need to know how much of the total variance is explained by this variable.
This is eta-squared (\(\eta^2\)), our effect size for ANOVAs. Eta-squared tells us how much of the variance in the DV can be attributed to the IV, expressed as a proportion. So an eta-squared of .778 means that 77.8% of the total variance in the outcome variable/DV is due to the IV/main effect. It is calculated as follows:
\[ \eta^2 = \frac{SS_{effect}}{SS_{total}} \]
In other words, we divide the sum of squares for the main effect by the total sum of squares. For repeated-measures ANOVAs the formula works the same way in practice (i.e. divide the SS in the top row of the ANOVA table by the total).
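To make this concrete, here is a minimal sketch in R of pulling the sums of squares out of a one-way ANOVA table and dividing them. The data frame `data` and the variables `score` and `group` are placeholder names, not part of the original example:

```r
# Hypothetical one-way between-subjects ANOVA; 'data', 'score' and 'group'
# are placeholder names - substitute your own variables
mod <- aov(score ~ group, data = data)

ss <- summary(mod)[[1]][["Sum Sq"]]  # sums of squares: effect, then residuals
ss[1] / sum(ss)                      # eta-squared = SS_effect / SS_total
```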
A related version of this is partial eta-squared (\(\eta^2_p\)). This describes the effect of the IV after accounting for the variance explained by other factors. This is not so relevant for one-way designs - regular and partial eta-squared will give the same answer - but becomes more relevant when you move into factorial designs where you have more than one IV.
The formula for partial eta-squared is:
\[ \eta^2_p = \frac{SS_{effect}}{SS_{effect} + SS_{error}} \]
Where \(SS_{error}\) is the sum of squares for the error/residual term in the ANOVA. In general, it's good practice to default to reporting at least partial eta-squared. Again, in a one-way ANOVA this will give the same answer as regular eta-squared, but in a factorial design (where you have more than one IV) it gives a more precise estimate of each variable's effect size.
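Continuing the hypothetical sketch from above, partial eta-squared can be computed by hand from the same two sums of squares:

```r
# Partial eta-squared: SS_effect / (SS_effect + SS_error)
ss[1] / (ss[1] + ss[2])
# In a one-way design this gives exactly the same value as eta-squared
```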
One more variant you will see is generalised eta-squared (\(\eta^2_G\)), which is similar to the partial variant. While partial eta-squared is useful, it is sensitive to design - in other words, which variables are included in the analysis will influence the calculation of \(\eta^2_p\), meaning that it is only really comparable across studies with similar designs. However, not every design manipulates every predictor (e.g. gender is measured rather than manipulated), and generalised eta-squared (\(\eta^2_G\)) takes this into account. This effect size is best used in meta-analyses.
To calculate eta-squared in R there are, as usual, two ways. The first is to use the anova_test() function in rstatix. By default, anova_test() (and other ANOVA-related packages in R) will calculate generalised eta-squared (labelled ges in the output). If you want partial eta-squared, you just need to give anova_test() the extra argument effect.size = "pes" as follows:
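Here is a sketch of what that call might look like for a repeated-measures design like the one in the output below. The data frame `data` and the columns `id`, `time` and `score` are assumed names, not from the original example:

```r
library(rstatix)

# Assumed long-format data frame 'data' with an id column, a within-subjects
# factor 'time', and an outcome 'score' (placeholder names)
anova_test(data, dv = score, wid = id, within = time, effect.size = "pes")
```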
## ANOVA Table (type III tests)
##
## $ANOVA
## Effect DFn DFd F p p<.05 pes
## 1 time 2 188 132.625 1.19e-36 * 0.585
##
## $`Mauchly's Test for Sphericity`
## Effect W p p<.05
## 1 time 0.939 0.054
##
## $`Sphericity Corrections`
## Effect GGe DF[GG] p[GG] p[GG]<.05 HFe DF[HF] p[HF]
## 1 time 0.943 1.89, 177.21 1.05e-34 * 0.961 1.92, 180.72 2.45e-35
## p[HF]<.05
## 1 *
Note that, as mentioned above, partial eta-squared and regular eta-squared are equivalent in a one-way ANOVA of any kind, so reporting this value is fine.
Otherwise, our trusty effectsize package provides functions for calculating effect sizes in R. We use a function called - you guessed it - eta_squared(). This function works in much the same way as the other effectsize functions: you give it an ANOVA model to calculate effect sizes for.
Here is an example with a one-way ANOVA, which we will look at on the next page:
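A sketch of what that call might look like (the model object and variable names here are assumptions; the actual data are introduced on the next page):

```r
library(effectsize)

# Hypothetical one-way between-subjects model; 'data', 'score' and 'group'
# are placeholder names
mod <- aov(score ~ group, data = data)
eta_squared(mod)
```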
## For one-way between subjects designs, partial eta squared is equivalent
## to eta squared. Returning eta squared.
7.4.2 An example
Here’s the ANOVA table we worked out by hand in Section 9.3:
| | Sums of squares (SS) | Degrees of freedom (df) | Mean square (MS) | F | p |
|---|---|---|---|---|---|
| Group (our effect) | 42 | 2 | 21 | 15.75 | |
| Error/Residual | 12 | 9 | 1.33 | | |
| Total | 54 | | | | |
If we were to calculate an eta-squared value for this, we would use \(SS_{effect}\) (42) and \(SS_{total}\) (54) like so:
\[ \eta^2 = \frac{42}{54} = .778 \]
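You can verify this arithmetic directly in R:

```r
# Eta-squared from the worked example: SS_effect = 42, SS_total = 54
42 / 54
## [1] 0.7777778
```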
7.4.3 Interpreting eta-squared
Cohen (1988) provided the following guidelines for interpreting eta-squared:
- \(\eta^2\) = .01 is a small effect
- \(\eta^2\) = .06 is a medium effect
- \(\eta^2\) = .14 is a large effect
With these guidelines (which aren't perfect), an \(\eta^2\) of .778 is astronomically huge.
Alternatively, when it comes to eta-squared you can avoid benchmarks entirely and interpret it directly as the percentage of variance in the DV explained by the IV, as described above.