Menu location: Analysis_Meta-Analysis_Effect Size.
Case-control studies of continuous outcomes (e.g. serum creatinine) may be investigated with respect to overall size of effect of an intervention. Meta-analysis may be used to investigate the combination or interaction of a group of independent studies, for example a series of effect sizes from similar studies conducted at different centres. This StatsDirect function examines the effect size within each stratum and across all of the studies/strata.
There are a number of statistical methods for estimating effect size; StatsDirect uses g (modified Glass statistic with pooled sample standard deviation) and the unbiased estimator d (Hedges and Olkin, 1985):

g = (μe − μc) / s, where s = √[((ne − 1)σe² + (nc − 1)σc²) / (N − 2)]

d = J(N − 2) g, where J(m) = Γ(m / 2) / [√(m / 2) Γ((m − 1) / 2)]

- where ne is the number in the experimental group, nc is the number in the control group, μe is the sample mean of the experimental group, μc is the sample mean of the control group, σe is the sample standard deviation for the experimental group, σc is the sample standard deviation for the control group, N = ne + nc, J(m) is the correction factor given m and Γ is the gamma function.
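As an illustration only (this is not StatsDirect's internal code), the definitions above can be sketched in a few lines of Python; `math.lgamma` provides the log gamma function needed to evaluate the exact correction factor J(m) without overflow:

```python
import math

def J(m):
    # Exact bias correction: J(m) = Gamma(m/2) / (sqrt(m/2) * Gamma((m-1)/2)),
    # computed on the log scale for numerical stability
    return math.exp(math.lgamma(m / 2) - math.lgamma((m - 1) / 2)) / math.sqrt(m / 2)

def effect_size(ne, me, se, nc, mc, sc):
    # g: standardized mean difference using the pooled sample SD;
    # d: the Hedges-Olkin unbiased estimator d = J(N - 2) * g
    N = ne + nc
    s_pooled = math.sqrt(((ne - 1) * se ** 2 + (nc - 1) * sc ** 2) / (N - 2))
    g = (me - mc) / s_pooled
    d = J(N - 2) * g
    return g, d
```

Note that J(m) is always slightly below 1 and approaches 1 as m grows, which is why d is a little smaller in magnitude than g, with the shrinkage most noticeable in small studies.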
For each study StatsDirect gives g with an exact confidence interval and d with an approximate confidence interval. An iterative method based on the non-central t distribution is used to construct the confidence interval for g (Hedges and Olkin, 1985).
The pooled mean effect size estimate (d+) is calculated using direct weights defined as the inverse of the variance of d for each study/stratum. An approximate confidence interval for d+ is given with a chi-square statistic and probability of this pooled effect size being equal to zero (Hedges and Olkin, 1985).
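A minimal sketch of the fixed effects pooling step, using the usual large-sample variance of d, var(d) = (ne + nc)/(ne·nc) + d²/(2(ne + nc)) (an approximation, so results will differ slightly from StatsDirect's exact calculations):

```python
import math

def pool_fixed(ds, nes, ncs):
    # Inverse-variance (fixed effects) pooled effect size d+ with an
    # approximate 95% confidence interval
    ws = []
    for d, ne, nc in zip(ds, nes, ncs):
        var = (ne + nc) / (ne * nc) + d ** 2 / (2 * (ne + nc))
        ws.append(1.0 / var)  # weight = 1 / var(d) for this study
    d_plus = sum(w * d for w, d in zip(ws, ds)) / sum(ws)
    se = 1.0 / math.sqrt(sum(ws))
    return d_plus, d_plus - 1.96 * se, d_plus + 1.96 * se
```

Because the weights are inverse variances, large studies (small var(d)) dominate the pooled estimate, and adding studies always narrows the confidence interval under this model.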
StatsDirect also gives the option to base effect size calculations on weighted mean difference (a non-standardized estimate unlike g and d) as described in the Cochrane Collaboration Handbook (Mulrow and Oxman, 1996).
The inconsistency of results across studies is summarised in the I² statistic, which is the percentage of variation across studies that is due to heterogeneity rather than chance – see the heterogeneity section for more information.
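The point estimate of I² follows directly from Cochran's Q and its degrees of freedom, I² = 100% × (Q − df)/Q, truncated at zero (the confidence interval requires a separate calculation not shown here):

```python
def i_squared(Q, df):
    # Higgins I^2: percentage of total variation across studies
    # attributable to heterogeneity rather than chance
    if Q <= df:
        return 0.0
    return 100.0 * (Q - df) / Q
```

Applied to the example output further down (Q = 7.737692 with 6 degrees of freedom), this gives the reported I² of 22.5%.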
Please note that the results from StatsDirect may be slightly different from the results you obtain using other packages or from those quoted in papers; this is due to StatsDirect's use of the exact bias correction, calculated from the gamma function, rather than an approximation.
You may enter the number, mean and standard deviation for the control and experimental groups of each study. Alternatively, you may enter just the numbers in the experimental groups, the numbers in the control groups and the effect size g (N.B. make sure you use g exactly as defined above).
The example data are from a personal communication from Dr N. Freemantle.
The following data represent test outcomes for six studies in which an educational intervention was investigated:
To analyse these data in StatsDirect, first prepare them in four workbook columns and label the columns appropriately. Alternatively, open the test workbook using the file open function of the file menu. Then select effect size from the meta-analysis section of the analysis menu, select the option to use mean, n and sd, and then select the columns 'Exptal. number', 'Exptal. mean', 'Exptal. SD', 'Control number', 'Control mean', 'Control SD' and 'Trial' as prompted.
For this example:
|Study|J(N-2)|g|Exact 95% CI|
|Study|N (exptal.)|N (control)|d|Approximate 95% CI|
Fixed effects (Hedges-Olkin)
Pooled effect size d+ = 0.612354 (95% CI = 0.421251 to 0.803457)
Z (test d+ differs from 0) = 6.280333 P < 0.0001
Non-combinability of studies
Cochran Q = 7.737692 (df = 6) P = 0.258
Moment-based estimate of between studies variance = 0.020397
I² (inconsistency) = 22.5% (95% CI = 0% to 67.1%)
Random effects (DerSimonian-Laird)
Pooled d+ = 0.627768 (95% CI = 0.403026 to 0.85251)
Z (test d+ differs from 0) = 5.474734 P < 0.0001
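The DerSimonian–Laird calculation shown above can be sketched as follows: the moment-based between-study variance τ² is estimated from Q, added to each study's within-study variance, and the inverse-variance pooling is then repeated with the inflated variances (again an approximate illustration, not StatsDirect's exact code):

```python
import math

def pool_random(ds, variances):
    # DerSimonian-Laird random effects pooling from per-study
    # effect sizes d and their (within-study) variances
    ws = [1.0 / v for v in variances]
    d_fixed = sum(w * d for w, d in zip(ws, ds)) / sum(ws)
    Q = sum(w * (d - d_fixed) ** 2 for w, d in zip(ws, ds))  # Cochran Q
    df = len(ds) - 1
    c = sum(ws) - sum(w * w for w in ws) / sum(ws)
    tau2 = max(0.0, (Q - df) / c)  # moment-based between-study variance
    ws_star = [1.0 / (v + tau2) for v in variances]  # inflated weights
    d_plus = sum(w * d for w, d in zip(ws_star, ds)) / sum(ws_star)
    se = 1.0 / math.sqrt(sum(ws_star))
    return d_plus, tau2, d_plus - 1.96 * se, d_plus + 1.96 * se
```

When Q ≤ df, τ² is truncated at zero and the random effects result coincides with the fixed effects result; otherwise the extra between-study variance widens the confidence interval, as in the example output above.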
Begg-Mazumdar: Kendall's tau = 0.238095 P = 0.5619 (low power)
Egger: bias = 1.439076 (95% CI = -2.5809 to 5.459051) P = 0.3997
Here we can say with 95% confidence, assuming a random effects model, that the true effect was at least 0.4 standard deviations greater for the group who received the educational intervention than for those who did not. Assuming a fixed effects model, a slightly stronger inference could be made (a lower confidence limit of 0.42), but the observed inter-study variation makes the fixed effects model less appropriate.