StatsDirect has been designed to be exceptionally easy to use and contains example-rich help. But for those wishing to have more statistical instruction in the form of a book, please see Statistical Testing in Practice with StatsDirect, by Cole Davis: http://www.coledavis.org/book.html
Written in accessible English without mathematical formulae, this book introduces statistical testing and takes the reader from simple analyses of differences all the way to multiple regression, factor analysis and survival analysis. The statistics software cited in the text is cheaper and, crucially, easier to use than software such as SPSS, offering an escape route for postgraduates experiencing difficulties with research projects. The book caters for students and professionals in health, social sciences, marketing and other fields. Unlike many statistics texts, it avoids irrelevant themes and mathematical proofs, instead building up the reader's knowledge in easy steps, with clear worked examples and practical advice on what to do. Readers who learn statistics with this book should then be able to progress to other software as and when they choose.
Please note that the above-mentioned book has been written by a third party and is published independently of StatsDirect. StatsDirect bears no responsibility for any errors or omissions.
StatsDirect is written for Microsoft Windows but can run on a Mac in a virtualised Windows environment.
Many users run StatsDirect on a Mac with Microsoft Windows installed via VMware Fusion or Parallels Desktop.
Please ensure you have applied all updates to the version of Windows you are running on your Mac before installing StatsDirect.
The theoretical basis of the methods used in StatsDirect should be cited as listed in the reference section of the help system in StatsDirect.
A doctoral thesis describing the scientific foundations of StatsDirect is at http://www.statsdirect.com/thesis/md.pdf
Download and run the sdxlremover.exe file if you need to remove the StatsDirect 2 add-in for Excel.
For either StatsDirect version 2 or 3, go to the Tools > Setup Tools menu and set the Excel On/Off option to Off.
Total = 30, responses = 0, proportion = 0
Exact (Clopper-Pearson) 95% confidence interval = 0 to 0.115703
Approximate (Wilson) 95% mid-P confidence interval = 0 to 0.113513
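For zero responses both limits have simple closed forms, so the quoted figures can be checked by hand. The sketch below is illustrative Python, not StatsDirect's own code; note that with zero successes the Wilson score upper limit reproduces the quoted approximate mid-P figure.

```python
def clopper_pearson_upper_zero(n, alpha=0.05):
    """Exact (Clopper-Pearson) upper limit when 0 of n respond.

    In general the upper limit is a Beta(x+1, n-x) quantile; for
    x = 0 this reduces to the closed form 1 - (alpha/2)**(1/n).
    The lower limit is 0.
    """
    return 1.0 - (alpha / 2.0) ** (1.0 / n)

def wilson_upper_zero(n, z=1.959964):
    """Wilson score upper limit when 0 of n respond.

    With p-hat = 0 the Wilson interval collapses to z^2 / (n + z^2);
    the lower limit is 0.
    """
    return z * z / (n + z * z)

n = 30
print(f"Exact (Clopper-Pearson) upper limit: {clopper_pearson_upper_zero(n):.6f}")
print(f"Wilson upper limit: {wilson_upper_zero(n):.6f}")
```

Running this for n = 30 gives upper limits of about 0.1157 and 0.1135, matching the output above.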
In reply to a specific question about the Newman-Keuls test: this is one of the methods
of multiple comparison that builds in "conservativeness" in order
to avoid the type I errors that can be associated with dredging your data for differences.
It is a controversial area: Peter Armitage gives an excellent discussion of such
methods and provides examples in:
Armitage P, Berry G. Statistical Methods in Medical Research. Blackwell 1994.
Miller RG (Jr). Simultaneous Statistical Inference (2nd edition). Springer-Verlag.
Hsu JC. Multiple Comparisons. Chapman and Hall 1996.
The standard 2x2 table for a test result against true status is:

                  Disease present    Disease absent
Test positive     a (true +ve)       b (false +ve)
Test negative     c (false -ve)      d (true -ve)

Sensitivity = a/(a+c)
Specificity = d/(b+d)
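These two definitions can be sketched as a small function; the counts in the usage line are hypothetical, for illustration only.

```python
def sensitivity_specificity(a, b, c, d):
    """Sensitivity and specificity from a 2x2 test-vs-truth table.

    a = true positives, b = false positives,
    c = false negatives, d = true negatives.
    """
    sensitivity = a / (a + c)   # proportion of diseased correctly detected
    specificity = d / (b + d)   # proportion of healthy correctly cleared
    return sensitivity, specificity

# Hypothetical counts for illustration only.
sens, spec = sensitivity_specificity(a=90, b=20, c=10, d=80)
print(f"Sensitivity = {sens:.2f}, Specificity = {spec:.2f}")
```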
So you can use the population survey sample size calculation for the target
sensitivity % or specificity % within a specified tolerance and probability of being
wrong (i.e. of the estimate not falling within that tolerance).
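As an illustration of that calculation, here is the standard normal-approximation sample size formula for estimating a proportion, n = z^2 p(1-p) / d^2. This is a textbook sketch; StatsDirect's own survey routine may use an exact method and give slightly different answers.

```python
import math

def sample_size_proportion(p, tolerance, z=1.959964):
    """Approximate sample size needed to estimate a proportion p to
    within +/- tolerance, with confidence set by z (1.96 for 95%).

    Uses the normal-approximation formula n = z^2 p(1-p) / d^2,
    rounded up to the next whole subject.
    """
    n = z * z * p * (1.0 - p) / (tolerance ** 2)
    return math.ceil(n)

# e.g. a target sensitivity of 90% estimated to within +/- 5%:
print(sample_size_proportion(0.90, 0.05))
```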
Some useful references are:
Ware JH. Linear models for the analysis of longitudinal studies. The American Statistician
Davis CS. A computer program for non-parametric analysis of incomplete repeated
measures for two samples. Computer Methods and Programs in Biomedicine 1994;42:39-52.
Davis CS, Hall DB. A computer program for the regression analysis of ordered categorical
repeated measurements. Computer Methods and Programs in Biomedicine 1995;51:153-169.
StatsDirect uses very precise calculation methods in order to keep calculation error
to a minimum.
In previous discussion archives, Dr Barry Tennison gave the following easy-to-understand
explanation of rounding error:
"Inside all (normal) computers, numbers are represented in binary (in various
forms like fixed point or floating point). Since one cannot include an infinite
number of bits (binary digits) after the decimal point, the only numbers that are
represented exactly are those that can be expressed as a fraction with denominator
a power of two (just as the only terminating decimals are those expressible as a
fraction with denominator a power of ten). For example, one third (1/3) cannot be
expressed as a terminating (finite) decimal or binary number. Therefore the INTERNAL
forms of numbers like this represent the intended (exact) numbers only approximately.
The apparent rounding errors result from this, rather than from any inaccuracy of [...]"
The following is a more detailed introduction to this subject:
Numerical precision and error
"Although this may seem a paradox, all exact science is dominated by the idea
of approximation."
Russell, Bertrand (1872-1970)
Numbers with fractional parts (real/floating-point as opposed to integer/fixed-point
numbers) cannot all be fully represented in binary computers because computers cannot
hold an infinite number of bits (binary digits) after the decimal point. The
only real numbers that are represented exactly are those that can be expressed as
a fraction with denominator that is a power of two (e.g. 0.25); just as the only
terminating (finite) decimals are those expressible as a fraction with denominator
that is a power of ten (e.g. 0.1). Many real numbers, one third for example,
cannot be expressed as a terminating decimal or binary number. Binary computers
therefore represent many real numbers in approximate form only; the global standard
for doing this is IEEE Standard Floating-Point Representation (IEEE, 1985).
Numerical algorithms written for the Microsoft .NET platform comply with IEEE Standard Floating-Point Representation. All real numbers in StatsDirect are handled
in double precision.
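The power-of-two point above is easy to demonstrate in any environment that uses IEEE doubles, for example Python:

```python
# 0.25 = 1/4 has a power-of-two denominator, so it is stored exactly
# in binary; sums of exact values stay exact.
print(0.25 + 0.25 + 0.25 + 0.25 == 1.0)   # exact

# 0.1 = 1/10 has no power-of-two denominator, so it is stored only
# approximately, and the approximation error shows up in arithmetic.
total = 0.1 + 0.1 + 0.1
print(total == 0.3)    # False: neither 0.1 nor 0.3 is exact in binary
print(repr(total))     # shows the accumulated representation error
```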
Arithmetic among floating-point numbers is subject to error. The smallest
floating-point number which, when added to 1.0, produces a floating-point number
different from 1.0 is termed the machine accuracy em
(Press et al., 1992). In IEEE double precision,
em is approximately 2.22 x 10^-16. Most arithmetic operations among floating-point numbers
produce a so-called round-off error of at least em.
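The machine accuracy for IEEE double precision can be inspected directly; this Python sketch shows both its value and its defining property:

```python
import sys

# IEEE double-precision machine accuracy (em in the text above).
eps = sys.float_info.epsilon
print(eps)                       # approximately 2.22e-16

# em is the smallest value whose addition changes 1.0:
print(1.0 + eps != 1.0)          # True: eps is resolvable against 1.0
print(1.0 + eps / 2 == 1.0)      # True: half of eps is lost in rounding
```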
Some round-off errors are characteristically large, for example the subtraction
of two almost equal numbers. Round-off errors in a series of arithmetic operations
seldom fluctuate randomly up and down, so they tend not to cancel out. Large round-off error at the beginning
of a series of calculations can become magnified such that the result of the series
is substantially imprecise, a condition known as instability. Algorithms in
StatsDirect were assessed for likely causes of instability and common stabilising
techniques, such as leaving division as late as possible in calculations, were employed.
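The cancellation problem, and the kind of algebraic rearrangement used to stabilise against it, can be illustrated with a textbook example (this is not taken from StatsDirect's code): computing sqrt(x^2 + 1) - x for large x subtracts two almost equal numbers, while the algebraically identical form 1 / (sqrt(x^2 + 1) + x) avoids the subtraction.

```python
import math

def naive(x):
    """sqrt(x^2 + 1) - x: subtracts two nearly equal numbers for large x."""
    return math.sqrt(x * x + 1.0) - x

def stable(x):
    """Algebraically identical form that avoids the subtraction."""
    return 1.0 / (math.sqrt(x * x + 1.0) + x)

x = 1e8
print(naive(x))    # catastrophic cancellation: every significant digit lost
print(stable(x))   # close to the true value, about 5e-9
```

At x = 1e8 the naive form returns exactly 0.0 (the "+ 1.0" is entirely absorbed by round-off), while the stable form retains full precision.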
Another error inherent to numerical algorithms is the error associated with approximation
of functions; this is termed truncation error (Press et al., 1992). For example,
integration is usually performed by calculating a function at a large discrete number
of points, the difference between the solution obtained in this practical manner
and the true solution obtained by considering every possible point is the truncation
error. Most of the literature on numerical algorithms is concerned with minimisation
of truncation error. For each function approximation in StatsDirect, the most
precise algorithms practicable were written in the light of relevant, current literature.
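Truncation error of the kind described above can be demonstrated with the composite trapezoidal rule; this is an illustrative sketch, not StatsDirect's integrator:

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n intervals.

    The function is sampled at a finite number of points, so the
    result carries truncation error relative to the true integral;
    refining the sampling shrinks that error.
    """
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return total * h

true_value = 2.0  # integral of sin(x) over [0, pi]
coarse_err = abs(trapezoid(math.sin, 0.0, math.pi, 10) - true_value)
fine_err = abs(trapezoid(math.sin, 0.0, math.pi, 1000) - true_value)
print(coarse_err)  # truncation error with 10 sample intervals
print(fine_err)    # much smaller error with 1000 intervals
```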
IEEE. IEEE Standard for Binary Floating-Point Arithmetic, ANSI/IEEE Std 754. New York: Institute of Electrical and Electronics Engineers (IEEE) 1985.
Press WH, et al. Numerical Recipes: The Art of Scientific Computing (2nd edition). Cambridge University Press 1992.