Statistical Power Analysis: A Simple and General Model for Traditional and Modern Hypothesis Tests

This book presents a simple and general method for conducting statistical power analysis based on the widely used F statistic. It illustrates how these analyses work and how they can be applied to problems of study design, to evaluating others' research, and to choosing the appropriate criterion for defining "statistically significant" outcomes. Statistical Power Analysis examines the four major applications of power analysis, concentrating on how to determine:

* the sample size needed to achieve desired levels of power;
* the level of power that is needed in a study;
* the size of effect that can be reliably detected by a study; and
* sensible criteria for statistical significance.


Highlights of the second edition include a CD with an easy-to-use statistical power analysis program; a new chapter on power analysis in multi-factor ANOVA, including repeated-measures designs; and a new One-Stop PV Table to serve as a quick reference guide.


The book discusses the application of power analysis to both traditional null hypothesis tests and to minimum-effect testing. It demonstrates how the same basic model applies to both types of testing and explains how some relatively simple procedures allow researchers to ask a series of important questions about their research. Drawing from the behavioral and social sciences, the authors present the material in a nontechnical way so that readers with little expertise in statistical analysis can quickly obtain the values needed to carry out the power analysis.


Ideal for students and researchers of statistical and research methodology in the social, behavioral, and health sciences who want to know how to apply methods of power analysis to their research.

128 pages, Hardcover

First published August 1, 1995


Community Reviews

5 stars: 0 (0%)
4 stars: 8 (57%)
3 stars: 4 (28%)
2 stars: 2 (14%)
1 star: 0 (0%)
110 reviews · 16 followers
October 31, 2016
tl;dr - fix the null hypothesis, use confidence intervals, and avoid moderated relationships

The null hypothesis has always bothered me, principally because you claim to be testing for a thing you hope not to find. This book lays out a more sinister problem: the odds of literally "no change" are so vanishingly small that a 'significant' deviation from that null can be manufactured simply by collecting more samples. Moving to a distinction between the nil hypothesis (zero change) and a minimum-effect null (negligible change, defined by the authors as an effect accounting for less than 1% of variance) does change the statistics by forcing use of the noncentral F distribution, but it finally allows the null hypothesis to actually test something.
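To make that concrete, here is a minimal sketch (mine, not the book's CD program or tables) of how the minimum-effect idea plays out numerically. It assumes the common PV-based noncentrality formula λ = N·PV/(1 − PV); the effect sizes and sample size below are made up for illustration.

```python
# Sketch only: power of an F test against a nil null vs. a 1%-of-variance
# minimum-effect null, using scipy's central (f) and noncentral (ncf) F.
from scipy.stats import f, ncf

def f_test_power(pv_true, n, df1, pv_null=0.0, alpha=0.05):
    """Power to reject a null that allows effects up to pv_null
    (pv_null=0.0 is the traditional nil hypothesis)."""
    df2 = n - df1 - 1
    lam_true = n * pv_true / (1 - pv_true)   # noncentrality for the real effect
    lam_null = n * pv_null / (1 - pv_null)   # noncentrality allowed under the null
    if pv_null == 0.0:
        crit = f.ppf(1 - alpha, df1, df2)              # central F critical value
    else:
        crit = ncf.ppf(1 - alpha, df1, df2, lam_null)  # noncentral F critical value
    return 1 - ncf.cdf(crit, df1, df2, lam_true)       # P(reject | true effect)

# A trivial 2%-of-variance effect with a huge sample:
print(f_test_power(0.02, 2000, df1=1))                # ~1.0 vs. the nil null
print(f_test_power(0.02, 2000, df1=1, pv_null=0.01))  # roughly 0.6 vs. the 1% null
```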

All of this is in the context of Type I and Type II errors. When you use the traditional null hypothesis, the chance that there is truly "no change" is so small that the risk of a Type I error is already essentially zero, and yet most statistical methods revolve around preventing it. The Type II error has been largely ignored, although the last 20 years have shown increasing awareness of the risk of failing to detect an effect that actually exists. This book lays the necessary groundwork for understanding statistical power in a largely conversational fashion.

Cut to the chase: α is your willingness to accept a Type I error, and is 0.05 by convention. β is your willingness to accept a Type II error, and is 0.2 by convention, giving a power of 0.8. Change those if you want, but it will make for an uphill battle in peer review and won't actually change the risks as much as you imagine. Then report the estimated effect (versus the nil or the minimum-effect null) with a confidence interval, so the size of the change can be judged against the 'no change' point.
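For what it's worth, the α = 0.05 / power = 0.8 conventions and the "report a confidence interval" advice look something like this in practice. This is my own sketch, not anything from the book; the effect size (Cohen's d = 0.5) and the simulated data are purely illustrative.

```python
# Sketch: conventional alpha = 0.05 and power = 0.8, plus a CI alongside the p-value.
import numpy as np
from scipy import stats
from statsmodels.stats.power import TTestIndPower

# Sample size per group needed to detect a medium effect (d = 0.5) at power 0.8
n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(round(n_per_group))  # about 64 per group

# Report the estimated difference with a 95% CI, not just "p < .05"
rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, 64)
b = rng.normal(0.5, 1.0, 64)
res = stats.ttest_ind(a, b)
ci = res.confidence_interval(0.95)  # available in SciPy >= 1.10
print(res.pvalue, (ci.low, ci.high))
```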

At the end of the day, the real problem comes down to the size of the sample being studied. If you're looking at over 1000 data points, you're almost certainly fine. If you're looking at fewer than 100, you're almost certainly at risk. I was hoping to learn something beyond the "just collect more" perspective of my Stat Mech class, but apparently the state of the art isn't there yet.
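A quick back-of-the-envelope check of that 100 vs. 1000 rule of thumb (my numbers, not the book's): for a small standardized effect, power moves from hopeless to comfortable as the total sample grows by an order of magnitude.

```python
# Sketch: power for a small effect (Cohen's d = 0.2) at conventional alpha = 0.05,
# comparing a total sample of 100 with a total sample of 1000 (two equal groups).
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for total_n in (100, 1000):
    power = analysis.power(effect_size=0.2, nobs1=total_n // 2, alpha=0.05)
    print(total_n, round(power, 2))  # roughly 0.17 at N=100 vs. 0.88 at N=1000
```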
