
Statistical Power Analysis: A Simple and General Model for Traditional and Modern Hypothesis Tests, Third Edition

Noted for its accessible approach, this bestseller applies power analysis to both null hypothesis and minimum-effect testing using the same basic model. Through a few relatively simple procedures and examples from the behavioral and social sciences, the authors show readers with little expertise in statistical analysis how to quickly obtain the values needed to carry out a power analysis for their research. Illustrations of how these analyses work, how they can be used to understand problems of study design, to evaluate research, and to choose an appropriate criterion for defining "statistically significant" outcomes, are sprinkled throughout.

The book presents a simple and general model for statistical power analysis based on the F statistic. Ideal for students and researchers in the social, behavioral, and health sciences, business, and education, this valuable resource helps readers apply methods of power analysis to their research. PV and F tables serve as a quick reference. More details, plus a link to download the One Stop F Calculator, can be found at .

224 pages, Hardcover

First published August 1, 1995

Community Reviews

5 stars: 0 (0%) · 4 stars: 8 (57%) · 3 stars: 4 (28%) · 2 stars: 2 (14%) · 1 star: 0 (0%)
Displaying 1 - 2 of 2 reviews
110 reviews · 16 followers
October 31, 2016
tl;dr - fix the null hypothesis, use confidence intervals, and avoid moderated relationships

The null hypothesis has always bothered me, principally because you claim to be testing for a thing you hope not to find. This book lays out a more sinister problem: the odds of literally "no change" are so vanishingly small that a 'significant' deviation from that null can be manufactured simply by collecting more samples. Moving to a distinction between the nil hypothesis (exactly zero effect) and a minimum-effect null (a negligible effect, defined by the authors as one accounting for less than 1% of variance) does change the statistics by forcing use of the noncentral F distribution, but it finally allows the null hypothesis to actually test something.

All of this is in the context of Type I and Type II errors. When you use the traditional null hypothesis, your chances of actually having "no change" are so small that the risk of a Type I error is already essentially zero, and yet most statistical methods revolve around preventing it. The Type II error has been largely ignored, although the last 20 years have shown increasing awareness of the risk of failing to find an effect that actually does exist. This book lays the necessary groundwork to understand statistical power in a largely conversational fashion.

Cut to the chase: α is your willingness to accept a Type I error, and is 0.05 by convention. β is your willingness to accept a Type II error, and is 0.2 by convention, giving a power of 0.8. Change those if you want, but it will make for an uphill battle in peer review and won't actually change the risks as much as you imagine. Then report the observed value (versus the nil or the null hypothesis) with a confidence interval, so that the amount of change can be seen in contrast to the 'no change' effect.
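To make the distinction concrete, here is a minimal sketch of the kind of calculation the book tabulates, using SciPy's noncentral F distribution. The numbers (one predictor, n = 100, a hypothesized effect explaining 10% of variance) are my own illustrative assumptions, not values from the book:

```python
from scipy.stats import f as f_dist, ncf

alpha, df1 = 0.05, 1          # conventional Type I risk; 1 predictor
n = 100                       # hypothetical sample size
df2 = n - df1 - 1

# Hypothesized alternative: predictor explains 10% of variance (PV = 0.10)
pv = 0.10
lam = n * pv / (1 - pv)       # noncentrality parameter under the alternative

# Traditional (nil) test: critical value from the central F
f_crit_nil = f_dist.ppf(1 - alpha, df1, df2)
power_nil = 1 - ncf.cdf(f_crit_nil, df1, df2, lam)

# Minimum-effect test: the null now allows effects up to 1% of variance,
# so the critical value comes from a *noncentral* F under that null
lam0 = n * 0.01 / (1 - 0.01)
f_crit_min = ncf.ppf(1 - alpha, df1, df2, lam0)
power_min = 1 - ncf.cdf(f_crit_min, df1, df2, lam)

print(f"power vs nil: {power_nil:.2f}, vs 1% minimum effect: {power_min:.2f}")
```

The minimum-effect test is harder to pass (its critical value is larger), so its power against the same alternative is lower, which is exactly the trade for testing a null that could actually be true.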

At the end of the day, the real problem comes down to sample size. If you're looking at over 1,000 data points, you're almost certainly fine; if you're looking at fewer than 100, you're almost certainly at risk. I was hoping to learn something beyond the "just collect more" perspective of my Stat Mech class, but apparently the state of the art isn't there yet.
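The "just collect more" point can be sketched directly: holding a small effect fixed (here an assumed 5% of variance, one predictor; my numbers, not the book's), power climbs with n simply because the noncentrality parameter grows with it:

```python
from scipy.stats import f as f_dist, ncf

alpha, df1, pv = 0.05, 1, 0.05   # assume a small effect: 5% of variance

powers = []
for n in (50, 100, 200, 400, 1000):
    df2 = n - df1 - 1
    lam = n * pv / (1 - pv)              # noncentrality grows with n
    f_crit = f_dist.ppf(1 - alpha, df1, df2)
    powers.append(1 - ncf.cdf(f_crit, df1, df2, lam))
    print(f"n={n:5d}  power={powers[-1]:.2f}")
```

At 50 points the test is badly underpowered; by 1,000 it is essentially certain to reject, which is the reviewer's point about sheer sample size doing the work.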
