
Statistical Power Analysis: A Simple and General Model for Traditional and Modern Hypothesis Tests

Power analysis is an increasingly important concern in research, particularly in the social and behavioural sciences, where low power is a well-recognized problem. Power analysis helps researchers plan and evaluate studies, and helps consumers of research make sense of what they read. This book presents a simple and general method for conducting power analysis across a wide range of studies, and extends it beyond the traditional, often criticized, hypothesis tests to modern testing methods. These tests ask whether the effects of treatments, interventions, etc., are large enough to warrant attention, rather than whether any effect, however small, exists.

128 pages, Hardcover

First published August 1, 1995

Ratings & Reviews

Community Reviews

5 stars: 0 (0%)
4 stars: 8 (57%)
3 stars: 4 (28%)
2 stars: 2 (14%)
1 star: 0 (0%)
Displaying 1 - 2 of 2 reviews
110 reviews · 16 followers
October 31, 2016
tl;dr - fix the null hypothesis, use confidence intervals, and avoid moderated relationships

The null hypothesis has always bothered me, principally because you claim to be testing for a thing you hope not to find. This book lays out a more sinister problem: the odds of literally "no change" are so vanishingly small that a 'significant' deviation from that null can be manufactured simply by collecting more samples. Moving from a nil hypothesis (exactly zero change) to a minimum-effect null (a trivially small change, defined by the authors as less than 1%) does change the statistics by forcing use of the noncentral F distribution, but it finally allows the null hypothesis to actually test something.
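The nil-versus-minimum-effect distinction can be sketched numerically: under a null that permits effects up to some small size, the rejection cutoff comes from a noncentral F rather than the central F. A minimal sketch using scipy — the sample size, degrees of freedom, and the noncentrality formula N·PV/(1−PV) for a 1%-of-variance minimum effect are illustrative assumptions, not figures from the book:

```python
from scipy.stats import f, ncf

def critical_f(dfn, dfd, alpha=0.05, min_effect_nc=0.0):
    """Critical F value for rejecting the null at level alpha.

    min_effect_nc = 0 gives the traditional nil-hypothesis cutoff
    (central F); a positive noncentrality encodes a minimum-effect
    null such as "the treatment explains no more than 1% of variance".
    """
    if min_effect_nc == 0.0:
        return f.ppf(1 - alpha, dfn, dfd)
    return ncf.ppf(1 - alpha, dfn, dfd, min_effect_nc)

# Illustrative: N = 100 observations, 1 numerator / 98 denominator df.
# Noncentrality for a 1%-of-variance minimum effect: N * PV / (1 - PV).
nc_1pct = 100 * 0.01 / 0.99
nil_cut = critical_f(1, 98)                       # classic nil cutoff
min_cut = critical_f(1, 98, min_effect_nc=nc_1pct)
# min_cut > nil_cut: the observed F must clear a higher bar to rule
# out trivially small effects, not merely an exact zero.
```

The higher cutoff is the whole point: piling on samples no longer buys a rejection, because the null now concedes that tiny effects exist.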

All of this is in the context of Type I and Type II errors. When you use the traditional nil hypothesis, the chance that there is truly "no change" is so small that the risk of a Type I error is already essentially zero, and yet most statistical methods revolve around preventing it. The Type II error has been largely ignored, although the last 20 years have shown increasing awareness of the risk of failing to find an effect that actually exists. This book lays the necessary groundwork for understanding statistical power in a largely conversational fashion.

Cut to the chase: α is your willingness to accept a Type I error, 0.05 by convention; β is your willingness to accept a Type II error, 0.2 by convention, giving a power of 1 − β = 0.8. Change those if you want, but it will make for an uphill battle in peer review and won't actually change the risks as much as you imagine. Then report the estimated effect (versus the nil or the minimum-effect null) with a confidence interval, so the amount of change can be seen in contrast to the 'no change' hypothesis.
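That α/β bookkeeping translates directly into a sample-size calculation. A minimal stdlib-only sketch for a two-sided one-sample z-test under a normal approximation — the effect size of 0.5 and the function names are illustrative assumptions, not the book's worked example:

```python
from statistics import NormalDist

def z_test_power(effect_size, n, alpha=0.05):
    """Approximate power of a two-sided one-sample z-test."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)   # 1.96 for alpha = 0.05
    shift = effect_size * n ** 0.5       # mean shift of the test statistic under H1
    # Probability the statistic lands in either rejection region.
    return nd.cdf(shift - z_crit) + nd.cdf(-shift - z_crit)

def n_for_power(effect_size, target=0.8, alpha=0.05):
    """Smallest n whose power reaches the target (conventional 0.8)."""
    n = 2
    while z_test_power(effect_size, n, alpha) < target:
        n += 1
    return n
```

With the conventional α = 0.05 and β = 0.2, a medium effect of 0.5 needs roughly 32 observations, which is why the under-100 studies the reviewer worries about below are so often underpowered for anything subtle.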

At the end of the day, the real problem comes down to sample size. If you're looking at over 1,000 data points, you're almost certainly fine; if you're looking at fewer than 100, you're almost certainly at risk. I was hoping to learn something beyond the "just collect more" perspective of my Stat Mech class, but apparently the state of the art isn't there yet.