
Software Reliability: Principles and Practices

Deals constructively with recognized software problems. Focuses on the unreliability of computer programs and offers state-of-the-art solutions. Covers software development, software testing, structured programming, composite design, language design, proofs of program correctness, and mathematical reliability models. Written in an informal style for anyone whose work is affected by the unreliability of software. Examples illustrate key ideas; includes over 180 references.

360 pages, Hardcover

First published September 22, 1976

About the author

Glenford J. Myers

11 books · 19 followers

Community Reviews

5 stars: 7 (50%)
4 stars: 5 (35%)
3 stars: 1 (7%)
2 stars: 0 (0%)
1 star: 1 (7%)
Displaying 1 of 1 review
thirtytwobirds
105 reviews · 55 followers
February 15, 2014
This book was written in 1976, and the biggest surprise you'll have when reading it is that we haven't really learned much of anything new about software reliability in almost 40 years.

This book is a wonderful tour of software reliability. I'm expecting to read it about once a year for the next five years or so and I'm sure I'll get something new out of it every time.

Don't let the outdated programming language examples (most of them are in PL/I) scare you. There are a lot of good concepts in the book, such as:

* Software reliability is a measure not only of the frequency of errors, but of their severity as well. Trading many minor annoyances for protection against catastrophic failure may be worth it.
* Everything is a tradeoff, though not always in the way you might think. One might expect that adding extra security measures (preventing users from editing others' files, etc.) would decrease reliability because it makes the software bigger. But good security often relies on proper isolation of things inside the software, which actually helps reliability.
* "Testing" is a word that's not defined as well as it should be. A useful philosophy and definition is "testing is the act of executing a program and trying to find previously-unknown errors". This seems reasonable, but it leads to startling conclusions: if you write some unit tests and they all pass, *you have failed at your goal*. It's certainly a useful mindset to have; it just takes some getting used to (see the first sketch after this list).
* Bugs often cluster together instead of being distributed evenly throughout a system. This seems pretty obvious -- the more complex parts of a system will surely have more bugs than the straightforward parts. But once you accept this, you have to accept that finding a bug in a section of a program makes it *more likely* that there are *other* bugs in that section! It's natural to think "okay, I've fixed a bug here, so this section is more stable now", but because errors tend to cluster, a found bug is actually a sign that the section is even *less* stable than you thought (see the second sketch after this list).
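
To make the testing bullet concrete, here is a minimal sketch in Python (not PL/I, and not taken from the book) of what "executing a program with the intent of finding errors" looks like in practice: the cases are chosen around boundaries and invalid inputs, where unknown errors are most likely to hide. The `parse_age` helper is hypothetical.

```python
import unittest


def parse_age(text: str) -> int:
    """Hypothetical function under test: parse a human age from a string."""
    value = int(text)
    if value < 0 or value > 150:
        raise ValueError("age out of range")
    return value


class TestParseAge(unittest.TestCase):
    # Boundary values: errors tend to sit at the edges of valid ranges.
    def test_lower_boundary(self):
        self.assertEqual(parse_age("0"), 0)

    def test_upper_boundary(self):
        self.assertEqual(parse_age("150"), 150)

    def test_just_outside_range(self):
        with self.assertRaises(ValueError):
            parse_age("151")

    # Invalid input: the point of these cases is to try to expose an
    # error, not to confirm that the happy path works.
    def test_non_numeric_input(self):
        with self.assertRaises(ValueError):
            parse_age("twelve")

    def test_empty_input(self):
        with self.assertRaises(ValueError):
            parse_age("")


if __name__ == "__main__":
    unittest.main()
```

Under that definition, a run where every one of these passes has found nothing new; the "successful" test is the one that surprises you.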

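The clustering bullet can also be made quantitative with a toy model. The sketch below uses my own illustrative numbers, not anything from the book: it assumes most modules are "clean" and a minority are "troubled", and uses Bayes' rule to show that finding one bug in a module raises the estimated defect rate of that module.

```python
from math import exp

# Toy model (illustrative numbers, not from the book): assume 80% of
# modules are "clean" and average 1 defect, while 20% are "troubled" and
# average 9 defects, with defect counts Poisson-distributed per module.
P_CLEAN, LAM_CLEAN = 0.8, 1.0
P_TROUBLED, LAM_TROUBLED = 0.2, 9.0

# Expected defect rate of a randomly chosen module, before any testing.
prior_rate = P_CLEAN * LAM_CLEAN + P_TROUBLED * LAM_TROUBLED  # = 2.6

# Now suppose testing turns up at least one defect in the module.
# Bayes' rule: how likely is it that this module is a "troubled" one?
p_hit_clean = 1 - exp(-LAM_CLEAN)        # P(>= 1 defect | clean)
p_hit_troubled = 1 - exp(-LAM_TROUBLED)  # P(>= 1 defect | troubled)
p_hit = P_CLEAN * p_hit_clean + P_TROUBLED * p_hit_troubled
p_troubled_given_hit = P_TROUBLED * p_hit_troubled / p_hit

# Expected defect rate of the module, given that a bug was found in it.
posterior_rate = ((1 - p_troubled_given_hit) * LAM_CLEAN
                  + p_troubled_given_hit * LAM_TROUBLED)

print(f"expected defect rate before finding a bug: {prior_rate:.1f}")
print(f"expected defect rate after finding a bug:  {posterior_rate:.1f}")
```

With these numbers the estimated defect rate jumps from 2.6 to about 3.3: finding one bug is evidence that the module belongs to the bug-prone population, which is exactly the counterintuitive conclusion the review describes.
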
If you write software for a living, you really should give this a read. Skim the examples and try not to be put off by the old-looking code. And try not to get discouraged when you see Myers talk about running programs in virtual machines with a hypervisor to increase isolation, and you realize this was written almost forty years ago. There's a lot of good stuff in here; dig in!