Code Complete
Localize references to variables. The code between references to a variable is a “window of vulnerability.” In the window, new code might be added, inadvertently altering the variable, or someone reading the code might forget the value the variable is supposed to contain. It’s always a good idea to localize references to variables by keeping them close together. The idea of localizing references to a variable is pretty self-evident, but it’s an idea that lends itself to formal measurement. One method of measuring how close together the references to a variable are is to compute the “span” of a variable…
Data types and control structures relate to each other in well-defined ways that were originally described by the British computer scientist Michael Jackson (Jackson 1975).
Avoid variables with these kinds of hidden meanings. The technical name for this kind of abuse is “hybrid coupling” (Page-Jones 1988).
Change to binary coded decimal (BCD) variables. The BCD scheme is typically slower and takes up more storage space, but it prevents many rounding errors. This is particularly valuable if the variables you’re using represent dollars and cents or other quantities that must balance precisely.
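In a language without direct BCD support, a decimal type serves the same purpose. A minimal sketch in Python (the advice itself is language-neutral; Python's `decimal` module stands in for BCD here):

```python
from decimal import Decimal

# Binary floating point cannot represent 0.10 exactly, so sums drift:
float_total = 0.10 + 0.10 + 0.10   # 0.30000000000000004, not 0.30

# Decimal arithmetic, like BCD, keeps dollars-and-cents amounts exact,
# at some cost in speed and storage:
exact_total = Decimal("0.10") + Decimal("0.10") + Decimal("0.10")
```

The exactness is what matters for quantities that must balance precisely, such as ledger entries that auditors will sum independently.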
International markets are becoming increasingly important, and it’s easier to translate strings that are grouped in a string resource file than it is to translate them in situ throughout a program.
Use the default clause to detect errors. If the default clause in a case statement isn’t being used for other processing and isn’t supposed to occur, put a diagnostic message in it.
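A sketch of the idea (the function and categories are invented for illustration): every legitimate case is handled explicitly, and the default path exists only to flag a value that should be impossible.

```python
def commission_rate(category):
    if category == "standard":
        return 0.05
    elif category == "premium":
        return 0.10
    else:
        # The "default clause": no other category should ever reach here,
        # so treat arriving here as an internal error rather than
        # silently falling through with a wrong or missing value.
        raise ValueError(f"internal error: unexpected category {category!r}")
```

The diagnostic turns a latent data error into a loud, immediate one, which is far easier to debug.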
If it seems inefficient to use two loops where one would suffice, write the code as two loops, comment that they could be combined for efficiency, and then wait until benchmarks show that the section of the program poses a performance problem before changing the two loops into one.
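A sketch of what that looks like in practice (a hypothetical example): two clear single-purpose loops, with a comment recording the possible optimization for later.

```python
def total_and_max(values):
    # Two passes over the data, one job per loop.
    # NOTE: these loops could be combined into a single pass for speed;
    # do that only if profiling shows this section is a bottleneck.
    total = 0
    for v in values:
        total += v

    largest = values[0]
    for v in values:
        if v > largest:
            largest = v

    return total, largest
```

Until a benchmark implicates this code, the two-loop version is easier to read, verify, and modify.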
Efficient programmers do the work of mental simulations and hand calculations because they know that such measures help them find errors. Inefficient programmers tend to experiment randomly until they find a combination that seems to work. If a loop isn’t working the way it’s supposed to, the inefficient programmer changes the < sign to a <= sign. If that fails, the inefficient programmer changes the loop index by adding or subtracting 1. Eventually the programmer using this approach might stumble onto the right combination or simply replace the original error with a more subtle one. Even if…
Don’t use recursion for factorials or Fibonacci numbers. One problem with computer-science textbooks is that they present silly examples of recursion. The typical examples are computing a factorial or computing a Fibonacci sequence. Recursion is a powerful tool, and it’s really dumb to use it in either of those cases. If a programmer who worked for me used recursion to compute a factorial, I’d hire someone else.
First, computer-science textbooks aren’t doing the world any favors with their examples of recursion. Second, and more important, recursion is a much more powerful tool than its confusing use in computing factorials or Fibonacci numbers would suggest. Third, and most important, you should consider alternatives to recursion before using it. You can do anything with stacks and iteration that you can do with recursion. Sometimes one approach works better; sometimes the other does. Consider both before you choose either one.
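For factorials and Fibonacci numbers in particular, plain iteration is both simpler and cheaper than recursion; a sketch:

```python
def factorial(n):
    # Straight iteration: no call-stack growth, no recursion needed.
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

def fibonacci(n):
    # Iteration again; the textbook recursive version does exponentially
    # redundant work recomputing the same terms over and over.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```

Each loop runs in linear time with constant stack depth, which is exactly why these problems are poor advertisements for recursion.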
Tables provide an alternative to complicated logic and inheritance structures. If you find that you’re confused by a program’s logic or inheritance tree, ask yourself whether you could simplify by using a lookup table.
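A small illustration of the table-driven idea (the domain and rates are invented): logic that might otherwise be an if/elif chain, or a subclass per case, collapses into a single lookup.

```python
# One table replaces a chain of conditionals or a class per shipping method.
SHIPPING_RATE_PER_KG = {
    "ground":    0.10,
    "air":       0.25,
    "overnight": 0.60,
}

def shipping_cost(method, weight_kg):
    # Adding a new method is a one-line table change, not new logic.
    return SHIPPING_RATE_PER_KG[method] * weight_kg
```

The table also makes the supported cases visible at a glance, which the equivalent conditional logic would scatter across the function body.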
I ain’t not no undummy. — Homer Simpson
Not a few people don’t have not any trouble understanding a nonshort string of nonpositives — that is, most people have trouble understanding a lot of negatives.
Programmers sometimes favor language structures that increase convenience, but programming seems to have advanced largely by restricting what we are allowed to do with our programming languages.
“The competent programmer is fully aware of the strictly limited size of his own skull; therefore, he approaches the programming task in full humility” (Dijkstra 1972).
Compared to the traditional code-test-debug cycle, an enlightened software-quality program saves money. It redistributes resources away from debugging and refactoring into upstream quality-assurance activities. Upstream activities have more leverage on product quality than downstream activities, so the time you invest upstream saves more time downstream. The net effect is fewer defects, shorter development time, and lower costs.
IEEE Std 730-2002, IEEE Standard for Software Quality Assurance Plans.
IEEE Std 1061-1998, IEEE Standard for a Software Quality Metrics Methodology.
IEEE Std 1028-1997, Standard for Software Reviews.
IEEE Std 1008-1987 (R1993), Standard for Software Unit Testing.
IEEE Std 829-1998, Standard for Software Test Documentation.
For a review of application code written in a high-level language, reviewers can prepare at about 500 lines of code per hour. For a review of system code written in a high-level language, reviewers can prepare at only about 125 lines of code per hour (Humphrey 1989).
Research on perspective-based reviews has not been comprehensive, but it suggests that perspective-based reviews might uncover more errors than general reviews.
Other organizations have found that for system code, an inspection rate of 90 lines of code per hour is optimal. For applications code, the inspection rate can be as rapid as 500 lines of code per hour (Humphrey 1989). An average of about 150–200 nonblank, noncomment source statements per hour is a good place to start (Wiegers 2002).
IEEE Std 1028-1997, Standard for Software Reviews.
IEEE Std 730-2002, Standard for Software Quality Assurance Plans.
Testing is an important part of any software-quality program, and in many cases it’s the only part. This is unfortunate, because collaborative development practices in their various forms have been shown to find a higher percentage of errors than testing does, and they cost less than half as much per error found as testing does (Card 1987, Russell 1991, Kaplan 1995).
Testing by itself does not improve software quality. Test results are an indicator of quality, but in and of themselves they don’t improve it. Trying to improve software quality by increasing the amount of testing is like trying to lose weight by weighing yourself more often. What you eat before you step onto the scale determines how much you will weigh, and the software-development techniques you use determine how many errors testing will find. If you want to lose weight, don’t buy a new scale; change your diet. If you want to improve your software, don’t just test more; develop better.
As Figure 22-1 shows, depending on the project’s size and complexity, developer testing should probably take 8 to 25 percent of the total project time.
IEEE Std 1008-1987 (R1993), Standard for Software Unit Testing.
IEEE Std 829-1998, Standard for Software Test Documentation.
IEEE Std 730-2002, Standard for Software Quality Assurance Plans.
Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it. — Brian W. Kernighan
In more primitive environments, a misplaced comment or quotation mark can trip up the compiler. To find the extra comment or quotation mark, insert the following sequence into your code in C, C++, or Java:

/*"/**/

This code phrase will terminate either a comment or a string, which is useful in narrowing the space in which the unterminated comment or string is hiding.
Figure 24-3. One strategy for improving production code is to refactor poorly written legacy code as you touch it, so as to move it to the other side of the “interface to the messy real world.”
Handle change requests in groups. It’s tempting to implement easy changes as ideas arise. The problem with handling changes in this way is that good changes can get lost. If you think of a simple change 25 percent of the way through the project and you’re on schedule, you’ll make the change. If you think of another simple change 50 percent of the way through the project and you’re already behind, you won’t. When you start to run out of time at the end of the project, it won’t matter that the second change is 10 times as good as the first — you won’t be in a position to make any nonessential…
Reestimate periodically. Factors on a software project change after the initial estimate, so plan to update your estimates periodically. As Figure 28-2 illustrates, the accuracy of your estimates should improve as you move toward completing the project. From time to time, compare your actual results to your estimated results, and use that evaluation to refine estimates for the remainder of the project.
Humphrey, Watts S. A Discipline for Software Engineering. Reading, MA: Addison-Wesley, 1995. Chapter 5 of this book describes Humphrey’s Probe method, which is a technique for estimating work at the individual developer level.
Standardize the measurements across your projects, and then refine them and add to them as your understanding of what you want to measure improves (Pietrasanta 1990).
Jones, Capers. Applied Software Measurement: Assuring Productivity and Quality, 2d ed. New York, NY: McGraw-Hill, 1997. Jones is a leader in software measurement, and his book is an accumulation of knowledge in this area. It provides the definitive theory and practice of current measurement techniques and describes problems with traditional measurements. It lays out a full program for collecting “function-point metrics.” Jones has collected and analyzed a huge amount of quality and productivity data, and this book distills the results in one place — including a fascinating chapter on averages…
NASA Software Engineering Laboratory. Software Measurement Guidebook, June 1995, NASA-GB-001-94. This guidebook of about 100 pages is probably the best source of practical information on how to set up and run a measurement program. It can be downloaded from NASA’s website.
Good programmers tend to cluster, as do bad programmers, an observation that has been confirmed by a study of 166 professional programmers from 18 organizations (DeMarco and Lister 1999).
Weinberg, Gerald M. The Psychology of Computer Programming, 2d ed. New York, NY: Van Nostrand Reinhold, 1998. This is the first book to explicitly identify programmers as human beings, and it’s still the best on programming as a human activity. It’s crammed with acute observations about the human nature of programmers and its implications.
McConnell, Steve. Professional Software Development. Boston, MA: Addison-Wesley, 2004. Chapter 7, “Orphans Preferred,” summarizes studies on programmer demographics, including personality types, educational backgrounds, and job prospects.
Carnegie, Dale. How to Win Friends and Influence People, Revised Edition. New York, NY: Pocket Books, 1981.
In a hierarchy, every employee tends to rise to his level of incompetence. — The Peter Principle
Gilb, Tom. Principles of Software Engineering Management. Wokingham, England: Addison-Wesley, 1988.
IEEE Std 1045-1992, Standard for Software Productivity Metrics.
Sandwich Integration
Risk-Oriented Integration
Feature-Oriented Integration
Further Reading Much of this discussion is adapted from Chapter 18 of Rapid Development (McConnell 1996). If you’ve read that discussion, you might skip ahead to the "Continuous Integration" section.
It improves morale. Seeing a product work provides an incredible boost to morale. It almost doesn’t matter what the product does. Developers can be excited just to see it display a rectangle! With daily builds, a bit more of the product works every day, and that keeps morale high.
One side effect of frequent integration is that it surfaces work that can otherwise accumulate unseen until it appears unexpectedly at the end of the project. That accumulation of unsurfaced work can turn into an end-of-project tar pit that takes weeks or months to struggle out of. Teams that haven’t used the daily build process sometimes feel that daily builds slow their progress to a snail’s crawl. What’s really happening is that daily builds amortize work more steadily throughout the project, and the project team is just getting a more accurate picture of how fast it’s been working all…
First, if you release a build in the morning, testers can test with a fresh build that day. If you generally release builds in the afternoon, testers feel compelled to launch their automated tests before they leave for the day. When the build is delayed, which it often is, the testers have to stay late to launch their tests. Because it’s not their fault that they have to stay late, the build process becomes demoralizing.
Far from finding them a nuisance, the Windows 2000 team attributed much of its success on that huge project to its daily builds. The larger the project, the more important incremental integration becomes.
Daily builds give the project team rendezvous points that are frequent enough. As long as the team syncs up every day, it doesn’t need to rendezvous continuously.
Some groups have found interesting alternatives to dependency-checking tools like make. For example, the Microsoft Word group found that simply rebuilding all source files was faster than performing extensive dependency checking with make as long as the source files themselves were optimized (header file contents and so on). With this approach, the average developer’s machine on the Word project could rebuild the entire Word executable — several million lines of code — in about 13 minutes.