So, now we’re faced with a generation of OO developers who use inheritance for one of two reasons: they don’t like typing, or they like types.
Prefer Interfaces to Express Polymorphism. Interfaces and protocols give us polymorphism without inheritance.
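The idea can be sketched in a few lines of Python using typing.Protocol; the Shape protocol and the concrete classes below are invented for illustration, not taken from the book:

```python
# Sketch: polymorphism via a protocol, with no shared base class.
from typing import Protocol


class Shape(Protocol):
    def area(self) -> float: ...


class Circle:                      # does not inherit from Shape
    def __init__(self, radius: float) -> None:
        self.radius = radius

    def area(self) -> float:
        return 3.14159 * self.radius ** 2


class Square:                      # also unrelated to Circle
    def __init__(self, side: float) -> None:
        self.side = side

    def area(self) -> float:
        return self.side * self.side


def total_area(shapes: list[Shape]) -> float:
    # Any object with a matching area() method qualifies; no base class needed.
    return sum(s.area() for s in shapes)


print(total_area([Circle(1.0), Square(2.0)]))
```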
Basically, look for anything that you know will have to change that you can express outside your main body of code, and slap it into some configuration bucket.
We prefer that you don’t do that. Instead, wrap the configuration information behind a (thin) API. This decouples your code from the details of the representation of configuration.
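One way to picture a “thin API” over configuration is a sketch like this; the class name, keys, and JSON file are assumptions, not the book’s example:

```python
# Sketch of a thin configuration API. Callers ask for settings by name and
# never see how or where the values are stored (file, database, service...).
import json


class Config:
    def __init__(self, path: str = "config.json") -> None:
        with open(path) as f:
            self._values = json.load(f)   # representation detail, hidden here

    def get(self, key: str, default=None):
        return self._values.get(key, default)


# Client code depends only on the API, not on the fact that JSON is used:
# cfg = Config()
# retry_limit = cfg.get("retry_limit", 3)
```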
While static configuration is common, we currently favor a different approach. We still want configuration data kept external to the application, but rather than in a flat file or database, we’d like to see it stored behind a service API.
- Multiple applications can share configuration information, with authentication and access control limiting what each can see
- Configuration changes can be made globally
- The configuration data can be maintained vi...
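A minimal sketch of configuration served from behind an API, assuming a hypothetical HTTP endpoint that returns JSON (the URL and keys are made up for illustration):

```python
# Sketch: fetch configuration from a (hypothetical) service rather than a
# local file, so values can change without redeploying the application.
import json
import urllib.request


def load_config(service_url: str = "http://config.example.internal/myapp") -> dict:
    # The service URL and response shape are assumptions, not a real API.
    with urllib.request.urlopen(service_url) as response:
        return json.load(response)


# The application re-reads configuration periodically instead of caching it
# forever, so a change made in the service takes effect without a restart.
# config = load_config()
# feature_enabled = config.get("new_checkout_flow", False)
```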
That last point, that configuration should be dynamic, is critical as we move toward highly available applications. The idea that we should have to stop and restart an application to change a single para...
Don’t push decisions to configuration out of laziness. If there’s genuine debate about whether a feature should work this way or that, or if it should be the users’ choice, try it out one way and get feedback on whether the decision was a good one.
Temporal coupling happens when your code imposes a sequence on things that is not required to solve the problem at hand.
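As an invented illustration of the point: the first class below only works if callers remember to call connect() before query(), an ordering the problem itself doesn’t require; the second removes that hidden sequence from the interface:

```python
# Temporally coupled: works only if callers call connect() before query().
class Database:
    def __init__(self) -> None:
        self._conn = None

    def connect(self) -> None:
        self._conn = object()          # stand-in for a real connection

    def query(self, sql: str) -> str:
        return f"results of {sql!r} via {self._conn}"


# Decoupled: the object establishes its own prerequisites, so no hidden
# calling order is imposed on the caller.
class BetterDatabase:
    def __init__(self) -> None:
        self._conn = object()          # ready as soon as it exists

    def query(self, sql: str) -> str:
        return f"results of {sql!r} via {self._conn}"


print(BetterDatabase().query("select 1"))
```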
Why is writing concurrent and parallel code so difficult? One reason is that we learned to program using sequential systems, and our languages have features that are relatively safe when used sequentially but become a liability once two things can happen at the same time.
There are better ways to construct concurrent applications. One of these is using the actor model, where independent processes, which share no data, communicate over channels using defined, simple semantics. We talk about both the theory and practice of this approach in Topic 35, Actors and Processes.
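A bare-bones sketch of the actor idea in Python, using a thread as the process and a queue as the channel; a real actor system such as Erlang’s offers far more (supervision, distribution, and so on):

```python
# Sketch: an "actor" as a thread that owns its own state and is reached only
# through a queue (the channel). No data is shared; only messages are passed.
import queue
import threading


class CounterActor:
    def __init__(self) -> None:
        self.mailbox: queue.Queue = queue.Queue()
        self._count = 0                      # private state, never shared
        threading.Thread(target=self._run, daemon=True).start()

    def _run(self) -> None:
        while True:
            msg, reply_to = self.mailbox.get()
            if msg == "increment":
                self._count += 1
            elif msg == "get":
                reply_to.put(self._count)    # reply over another channel
            elif msg == "stop":
                break


actor = CounterActor()
actor.mailbox.put(("increment", None))
actor.mailbox.put(("increment", None))

reply: queue.Queue = queue.Queue()
actor.mailbox.put(("get", reply))
print(reply.get())                            # -> 2
actor.mailbox.put(("stop", None))
```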
Finally, we’ll look at Topic 36, Blackboards. These are systems which act like a combination of an object store and a smart publish/subscribe broker. In their original form, they never really took off.
When people first sit down to design an architecture or write a program, things tend to be linear.
Activity diagrams show the potential areas of concurrency, but have nothing to say about whether these areas are worth exploiting. For example, in the piña colada example, a bartender would need five hands to be able to run all the potential initial tasks at once. And that’s where the design part comes in. When we look at the activities, we realize that number 8, liquify, will take a minute. During that time, our bartender can get the glasses and umbrellas (activities 10 and 11) and probably still have time to serve another customer.
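One way to express that schedule in code is a thread pool; the task names follow the book’s piña colada example loosely, and the durations are invented:

```python
# Sketch: start the slow "liquify" step, then do other work while it runs.
import time
from concurrent.futures import ThreadPoolExecutor


def liquify() -> str:
    time.sleep(2)                 # pretend blending takes a while
    return "blended mix"


def get_glasses_and_umbrellas() -> str:
    time.sleep(0.5)
    return "glasses ready"


with ThreadPoolExecutor() as pool:
    blending = pool.submit(liquify)           # activity 8 runs in the background
    print(get_glasses_and_umbrellas())        # activities 10 and 11 happen meanwhile
    print(blending.result())                  # wait only when we actually need the mix
```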
Random Failures Are Often Concurrency Issues
You could also argue that functional languages, with their tendency to make all data immutable, make concurrency simpler. However, they still face the same challenges, because at some point they are forced to step into the real, mutable world.
The Erlang language and runtime are great examples of an actor implementation (even though the inventors of Erlang hadn’t read the original Actor’s paper).
And Erlang also offers hot-code loading: you can replace code in a running system without stopping that system.
One of the first blackboard systems was David Gelernter’s Linda. It stored facts as typed tuples.
Later came distributed blackboard-like systems such as JavaSpaces and T Spaces. With these systems, you can store active Java objects—not just data—on the blackboard, and retrieve them by partial matching of fields.
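A toy sketch of the tuple-space flavor of blackboard, with None as a wildcard for partial matching; this is not JavaSpaces or T Spaces, just the shape of the idea:

```python
# Sketch: a tiny in-memory "blackboard" of tuples with partial matching,
# loosely in the spirit of Linda-style tuple spaces.
class Blackboard:
    def __init__(self) -> None:
        self._facts: list[tuple] = []

    def write(self, fact: tuple) -> None:
        self._facts.append(fact)

    def take(self, pattern: tuple):
        # Return and remove the first fact matching the pattern (None = wildcard).
        for fact in self._facts:
            if len(fact) == len(pattern) and all(
                p is None or p == f for p, f in zip(pattern, fact)
            ):
                self._facts.remove(fact)
                return fact
        return None


board = Blackboard()
board.write(("loan", "alice", 250_000))
board.write(("loan", "bob", 125_000))

# Any consumer can ask for "a loan fact about bob" without knowing who wrote it.
print(board.take(("loan", "bob", None)))      # -> ('loan', 'bob', 125000)
```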
Messaging Systems Can Be Like Blackboards
The Gift of Fear: And Other Survival Signals That Protect Us from Violence [de 98]
Many of us would prefer to put off making the initial commitment of starting.
Sometimes code just flies from your brain into the editor: ideas become bits with seemingly no effort. Other days, coding feels like walking uphill in mud. Taking each step requires tremendous effort, and every three steps you slide back two.
bigger picture as well.
Is there something you know you should do, but have put off because it feels a little scary, or difficult? Apply the techniques in this section. Time box it to an hour, maybe two, and promise yourself that when the bell rings you’ll delete what you did. What did you learn?
Programming by Coincidence
Fred doesn’t know why the code is failing because he didn’t know why it worked in the first place.
For code you write that others will call, the basic principles of good modularization and of hiding implementation behind small, well-documented interfaces can all help.
For example, Russian leaders always alternate between being bald and hairy: a bald (or obviously balding) state leader of Russia has succeeded a non-bald (“hairy”) one, and vice versa, for nearly 200 years.[51]
You can have “accidents of context” as well. Suppose you are writing a utility module. Just because you are currently coding for a GUI environment, does the module have to rely on a GUI being present? Are you relying on English-speaking users? Literate users? What else are you relying on that isn’t guaranteed?
Finding an answer that happens to fit is not the same as the right answer.
Coincidences can mislead at all levels—from generating requirements through to testing.
We want to spend less time churning out code, catch and fix errors as early in the development cycle as possible, and create fewer errors to begin with. It helps if we can program deliberately:
Always be aware of what you are doing.
Proceed from a plan,
Rely only on reliable things. Don’t depend on assumptions. If you can’t tell if something is reliable, assume the worst.
Also consider just what you’re doing in the code itself. A simple O(n²) loop may well perform better than a complex O(n lg n) one for smaller values of n, particularly if the O(n lg n) algorithm has an expensive inner loop.
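It’s easy to check this kind of claim directly; here is a rough timing sketch comparing a simple O(n²) insertion sort with a more complex O(n lg n) merge sort on a small input (the exact crossover point depends on your machine and data):

```python
# Sketch: measure a simple O(n^2) sort against a more complex O(n lg n) one
# on a small input.
import random
import timeit


def insertion_sort(xs):                 # simple, O(n^2)
    xs = list(xs)
    for i in range(1, len(xs)):
        key = xs[i]
        j = i - 1
        while j >= 0 and xs[j] > key:
            xs[j + 1] = xs[j]
            j -= 1
        xs[j + 1] = key
    return xs


def merge_sort(xs):                     # more complex, O(n lg n)
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i])
            i += 1
        else:
            out.append(right[j])
            j += 1
    return out + left[i:] + right[j:]


data = [random.random() for _ in range(20)]   # "smaller values of n"
print("insertion:", timeit.timeit(lambda: insertion_sort(data), number=10_000))
print("merge:    ", timeit.timeit(lambda: merge_sort(data), number=10_000))
```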
For those who like more detail than Sedgewick provides, read Donald Knuth’s definitive Art of Computer Programming books, which analyze a wide range of algorithms.
Refactoring [Fow19] is defined by Martin Fowler as a “disciplined technique for restructuring an existing body of code, altering its internal structure without changing its external behavior.”
The critical parts of this definition are that:
- The activity is disciplined, not a free-for-all
- External behavior does not change; this is not the time to add features
Refactoring is not intended to be a special, high-ceremony, once-in-a-while activity, like plowing under the whole garden in order to replant. Instead, refactoring is a day-to-day activity, taking low-risk small steps, more like weeding and raking. Instead of a free-for-all, wholesale rewrite of the codebase, it’s a targeted, precision approach to help keep the code easy to change. In order to guarantee that the ext...
Time pressure is often used as an excuse for not refactoring. But this excuse just doesn’t hold up: fail to refactor now, and there’ll be a far greater time investment to fix the problem down the road—when there are more dependencies to reckon with. Will there be more time available then? Nope.
Refactoring, as with most things, is easier to do while the issues are small, as an ongoing activity while coding. You shouldn’t need “a week to refactor” a piece of code—that’s a full-on rewrite.
Refactoring: Improving the Design of Existing Code [Fow19]
- Don’t try to refactor and add functionality at the same time.
- Make sure you have good tests before you begin refactoring. Run the tests as often as possible. That way you will know quickly if your changes have broken anything.
- Take short, deliberate steps: move a field from one class to another, split a method, rename a variable. Refactoring often involves making many localized changes that result in a larger-scale change. If you keep your steps small, and test after each step, you will avoid prolonged debugging.
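A before-and-after sketch of one such small step, splitting a method, using invented code; the assert stands in for the test suite that guards the behavior:

```python
# Before: one function doing two jobs (totaling and formatting).
def format_receipt(items):
    total = sum(price for _, price in items)
    lines = [f"{name}: {price:.2f}" for name, price in items]
    lines.append(f"TOTAL: {total:.2f}")
    return "\n".join(lines)


# After one small refactoring step: the total calculation is extracted into
# its own function. External behavior is unchanged.
def order_total(items):
    return sum(price for _, price in items)


def format_receipt_refactored(items):
    lines = [f"{name}: {price:.2f}" for name, price in items]
    lines.append(f"TOTAL: {order_total(items):.2f}")
    return "\n".join(lines)


items = [("coffee", 3.50), ("bagel", 2.25)]
assert format_receipt(items) == format_receipt_refactored(items)
```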
Back in the first edition we noted that, “this technology has yet to appear outside of the Smalltalk world, but this is likely to change….” And indeed, it did, as automatic refactoring is available in many IDEs and for most mainstream languages. These IDEs can rename variables and methods, split a long routine into smaller ones, automatically propagating the required changes, drag and drop to assist you in moving code, and so on.
The basic cycle of TDD is:
1. Decide on a small piece of functionality you want to add.
2. Write a test that will pass once that functionality is implemented.
3. Run all tests. Verify that the only failure is the one you just wrote.
4. Write the smallest amount of code needed to get the test to pass, and verify that the tests now run cleanly.
5. Refactor your code: see if there is a way to improve on what you just wrote (the test or the function). Make sure the tests still pass when you’re done.
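One turn of that cycle might look like the following, for an invented slugify function using Python’s unittest; the test is written first, then the smallest code that passes it:

```python
# Sketch of one turn of the TDD cycle, for an invented "slugify" function.
import unittest


# Step 2: a test written before the code it exercises existed.
class TestSlugify(unittest.TestCase):
    def test_spaces_become_hyphens(self):
        self.assertEqual(slugify("Hello World"), "hello-world")


# Step 4: the smallest implementation that makes the test pass.
def slugify(title: str) -> str:
    return title.strip().lower().replace(" ", "-")


# Step 5 would be refactoring (if needed), rerunning the tests afterwards.
if __name__ == "__main__":
    unittest.main()
```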
We strongly believe that the only way to build software is incrementally. Build small pieces of end-to-end functionality, learning about the problem as you go. Apply this learning as you continue to flesh out the code, involve the customer at each step, and have them guide the process.