
Masterminds of Programming by Federico Biancuzzi

G. Branden's review
Aug 22, 2009

liked it
Read in August, 2009

Interesting and fairly engaging, but not essential.

This was an impulse buy which threw an interrupt into my current-reading stack.

I'm glad I got it at a steep discount, because much of this looks like material one could just as profitably read on a webpage, and O'Reilly's listing it at $39.99.

As far as the content goes, I can say that it consistently held my interest. There are some entertaining snipes between various language designers about each other's work, but also a diplomatic (and, I would think, sincere) respect for one another as language designers faced with tough choices.

I'll have to go back and check which of the interviewees said this, but I find seductive the statement that we don't really have a science of programming language design yet. In fact, we don't even have a science of program debugging yet. As Knuth (not interviewed here, and only barely mentioned) famously pointed out, one can prove a program correct and yet it will still have bugs. This is often (or necessarily? I don't know) because we humans can misstate our constraints, or even forget to state them at all.
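A toy illustration of my own (not from the book), in Python: a function can satisfy every constraint we wrote down and still be wrong, because the constraint we actually cared about never got written down.

    # We "specify" a search: return an index of target in xs, or -1 if absent.
    def index_of(xs, target):
        for i in range(len(xs) - 1, -1, -1):   # note: scans from the end
            if xs[i] == target:
                return i
        return -1

    xs = [3, 1, 4, 1, 5]
    i = index_of(xs, 1)
    assert i == -1 or xs[i] == 1   # the stated postcondition holds (i == 3)

    # Yet the intent was the *first* occurrence (index 1). We "proved" the
    # program correct against the spec we stated, not the one we meant.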

Are we in the alchemical stage of language design, searching for a philosopher's stone? Who is to be our Boyle, our Lavoisier, our Berzelius?

Or can we not answer those questions until we have first--or in conjunction--developed a solid science of mental processes? Can I goad a certain machine learning expert of my acquaintance to comment? (UPDATE: He did; see comment below.)

The book would have been more valuable to me had it contained a concluding chapter integrating much of the material presented and offering some analysis. That's probably outside its intended scope, and it would likely have angered the parts of the audience that didn't want to see the authors injecting themselves into the subject matter to that degree (or, had authorship of such material been delegated to another language luminary, resented the crowning of that figure as first among equals).

Though I can understand the decision to demur, I'd still like to see this material. I can make out some tensions in the philosophy of language extension; that is, does one extend the language "directly", by adding new lexemes, or by adding new functions (usually as methods operating on an object)? I've worked with only a small number of the languages presented, but I get the impression that Forth, and some uses of Lisp/Scheme, follow the former approach, whereas the C family, Python, and Perl follow the latter.

Warnock and Geschke, authors of PostScript, acknowledge that human interpretation of the language demands that the state of the stack be kept in mind for program statements to be comprehensible. This also seems to be the case with Forth and Lisp/Scheme; I find myself wondering if this trait is isomorphic to the language extension question. It doesn't seem obvious to me that it should be so--but perhaps there are constraints placed on the language designer such that one typically dictates the other? Or maybe this is just a coincidence. Again, real analysis is needed.
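To make the contrast concrete (a toy sketch of my own, in Python): extension by methods leaves the syntax alone and lets the reader lean on names and argument lists, whereas Forth grows its lexicon directly with a new word, e.g. : SQUARE DUP * ; , and obliges the reader to replay the stack between words.

    # Extension in the "new functions/methods" style: the language's
    # syntax is untouched; the new vocabulary hangs off an object.
    class Vector:
        def __init__(self, x, y):
            self.x, self.y = x, y

        def scaled(self, k):
            # Reads left to right; no hidden stack state to track.
            return Vector(self.x * k, self.y * k)

        def dot(self, other):
            return self.x * other.x + self.y * other.y

    v = Vector(3, 4)
    print(v.scaled(2).dot(v))   # 50

    # In the Forth/PostScript style, SCALED and DOT would be new words,
    # and comprehending a phrase like "2 SCALED DOT" means replaying the
    # stack in your head -- exactly the comprehension burden noted above.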

Another area where consensus has yet to be reached involves the domain of verifiability, provability, and automated testing. Unit testing is popular in industry and among some of the interviewees, but I don't think anyone can argue that it isn't tedious. This matter, perhaps not surprisingly, dominates the interview with Bertrand Meyer more than anyone else's, and he holds forth with great confidence about his approaches. I find it tricky to reconcile that confidence with (what I perceive to be) Eiffel's ultra-minority status as a general-purpose programming language. Still, Meyer comes off as impressive and his book on OO programming is an acknowledged classic, so I think his work demands more of my attention.
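Meyer's approach, for the curious, is design by contract: routines carry explicit preconditions and postconditions that guard every call, rather than living in hand-enumerated test cases. A rough flavor of it can be faked in Python with assertions (a sketch of the style, not Eiffel's actual mechanism):

    def isqrt(n):
        # require: caller supplies a non-negative integer
        assert isinstance(n, int) and n >= 0

        r = 0
        while (r + 1) * (r + 1) <= n:
            r += 1

        # ensure: r is the integer square root of n
        assert r * r <= n < (r + 1) * (r + 1)
        return r

    print(isqrt(10))   # 3

Unlike a unit test, the contract travels with the routine and is checked on every call, which is, as I understand it, much of his case against leaning on tests alone.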

Meyer also noted the SPARK language, a stripped-down form of Ada (and thus related to Pascal) which is provable (evidently in large part because it lacks pointers, and thus aliasing of memory addresses). Meyer dismisses SPARK as too limited to be broadly useful. However, I think he is hasty. I have written many small tools in my career, usually in a scripting language of some sort, and I wouldn't mind in the least if someone handed me a limited language in which I could write my little tools and prove them correct. I think I'd get a fair amount of satisfaction from that, particularly if it reduced the time I spent instrumenting them and chasing down bug reports about them. There is a large space of mundane little problems in the world, and I think it's a win, not a loss, if we can implement our solutions to those small, mundane problems in robust little boxes. It has to beat shoehorning them into a complex system with more places where things can go wrong.
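The aliasing point is worth unpacking with a toy example of my own (Python rather than Ada, but the hazard is the same): once two names can refer to the same storage, a fact established about one can be silently destroyed through the other, and a prover has to account for every such channel.

    def pad_to(xs, n):
        # Append zeros until xs has length n (mutates xs in place).
        while len(xs) < n:
            xs.append(0)

    a = [1, 2, 3]
    b = a                # b aliases a: both names refer to one list

    print(len(a))        # 3 -- a fact we might "prove" about a

    pad_to(b, 5)         # a write through the alias...

    print(len(a))        # 5 -- ...and our fact about a is gone

Banning pointers, as SPARK does, closes off that channel wholesale.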

Most of the classic Unix shell utilities are written in non-OO C. Wouldn't it be awesome if we could re-implement them in a provable language? (Admittedly, many or most of them use pointers extensively for reasons of speed, efficiency, or elegance, and the C standard I/O library uses pointers on the file streams you're manipulating even if you don't. Oh, well--one can dream.)

Finally, I have to admit that some of the material in the book goes over my head. For example, at one point Meyer notes that dynamic binding and function pointers are incompatible language features. I have to assume this is because dynamic binding is implemented by keeping function pointers internal to the implementation, where they are rewritten as necessary at runtime once an object's type is determined. If the programmer could muck with those pointers as well, there would be no guarantee that a method call would be resolvable (or sensible) at runtime. But I'm having to guess at this, so I could be wrong. Perhaps folks with a stronger background than mine can get more out of this book.
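Here's my guess made concrete (a sketch of my own reading, not Meyer's argument): dynamic binding amounts to a hidden table of function pointers that the runtime consults per class; give the programmer write access to that table and no promise about a method call survives. Python, as it happens, exposes exactly this:

    class Shape:
        def area(self):
            raise NotImplementedError

    class Square(Shape):
        def __init__(self, s):
            self.s = s
        def area(self):
            return self.s * self.s

    sq = Square(3)
    print(sq.area())    # 9: dispatch finds 'area' in Square's method table

    # The "table of function pointers" is the class dict, and it's writable:
    Square.area = lambda self: -1

    print(sq.area())    # -1: the same call site now runs different code,
                        # so nothing about the call is guaranteed statically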


Reading Progress

05/27/2009 -- page 159 (33.13%)

Comments (showing 1-2 of 2)


message 1: by Terran

"Can I goad a certain machine learning expert of my acquaintance to comment?"

Yeah, the key words in that question are "machine learning expert", which is notably not at all similar to "programming languages expert". :-P That said, I've never been one to shy away from a PL holy war. ;-)

I guess there are a couple of possible questions here. One is whether there can be an "ultimate" programming language -- the single language that will meet all of our programming needs. The second is whether there can be a science of designing programming languages such that we can state a need and then design a language to meet that need.

My feeling is that the answer to the first question is probably no and the second is possibly.

W.r.t. Q1, I feel no because the space of possible computational problems is literally infinite, and the space of "interesting" computational problems is, while not precisely infinite, at least very large and probably unbounded. People are always free to come up with new problems to solve. PLs allow us to express our thoughts concisely and clearly and unambiguously to the computer, but each problem domain requires a different set of constructs in order to say things concisely and so on. (One can make a heuristic argument based on data compressibility and/or the no free lunch theorem along these lines.) So long as humans remain creative and keep using computers to aid their creativity, I think there will be a need for new languages (broadly construed) in which to express their desires.

W.r.t. the second interpretation, I think that it may be possible to come up with a real science, or at least engineering, of PL design. Certainly, there's a rich community of PL people who are trying to do essentially that right now. In my eyes, their failure is that they're taking an almost exclusively logical approach to the question. Which is appropriate for thinking about formal systems, like Turing machines and universal computability and so on. But it's not really so well suited to designing things for human consumption.

In my view, the human factors and psychological components of PL design are at least as important as the logical issues -- possibly more so. Languages live or die, essentially, not on whether they're mathematically elegant, but on whether people like them. Whether they're intuitive for people to program with. If you try to design languages from a purely mathematical perspective, you get things like functional programming. PL geeks are all over functional programming, claiming that it's the One True Way. But the marketplace has voted, showing that functional programming is essentially an utter failure. Object orientation is dramatically more successful, largely, I think, because it's somewhat closer to human cognitive processes. But that came about by slow evolution, rather than deliberate design rooted in psychological considerations.

And yet all of these are still addressing only a very small percent of humanity. The truth is that we're talking about different ways of carving up a space of technical elite -- people who can already think like a computer. The real issue is that humans simply don't think like computers. Humans are not psychologically or evolutionarily suited to express themselves with the required precision and unambiguity. Programming, as we understand it today, is far from a natural activity -- it takes many people months to even understand the basics, and I hazard to guess that most humans couldn't ever get really good at it if they tried.

At its heart, the goal of a PL is just to allow humans to communicate with computers. But it's accessible only to a very few because the "psychology" of computers is so far from the psychology of humans. My hope and vision is that we'll eventually arrive at a state where we can align the two well enough to allow most people to talk to computers, rather than a very few. But I suspect that this will require re-architecting computers from the ground up, and is probably AI-complete.


Joey "Warnock and Geschke, authors of PostScript, acknowledge that human interpretation of the language demands that the state of the stack be kept in mind for program statements to be comprehensible. This also seems to be the case with Forth and Lisp/Scheme; I find myself wondering if this trait is isomorphic to the language extension question."

An interesting question...

All languages are necessarily (I think) implemented with certain built-in primitives not implemented in the language itself. The number varies from only a few (Forth and Lisp) to thousands (Perl).

Those with only a few such built-in lexemes tend to also have the purest ones, so that, aside from implementation, there's no way to tell whether a function such as if-then-else is a lexeme or something written in the language. This makes sense, because a language with few lexemes is encouraging more complex general-purpose words to be built out of them, and so there's little value in making the lexemes inherently different.
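To make that concrete (a toy sketch in Python): once evaluation can be delayed, even if-then-else can be an ordinary function, indistinguishable from a primitive except by its implementation.

    def my_if(condition, then_thunk, else_thunk):
        # Choose a branch without using Python's own 'if': index a pair
        # of thunks by the truth value, then call only the chosen one.
        return (else_thunk, then_thunk)[bool(condition)]()

    x = 7
    print(my_if(x > 5, lambda: "big", lambda: "small"))   # big

    # Nothing at the call site reveals whether my_if is a built-in
    # lexeme or something written in the language.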

Whether a stack is explicitly exposed (as in Forth and PostScript) or not (as in Haskell and, AFAIK, Lisp), languages with uniform lexemes have inherently less forced structure, and fewer syntactic cues to that structure. So the programmer has to keep track of the structure manually.

