
Programming Distributed Systems

The author surveys languages for programming distributed systems and introduces a new model (the shared data-object model) and programming language (Orca).

The text includes:
- An in-depth survey of language constructs for distributed programming;
- Reference to nearly 100 languages;
- A detailed case study of the design and implementation of a new programming language called Orca based on a shared data-object model;
- Six example programs and an evaluation of their performance on three different platforms.
Contents

1. Introduction
2. Distributed Programming Without Language Support
3. Language Support for Programming Distributed Systems
4. Languages for Programming Distributed Systems
5. The Shared Data-object Model
6. Implementation
7. Example Programs and Their Performance
8. Conclusions

Appendices
A. Matrix Multiplication
B. The All-Pairs Shortest Paths
C. The Travelling Salesman Problem
D. The Alpha-Beta Search
E. Successive Overrelaxation
F. Performance of the Orca Programs
G. Performance of the C Programs

References
Index

269 pages, Paperback

First published January 1, 1990


About the author

Henri E. Bal

4 books


Community Reviews

5 stars: 0 (0%)
4 stars: 0 (0%)
3 stars: 1 (100%)
2 stars: 0 (0%)
1 star: 0 (0%)
Displaying 1 of 1 review
Alejandro Teruel
1,332 reviews · 254 followers
September 23, 2023
This was a 1989 Vrije Universiteit Ph.D. dissertation published by Silicon Press in 1990. It surveys the programming language mechanisms used at the time to program applications on distributed systems using coarse-grain concurrency, and it introduces a new mechanism based on shared data-objects which hides the physical distribution of the data-objects from the programmer. Bal designed an experimental, simple, type-safe and clean programming language called Orca in order to test the expressiveness and efficiency of the mechanism. To do this, he programmed a set of distributed algorithms to solve six problems: integer matrix multiplication, the all-pairs shortest paths problem, the travelling salesman problem, game tree search, successive overrelaxation and a chess problem solver. He implemented Orca on three platforms with ten homogeneous CPUs each: a shared-memory multiprocessor, an Ethernet-based distributed system that supports multicast, and an Ethernet-based distributed system running Andrew Tanenbaum's distributed operating system Amoeba, which allows only point-to-point remote procedure calls and thus must simulate multicast.

The results Bal obtained, though limited (systems of 10 processors are now considered minuscule), were very promising for the time. As far as I can tell from a preliminary search on the Internet, Orca continued to be developed and used up to 1994. However, almost thirty years later, in 2023, most of the distributed programming languages mentioned in the book have been defunct for quite some years. Be forewarned that although I worked on parallel programming in the 1990s and helped develop a parallel version of Simplex on a transputer-based system, I have not kept up with the field and thus cannot provide an accurate assessment of the book's interest for the current (2023) reader.

After a brief chapter on distributed programming without language support, in which a case study of the Amoeba distributed operating system is provided, the book includes a two-chapter survey of language support and programming languages for distributed systems at a key period in the development of concurrency. In the chapter providing an overview of language support, Bal broadly touches on issues such as fine-grained versus coarse-grained parallelism, mapping parallel computations onto processors, interprocess communication and synchronization (message passing, data sharing, expressing non-determinism) and dealing with fault tolerance.

The programming language survey covers imperative and, to a lesser degree, functional and logic programming languages. For imperative languages, Bal looks at both distributed and shared-memory models, including mechanisms such as synchronous message passing, asynchronous message passing, rendezvous, monitors, remote procedure calls, atomic transactions and distributed tuple spaces, singling out brief case studies of languages such as Occam (and its foundational model CSP), NIL, Ada, Concurrent C, Brinch Hansen's Distributed Processes, Gregory Andrews's Synchronizing Resources (SR), and Barbara Liskov's Argus. He also gives an overview of a parallel functional programming language, ParAlfl, and of two parallel logic programming languages, Concurrent Prolog and PARLOG. Bal was particularly interested in distributed data structures and looks at the object-based language Emerald and at distributed tuple spaces in Linda.

After evaluating the possibilities available at the time, Bal decided to design and evaluate a simple language for programming distributed algorithms, which he called Orca, based on a shared data-object model. The model can be understood as providing classes for objects whose operations (methods) are indivisible and which block while their guards are false. Such objects appear to the programmer to be “shared-memory” objects. Bal implements the shared data-object construct on a shared-memory multiprocessor system, a straightforward implementation mainly used as a baseline for comparing the performance of the algorithms on two kinds of distributed-memory systems: one with hardware support for efficient and reliable multicast, and one where multicast is simulated on top of point-to-point services. The run-time system decides whether and how shared data-objects are safely replicated on processors and whether they are migrated to new processors.
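The guarded, indivisible operations described above can be sketched in ordinary Python (an illustrative sketch only, not Orca syntax; the job-queue object and its operation names are invented for this example). Each operation runs atomically with respect to the others, and a guarded operation blocks while its guard is false:

```python
import threading

class JobQueue:
    """Sketch of a shared data-object: operations are indivisible,
    and a guarded operation blocks while its guard is false."""

    def __init__(self):
        self._cond = threading.Condition()  # one lock makes operations indivisible
        self._jobs = []
        self._done = False

    def add_job(self, job):
        # Unguarded operation: runs atomically, then wakes blocked callers.
        with self._cond:
            self._jobs.append(job)
            self._cond.notify_all()

    def get_job(self):
        # Guarded operation; guard: a job is available, or the queue is closed.
        with self._cond:
            while not self._jobs and not self._done:
                self._cond.wait()
            return self._jobs.pop(0) if self._jobs else None

    def close(self):
        # Signals worker processes that no more jobs will arrive.
        with self._cond:
            self._done = True
            self._cond.notify_all()
```

In Orca such an object would be shared by worker processes on different machines, with the run-time system hiding whether the object is local, remote or replicated.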
The shared-memory model is implemented using a two-phase locking protocol to prevent key inconsistency problems. Orca assumes reliable processors, so it does not incorporate recovery from failures.
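The two-phase discipline mentioned above can be sketched as follows (a minimal illustration, not Orca's actual run-time system; the `Replica` class and `update_all` helper are invented for the example): all replica locks are acquired before any write (growing phase) and released only after every copy is updated (shrinking phase), so no reader ever observes a half-propagated update.

```python
import threading

class Replica:
    """One copy of a replicated shared object on some processor."""
    def __init__(self, value=0):
        self.lock = threading.Lock()
        self.value = value

def update_all(replicas, new_value):
    # Growing phase: acquire every lock, in a fixed order to avoid deadlock.
    for r in replicas:
        r.lock.acquire()
    try:
        # All copies change while every lock is held.
        for r in replicas:
            r.value = new_value
    finally:
        # Shrinking phase: release locks only after all copies are updated.
        for r in replicas:
            r.lock.release()
```

Acquiring the locks in a fixed global order is one standard way to keep two-phase locking deadlock-free when several writers update the same set of replicas.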

The author obtained excellent speedups on problems where the communication overhead needed to simulate shared data-objects was kept under control, but (not surprisingly) speedup dropped sharply on the platform simulating multicast when the programs relied more heavily and frequently on up-to-date data from shared objects.

I was curious about the book, which I had obtained (but not read) in the 1990s when I was working in the area. For an old hand like me, it proved an interesting and well-written book, vividly reminding me of the state of parallel programming at the time, and of the ingenuity with which different mechanisms were proposed and tested. If you are interested in programming language mechanisms for concurrency, their trade-offs and their relationship to operating system mechanisms, I would recommend you look at Jean Bacon's Concurrent Systems: Operating Systems, Database and Distributed Systems: An Integrated Approach (second edition 1998; she also published a third edition in 2002).

The book can be borrowed from the Open Library.
