Using Code Examples: http://katherine.cox-buday.com/concurrency-in-go
Concurrency is a property of the code; parallelism is a property of the running program.
The chunks of our program may appear to be running in parallel, but really they’re executing in a sequential manner faster than is distinguishable. The CPU context switches to share time between different programs, and over a coarse enough granularity of time, the tasks appear to be running in parallel. If we were to run the same binary on a machine with two cores, the program’s chunks might actually be running in parallel.
The first is that we do not write parallel code, only concurrent code that we hope will be run in parallel. Once again, parallelism is a property of the runtime of our program, not the code.
The third and final interesting thing is that parallelism is a function of time, or context.
For example, if our context was a space of five seconds, and we ran two operations that each took a second to run, we would consider the operations to have run in parallel. If our context was one second, we would consider the operations to have run sequentially.
CSP stands for “Communicating Sequential Processes,” which is both a technique and the name of the paper that introduced it. In 1978, Charles Antony Richard Hoare published the paper in Communications of the ACM (the journal of the Association for Computing Machinery).
Package sync provides basic synchronization primitives such as mutual exclusion locks. Other than the Once and WaitGroup types, most are intended for use by low-level library routines. Higher-level synchronization is better done via channels and communication.
In particular, consider structuring your program so that only one goroutine at a time is ever responsible for a particular piece of data. Do not communicate by sharing memory. Instead, share memory by communicating.
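A minimal sketch of that idea, assuming a hypothetical running sum owned by a single goroutine; other goroutines communicate increments to it over a channel instead of sharing the variable itself:

package main

import "fmt"

func main() {
	increments := make(chan int)
	result := make(chan int)

	// Only this goroutine ever touches sum; everyone else
	// communicates with it rather than sharing the memory.
	go func() {
		sum := 0
		for delta := range increments {
			sum += delta
		}
		result <- sum
	}()

	for i := 0; i < 5; i++ {
		increments <- 1
	}
	close(increments)

	fmt.Println(<-result) // 5
}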
There are also numerous articles, lectures, and interviews where various members of the Go core team espouse the CSP style over primitives l...
That said, Go does provide traditional locking mechanisms in the sync package. Most locking issues can be solved using either channels or traditional locks. So which should you use? Use whichever is most expressive and/or most simple.
Are you trying to transfer ownership of data?
If you have a bit of code that produces a result and wants to share that result with another bit of code, what you’re really doing is transferring ownership of that data.
One large benefit of doing so is you can create buffered channels to implement a cheap in-memory queue and thus decouple your producer from your consumer. Another is that by using channels, you’ve implicitly made your concurrent code composable with other concurrent code.
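As an illustrative sketch (the names and buffer size here are mine, not the book's), a producer can hand results to a consumer through a buffered channel and never touch them again:

package main

import "fmt"

func main() {
	// The buffer acts as a cheap in-memory queue, decoupling the
	// producer's pace from the consumer's.
	results := make(chan []int, 4)

	go func() { // producer
		defer close(results)
		for i := 0; i < 4; i++ {
			// Ownership of each slice transfers to whoever receives it;
			// the producer never references it again.
			results <- []int{i, i * i}
		}
	}()

	for r := range results { // consumer
		fmt.Println(r)
	}
}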
Are you trying to guard internal state of a struct?
By using memory access synchronization primitives, you can hide the implementation detail of locking your critical section from your callers.
Remember the key word here is internal. If you find yourself exposing locks beyond a type, this should raise a red flag. Try to keep the locks constrained to a small lexical scope.
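A hedged sketch of keeping a lock internal to a type; the Counter type and its methods are hypothetical, not from the book:

package main

import (
	"fmt"
	"sync"
)

// Counter guards its internal state with a mutex; callers never
// see or manage the lock.
type Counter struct {
	mu    sync.Mutex
	value int
}

func (c *Counter) Increment() {
	c.mu.Lock()
	defer c.mu.Unlock() // critical section kept to a small lexical scope
	c.value++
}

func (c *Counter) Value() int {
	c.mu.Lock()
	defer c.mu.Unlock()
	return c.value
}

func main() {
	var c Counter
	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			c.Increment()
		}()
	}
	wg.Wait()
	fmt.Println(c.Value()) // 10
}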
Are you trying to coordinate multiple pieces of logic?
Remember that channels are inherently more composable than memory access synchronization primitives. Having locks scattered throughout your object-graph sounds like a nightmare, but having channels everywhere is expected and encouraged! I can compose chann...
You will find it much easier to control the emergent complexity that arises in your software if you use channels because of Go’s select statement, and their ability t...
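As a rough sketch of that composability (the channel names are mine), select lets a single goroutine coordinate several independent channels, including a cancellation signal, in one place:

package main

import (
	"fmt"
	"time"
)

func main() {
	messages := make(chan string)
	ticks := time.Tick(50 * time.Millisecond)
	done := make(chan struct{})

	go func() {
		messages <- "hello"
		time.Sleep(120 * time.Millisecond)
		close(done)
	}()

	// select composes the channels into one piece of coordinating logic.
	for {
		select {
		case m := <-messages:
			fmt.Println("message:", m)
		case <-ticks:
			fmt.Println("tick")
		case <-done:
			fmt.Println("done")
			return
		}
	}
}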
Is it a performance-critical section?
Go’s philosophy on concurrency can be summed up like this: aim for simplicity, use channels when possible, and treat goroutines like a free resource.
Coroutines are simply concurrent subroutines (functions, closures, or methods in Go) that are nonpreemptive; that is, they cannot be interrupted. Instead, coroutines have multiple points throughout their execution that allow for suspension or reentry.
Goroutines don’t define their own suspension or reentry points; Go’s runtime observes the runtime behavior of goroutines and automatically suspends them when they block and then resumes them when they become unblocked. In a way this makes them preemptable, but only at points where the goroutine has become blocked. It is an elegant partnership between the runtime and a goroutine’s logic.
Thus, goroutines can be considered a special cla...
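A small sketch of that behavior: the goroutine below blocks on a channel receive, the runtime parks it, and it is resumed only when a value arrives (the sleeps are illustrative only):

package main

import (
	"fmt"
	"time"
)

func main() {
	ch := make(chan string)

	go func() {
		// The goroutine blocks here; the runtime suspends it
		// until a value becomes available.
		msg := <-ch
		fmt.Println("resumed with:", msg)
	}()

	time.Sleep(100 * time.Millisecond) // the goroutine sits parked
	ch <- "wake up"                    // unblocked: the runtime resumes it
	time.Sleep(50 * time.Millisecond)  // give it time to print before main exits
}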
Go’s mechanism for hosting goroutines is an implementation of what’s called an M:N scheduler, which means it maps M green threads to N OS threads.
Go follows a model of concurrency called the fork-join model. The word fork refers to the fact that at any point in the program, it can split off a child branch of execution to be run concurrently with its parent. The word join refers to the fact that at some point in the future, these concurrent branches of execution will join back together. Where the child rejoins the parent is called a join point.
The go statement is how Go performs a fork, and the forked threads of execution are goroutines.
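A minimal sketch of a fork and a join point; here the join point is created with an unbuffered channel (a WaitGroup, covered next, works just as well):

package main

import "fmt"

func main() {
	done := make(chan struct{})

	// Fork: the go statement splits off a child branch of execution.
	go func() {
		fmt.Println("child branch running")
		close(done)
	}()

	// Join point: the parent waits here for the child branch to rejoin.
	<-done
	fmt.Println("branches joined")
}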
WaitGroup is a great way to wait for a set of concurrent operations to complete when you either don’t care about the result of the concurrent operation, or you have other means of collecting their results. If neither of those conditions are true, I suggest you use channels and a select statement instead.
We call Add with an argument of 1 to indicate that one goroutine is beginning. We call Done using the defer keyword to ensure that, before we exit the goroutine's closure, we indicate to the WaitGroup that we've exited. Finally, we call Wait, which blocks the main goroutine until all goroutines have indicated they have exited.
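The passage describes callouts on a code listing that the highlight doesn't capture; a minimal reconstruction of that pattern might look like this:

package main

import (
	"fmt"
	"sync"
)

func main() {
	var wg sync.WaitGroup

	wg.Add(1) // tell the WaitGroup one goroutine is beginning
	go func() {
		defer wg.Done() // before exiting the closure, signal that we've exited
		fmt.Println("1st goroutine running")
	}()

	wg.Add(1)
	go func() {
		defer wg.Done()
		fmt.Println("2nd goroutine running")
	}()

	wg.Wait() // blocks main until both goroutines have called Done
	fmt.Println("All goroutines complete.")
}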
Notice that the calls to Add are done outside the goroutines they’re helping to track.
Remember from “Goroutines” that we have no guarantees about when the goroutines will be scheduled; we could reach the call to Wait before either of the goroutines begins.
It’s customary to couple calls to Add as closely as possible to the goroutines they’re helping to track, but sometimes you’ll find Add called to track a group of goroutines all at once.
wg.Add(numGreeters)
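In context, that call presumably precedes a loop that launches the goroutines it accounts for; a sketch of that shape, with hypothetical names:

package main

import (
	"fmt"
	"sync"
)

func main() {
	hello := func(wg *sync.WaitGroup, id int) {
		defer wg.Done()
		fmt.Printf("Hello from #%v!\n", id)
	}

	const numGreeters = 5
	var wg sync.WaitGroup
	wg.Add(numGreeters) // track the whole group of goroutines at once
	for i := 0; i < numGreeters; i++ {
		go hello(&wg, i+1)
	}
	wg.Wait()
}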
Mutex stands for “mutual exclusion” and is a way to guard critical sections of your program.
A Mutex provides a concurrent-safe way to express exclusive access to these shared resources.
Whereas channels share memory by communicating, a Mutex shares memory by creating a convention developers must follow to synchronize access to the memory. You are...
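A brief sketch of that convention, assuming a simple shared counter: every goroutine that touches count must remember to take the same lock.

package main

import (
	"fmt"
	"sync"
)

func main() {
	var lock sync.Mutex // the convention: anyone touching count must hold lock
	var count int
	var wg sync.WaitGroup

	for i := 0; i < 5; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			lock.Lock()         // enter the critical section
			defer lock.Unlock() // leave it when the closure returns
			count++
		}()
	}

	wg.Wait()
	fmt.Println("count =", count) // 5
}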