Concurrency in Go: Tools and Techniques for Developers
Using Code Examples
http://katherine.cox-buday.com/concurrency-in-go.
Chapter 2. Modeling Your Code: Communicating Sequential Processes
The Difference Between Concurrency and Parallelism
Concurrency is a property of the code; parallelism is a property of the running program.
The chunks of our program may appear to be running in parallel, but really they’re executing in a sequential manner faster than is distinguishable. The CPU context switches to share time between different programs, and over a coarse enough granularity of time, the tasks appear to be running in parallel. If we were to run the same binary on a machine with two cores, the program’s chunks might actually be running in parallel.
The first is that we do not write parallel code, only concurrent code that we hope will be run in parallel. Once again, parallelism is a property of the runtime of our program, not the code.
The third and final interesting thing is that parallelism is a function of time, or context.
For example, if our context was a space of five seconds, and we ran two operations that each took a second to run, we would consider the operations to have run in parallel. If our context was one second, we would consider the operations to have run sequentially.
What Is CSP?
CSP stands for “Communicating Sequential Processes,” which is both a technique and the name of the paper that introduced it. In 1978, Charles Antony Richard Hoare published the paper in the Association for Computing Machinery (more popularly referred to as ACM).
Go’s Philosophy on Concurrency
Package sync provides basic synchronization primitives such as mutual exclusion locks. Other than the Once and WaitGroup types, most are intended for use by low-level library routines. Higher-level synchronization is better done via channels and communication.
In particular, consider structuring your program so that only one goroutine at a time is ever responsible for a particular piece of data. Do not communicate by sharing memory. Instead, share memory by communicating.
There are also numerous articles, lectures, and interviews where various members of the Go core team espouse the CSP style over primitives l...
That said, Go does provide traditional locking mechanisms in the sync package. Most locking issues can be solved using either channels or traditional locks. So which should you use? Use whichever is most expressive and/or most simple.
Are you trying to transfer ownership of data?
If you have a bit of code that produces a result and wants to share that result with another bit of code, what you’re really doing is transferring ownership of that data.
One large benefit of doing so is you can create buffered channels to implement a cheap in-memory queue and thus decouple your producer from your consumer. Another is that by using channels, you’ve implicitly made your concurrent code composable with other concurrent code.
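A minimal sketch of that producer/consumer decoupling (the channel name, buffer size, and values here are my own, not the book's):

package main

import "fmt"

func main() {
	// The buffered channel acts as a cheap in-memory queue: the producer
	// can run ahead of the consumer by up to five results before blocking.
	results := make(chan int, 5)

	// Producer: owns each value only until it sends it on the channel.
	go func() {
		defer close(results)
		for i := 0; i < 10; i++ {
			results <- i * i
		}
	}()

	// Consumer: receives ownership of each value; because only one
	// goroutine is responsible for a value at a time, no locks are needed.
	for r := range results {
		fmt.Println("received:", r)
	}
}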
Are you trying to guard internal state of a struct?
By using memory access synchronization primitives, you can hide the implementation detail of locking your critical section from your callers.
Remember the key word here is internal. If you find yourself exposing locks beyond a type, this should raise a red flag. Try to keep the locks constrained to a small lexical scope.
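A sketch of keeping a lock internal to a type (the Counter type is hypothetical, not a listing from the book): callers see only the methods, never the mutex.

package main

import (
	"fmt"
	"sync"
)

// Counter guards its internal state with an unexported mutex, hiding the
// locking of the critical section from callers.
type Counter struct {
	mu    sync.Mutex
	value int
}

func (c *Counter) Increment() {
	c.mu.Lock()
	defer c.mu.Unlock() // the lock never escapes this small lexical scope
	c.value++
}

func (c *Counter) Value() int {
	c.mu.Lock()
	defer c.mu.Unlock()
	return c.value
}

func main() {
	var c Counter
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			c.Increment()
		}()
	}
	wg.Wait()
	fmt.Println(c.Value()) // always 100
}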
Are you trying to coordinate multiple pieces of logic?
Remember that channels are inherently more composable than memory access synchronization primitives. Having locks scattered throughout your object-graph sounds like a nightmare, but having channels everywhere is expected and encouraged! I can compose chann...
You will find it much easier to control the emergent complexity that arises in your software if you use channels because of Go’s select statement, and their ability t...
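A sketch of that composition (the channels, messages, and one-second timeout are illustrative, not from the book): select lets a single goroutine coordinate several channels at once.

package main

import (
	"fmt"
	"time"
)

func main() {
	results := make(chan string)
	done := make(chan struct{})

	go func() {
		defer close(done) // signal completion by closing the channel
		results <- "work finished"
	}()

	// select composes channel operations: whichever case is ready proceeds,
	// and the timeout keeps the loop from blocking forever.
	for {
		select {
		case r := <-results:
			fmt.Println(r)
		case <-done:
			fmt.Println("worker exited")
			return
		case <-time.After(time.Second):
			fmt.Println("timed out")
			return
		}
	}
}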
Is it a performance-critical section?
Go’s philosophy on concurrency can be summed up like this: aim for simplicity, use channels when possible, and treat goroutines like a free resource.
Chapter 3. Go’s Concurrency Building Blocks
Goroutines
Coroutines are simply concurrent subroutines (functions, closures, or methods in Go) that are nonpreemptive — that is, they cannot be interrupted. Instead, coroutines have multiple points throughout which allow for suspension or reentry.
Goroutines don’t define their own suspension or reentry points; Go’s runtime observes the runtime behavior of goroutines and automatically suspends them when they block and then resumes them when they become unblocked. In a way this makes them preemptable, but only at points where the goroutine has become blocked. It is an elegant partnership between the runtime and a goroutine’s logic.
Thus, goroutines can be considered a special cla...
Go’s mechanism for hosting goroutines is an implementation of what’s called an M:N scheduler, which means it maps M green threads to N OS threads.
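A small sketch (mine, not the book's) of inspecting both sides of that mapping with the runtime package:

package main

import (
	"fmt"
	"runtime"
)

func main() {
	// N: how many OS threads may execute Go code simultaneously.
	fmt.Println("GOMAXPROCS:", runtime.GOMAXPROCS(0)) // passing 0 only queries the value
	fmt.Println("CPUs:      ", runtime.NumCPU())

	// M: the goroutines (green threads) multiplexed onto those OS threads.
	fmt.Println("goroutines:", runtime.NumGoroutine())
}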
Go follows a model of concurrency called the fork-join model.1 The word fork refers to the fact that at any point in the program, it can split off a child branch of execution to be run concurrently with its parent. The word join refers to the fact that at some point in the future, these concurrent branches of execution will join back together. Where the child rejoins the parent is called a join point
The go statement is how Go performs a fork, and the forked threads of execution are goroutines.
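A minimal sketch of a fork and its join point, using a sync.WaitGroup (introduced later in the chapter) as the join:

package main

import (
	"fmt"
	"sync"
)

func main() {
	var wg sync.WaitGroup

	wg.Add(1)
	// Fork: the go statement splits off a child branch of execution.
	go func() {
		defer wg.Done()
		fmt.Println("running concurrently with main")
	}()

	// Join point: the parent blocks here until the child branch rejoins it.
	wg.Wait()
	fmt.Println("child has rejoined")
}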
The sync Package
WaitGroup
WaitGroup is a great way to wait for a set of concurrent operations to complete when you either don’t care about the result of the concurrent operation, or you have other means of collecting their results. If neither of those conditions are true, I suggest you use channels and a select statement instead.
Here we call Add with an argument of 1 to indicate that one goroutine is beginning. Here we call Done using the defer keyword to ensure that before we exit the goroutine’s closure, we indicate to the WaitGroup that we’ve exited. Here we call Wait, which will block the main goroutine until all goroutines have indicated they have exited.
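Put together, the pattern the passage is walking through looks roughly like this sketch (the printed messages are mine, not the book's listing):

package main

import (
	"fmt"
	"sync"
)

func main() {
	var wg sync.WaitGroup

	wg.Add(1) // one goroutine is beginning
	go func() {
		defer wg.Done() // signal exit before the closure returns
		fmt.Println("first goroutine running")
	}()

	wg.Add(1)
	go func() {
		defer wg.Done()
		fmt.Println("second goroutine running")
	}()

	wg.Wait() // block until every tracked goroutine has called Done
	fmt.Println("all goroutines complete")
}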
Notice that the calls to Add are done outside the goroutines they’re helping to track.
remember from “Goroutines” that we have no guarantees about when the goroutines will be scheduled; we could reach the call to Wait before either of the goroutines begin.
It’s customary to couple calls to Add as closely as possible to the goroutines they’re helping to track, but sometimes you’ll find Add called to track a group of goroutines all at once.
wg.Add(numGreeters)
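That fragment is the bulk form of Add; a sketch of the surrounding pattern (the hello helper and the value of numGreeters are assumptions on my part):

package main

import (
	"fmt"
	"sync"
)

func hello(wg *sync.WaitGroup, id int) {
	defer wg.Done()
	fmt.Printf("Hello from #%d!\n", id)
}

func main() {
	const numGreeters = 5
	var wg sync.WaitGroup

	// One Add call tracks the whole group of goroutines at once.
	wg.Add(numGreeters)
	for i := 0; i < numGreeters; i++ {
		go hello(&wg, i+1)
	}
	wg.Wait()
}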
Mutex and RWMutex
Mutex stands for “mutual exclusion” and is a way to guard critical sections of your program.
A Mutex provides a concurrent-safe way to express exclusive access to these shared resources.
whereas channels share memory by communicating, a Mutex shares memory by creating a convention developers must follow to synchronize access to the memory. You are
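A sketch of that convention (the shared count variable is my own example): every goroutine touching the shared memory must agree to take the same lock first.

package main

import (
	"fmt"
	"sync"
)

func main() {
	var mu sync.Mutex // the convention: hold mu while touching count
	var count int

	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			mu.Lock()         // enter the critical section
			defer mu.Unlock() // always release, even if the code panics
			count++
		}()
	}
	wg.Wait()
	fmt.Println("count:", count) // always 10
}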