Kindle Notes & Highlights
Read between January 28 - February 6, 2022
It’s hard to overstate hash tables’ usefulness. When I want to solve a problem, the two plans of attack I start with are “Can I use a hash table?” and “Can I model this as a graph?”
In general, for any list of n elements, binary search will take log2 n steps to run in the worst case, whereas simple search will take n steps.
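That worst-case step count is easy to see in code. A minimal binary search sketch in Python (the book's example language); the names here are mine, not the book's:

```python
# Binary search on a sorted list: halve the search range each step,
# so a list of n elements takes at most about log2(n) steps.
def binary_search(sorted_list, target):
    low, high = 0, len(sorted_list) - 1
    while low <= high:
        mid = (low + high) // 2
        guess = sorted_list[mid]
        if guess == target:
            return mid          # found: return the index
        if guess < target:
            low = mid + 1       # target is in the upper half
        else:
            high = mid - 1      # target is in the lower half
    return None                 # target is not in the list

print(binary_search([1, 3, 5, 7, 9], 7))  # → 3
print(binary_search([1, 3, 5, 7, 9], 4))  # → None
```

Each pass through the loop discards half of the remaining range, which is exactly why the worst case is log2 n rather than n.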
You may not remember what logarithms are, but you probably know what exponentials are. log10 100 is like asking, “How many 10s do we multiply together to get 100?” The answer is 2: 10 × 10. So log10 100 = 2. Logs are the flip of exponentials.
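The "flip" can be checked directly with Python's standard math module:

```python
import math

# log10(100) asks: how many 10s multiply together to give 100?
# 10 * 10 = 100, so the answer is 2.
print(math.log10(100))  # → 2.0

# Likewise log2(8): 2 * 2 * 2 = 8, so the answer is 3.
print(math.log2(8))     # → 3.0
```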
Big O doesn’t tell you the speed in seconds. Big O notation lets you compare the number of operations. It tells you how fast the algorithm grows.
Here are five Big O run times that you'll encounter a lot, sorted from fastest to slowest:
- O(log n), also known as log time. Example: Binary search.
- O(n), also known as linear time. Example: Simple search.
- O(n * log n). Example: A fast sorting algorithm, like quicksort (coming up in chapter 4).
- O(n^2). Example: A slow sorting algorithm, like selection sort (coming up in chapter 2).
- O(n!). Example: A really slow algorithm, like the traveling salesperson (coming up next!).
Algorithm speed isn’t measured in seconds, but in growth of the number of operations. Instead, we talk about how quickly the run time of an algorithm increases as the size of the input increases. Run time of algorithms is expressed in Big O notation. O(log n) is faster than O(n), and its advantage grows as the list of items you’re searching grows.
Binary search is a lot faster than simple search. O(log n) is faster than O(n), and the gap widens as the list of items you’re searching through grows. Algorithm speed isn’t measured in seconds. Algorithm times are measured in terms of the growth in the number of operations. Algorithm times are written in Big O notation.
There are two different types of access: random access and sequential access. Sequential access means reading the elements one by one, starting at the first element. Linked lists can only do sequential access. If you want to read the 10th element of a linked list, you have to read the first 9 elements and follow the links to the 10th element. Random access means you can jump directly to the 10th element.
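A bare-bones sketch of why linked-list access is sequential (the class and function names are illustrative, not from the book):

```python
# A minimal singly linked list node: a value plus a link to the next node.
class Node:
    def __init__(self, value, next_node=None):
        self.value = value
        self.next = next_node

# To read element i, you must follow i links from the head --
# there is no way to jump straight to it, unlike with an array.
def get(head, index):
    node = head
    for _ in range(index):  # walk the links one by one
        node = node.next
    return node.value

# Build the list 0 -> 1 -> 2 -> 3 by prepending.
head = None
for v in [3, 2, 1, 0]:
    head = Node(v, head)

print(get(head, 2))  # → 2
```

Arrays, by contrast, support random access: the address of the i-th element can be computed directly, with no walking.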
To solve a problem using D&C, there are two steps:
1. Figure out the base case. This should be the simplest possible case.
2. Divide or decrease your problem until it becomes the base case.
When you’re writing a recursive function involving an array, the base case is often an empty array or an array with one element. If you’re stuck, try that first.
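Both points can be seen in a tiny recursive sum over a list, a sketch in the spirit of the book's D&C examples:

```python
# Recursive sum, following the two D&C steps:
# 1. base case: an empty list sums to 0
# 2. decrease: peel off the first element and recurse on the rest
def rec_sum(items):
    if not items:        # base case: empty array
        return 0
    return items[0] + rec_sum(items[1:])  # shrink toward the base case

print(rec_sum([2, 4, 6]))  # → 12
print(rec_sum([]))         # → 0
```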
If you like recursion, or you’re interested in learning a new language, check out Haskell.
To calculate the shortest path in an unweighted graph, use breadth-first search. To calculate the shortest path in a weighted graph, use Dijkstra’s algorithm.
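For the unweighted case, a short breadth-first search sketch (the graph and function names are illustrative): because BFS explores nodes in order of their distance from the start, the first time it reaches the goal is guaranteed to be along a shortest path.

```python
from collections import deque

# BFS shortest path on an unweighted graph given as adjacency lists.
def shortest_path_length(graph, start, goal):
    queue = deque([(start, 0)])   # (node, distance from start)
    visited = {start}
    while queue:
        node, dist = queue.popleft()
        if node == goal:
            return dist           # first arrival = shortest path
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append((neighbor, dist + 1))
    return None                   # goal is unreachable

graph = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
print(shortest_path_length(graph, "a", "d"))  # → 2
```

For weighted graphs this guarantee breaks down (a two-edge path can be cheaper than a one-edge path), which is why Dijkstra's algorithm is needed there.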
This is called an approximation algorithm. When calculating the exact solution will take too much time, an approximation algorithm will work. Approximation algorithms are judged by:
- How fast they are
- How close they are to the optimal solution
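A classic example of this trade-off is a greedy approximation for set cover; the sketch below (names and data are mine, for illustration) repeatedly picks whichever option covers the most still-uncovered items. It is fast, and its answer is close to, but not guaranteed to be, optimal.

```python
# Greedy set-cover approximation: cover `needed` using as few
# options as the greedy rule finds.
def greedy_cover(needed, options):
    needed = set(needed)
    chosen = []
    while needed:
        # pick the option covering the most still-uncovered items
        best = max(options, key=lambda name: len(options[name] & needed))
        if not options[best] & needed:
            break                 # nothing covers what's left
        chosen.append(best)
        needed -= options[best]
    return chosen

# Hypothetical radio stations and the states each one covers.
stations = {
    "kone":   {"id", "nv", "ut"},
    "ktwo":   {"wa", "id", "mt"},
    "kthree": {"or", "nv", "ca"},
    "kfour":  {"nv", "ut"},
    "kfive":  {"ca", "az"},
}
wanted = {"mt", "wa", "or", "id", "nv", "ut", "ca", "az"}
print(greedy_cover(wanted, stations))  # → ['kone', 'ktwo', 'kthree', 'kfive']
```

The greedy pass runs in roughly O(number of options * number of items) per round, versus trying every subset of options, which is O(2^n).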
Sometimes algorithms aren’t an exact recipe. They’re a framework that you build your idea on top of.
Levenshtein distance measures how similar two strings are, and it uses dynamic programming. Levenshtein distance is used for everything from spell-check to figuring out whether a user is uploading copyrighted data.
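The dynamic-programming idea is a grid where each cell holds the edit distance between two prefixes; a minimal sketch:

```python
# Levenshtein distance via dynamic programming: dp[i][j] is the edit
# distance between the first i chars of a and the first j chars of b.
def levenshtein(a, b):
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        dp[i][0] = i              # i deletions turn a's prefix into ""
    for j in range(len(b) + 1):
        dp[0][j] = j              # j insertions turn "" into b's prefix
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(a)][len(b)]

print(levenshtein("kitten", "sitting"))  # → 3
```

Each cell is filled from its three neighbors, which is the hallmark of the dynamic-programming grid the book builds for similar problems.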

