Kindle Notes & Highlights
Read June 13, 2024
With big O notation we express the runtime in terms of—brace yourself—how quickly it grows relative to the input, as the input gets arbitrarily large.
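Not from the book, just a quick Python sketch to make the growth idea concrete. The function names are made up; the point is how the number of steps scales with the input size n.

```python
def get_first(items):
    # O(1) time: one step, no matter how long the list is
    return items[0]

def total(items):
    # O(n) time: the work grows in direct proportion to len(items)
    running_sum = 0
    for item in items:
        running_sum += item
    return running_sum

def all_pairs(items):
    # O(n^2) time: roughly n * n pairs as the input grows
    return [(a, b) for a in items for b in items]
```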
Usually when we talk about space complexity, we're talking about additional space, so we don't include space taken up by the inputs.
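A small illustration (my own, not a highlight) of "additional space": both functions below see the same input, but only one allocates space proportional to it.

```python
def reverse_in_place(items):
    # O(1) additional space: just two index variables, no copy of the input
    left, right = 0, len(items) - 1
    while left < right:
        items[left], items[right] = items[right], items[left]
        left += 1
        right -= 1

def reversed_copy(items):
    # O(n) additional space: builds a whole new list the size of the input
    return items[::-1]
```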
A great engineer (startup or otherwise) knows how to strike the right balance between runtime, space, implementation time, maintainability, and readability.
You should develop the skill to see time and space optimizations, as well as the wisdom to judge if those optimizations are worthwhile.
Random Access Memory (RAM)—we can Access the bits at any Random address in Memory right away.
Spinning hard drives don't have this "random access" superpower, because there's no direct connection to each byte on the disc. Instead, there's a reader—called a head—that moves along the surface of a spinning storage disc (like the needle on a record player). Reading bytes that are far apart takes longer because you have to wait for the head to physically move along the disc.
Even though the memory controller can jump between far-apart memory addresses quickly, programs tend to access memory that's nearby. So computers are tuned to get an extra speed boost when reading memory addresses that are near each other.
The processor has a cache where it stores a copy of stuff it's recently read from RAM.
This cache is much faster to read from than RAM, so the processor saves time whenever it can read something from cache instead of going out to RAM.
When the processor asks for the contents of a given memory address, the memory controller also sends the contents of a handful of nearby memory addresses. And the processor puts all of it in the cache.
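A rough way to feel this from Python (my sketch, not the book's): reading the same list in order versus in a shuffled order does the same amount of "work" in big O terms, but the shuffled version tends to be slower, partly because it keeps missing the cache. Exact numbers depend on your machine and Python build.

```python
import random
import timeit

n = 5_000_000
data = list(range(n))

sequential = list(range(n))
shuffled = sequential[:]
random.shuffle(shuffled)

def read_all(indices):
    total = 0
    for i in indices:
        total += data[i]   # nearby reads are cache-friendly; random reads are not
    return total

print("sequential:", timeit.timeit(lambda: read_all(sequential), number=1))
print("shuffled:  ", timeit.timeit(lambda: read_all(shuffled), number=1))
```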
Base 10 is also called decimal. Base 2 is also called binary.
The places in binary (base 2) are sequential powers of 2: 2^0 = 1, 2^1 = 2, 2^2 = 4, 2^3 = 8, etc.
How many different numbers can we express with 1 byte (8 bits)? 2^8 = 256 different numbers.
In general, with n bits, we can express 2^n different possible numbers!
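Quick check in Python (mine, not the book's): enumerate every distinct pattern of 3 bits and count them.

```python
from itertools import product

# every distinct 3-bit pattern: there are 2**3 == 8 of them
patterns = list(product("01", repeat=3))
print(len(patterns))  # 8
print(2 ** 3)         # 8
```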
we usually use 4 or 8 bytes (32 or 64 bits) for storing integers.
32-bit integers have 2^32 possible values—more than 4 billion. 64-bit integers have 2^64 possible values—more than 10 billion billion (10^19).
When is 32 bits not enough? When you're counting views on a viral video. YouTube famously ran into trouble when the Gangnam Style video hit over 2^31 views, forcing them to upgrade their view counts from 32-bit to 64-bit integers.
If we have a 64-bit fixed-length integer, it doesn't matter if that integer is 0 or 193,457—it still takes up the same amount of space in RAM: 64 bits.
In big O notation, we say fixed-width integers take up constant space or O(1) space. And because they have a constant number of bits, most simple operations on fixed-width integers (addition, subtraction, multiplication, division) take constant time (O(1) time).
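A small demonstration using Python's struct module (my addition; Python's own ints are actually arbitrary-precision, so struct is standing in for fixed-width storage here).

```python
import struct

# pack two different values as 64-bit (8-byte) signed integers
small = struct.pack("<q", 0)
bigger = struct.pack("<q", 193_457)
print(len(small), len(bigger))   # 8 8 -> same amount of space either way

# a 32-bit slot can't hold values past its range
struct.pack("<i", 2**31 - 1)     # fits
# struct.pack("<i", 2**31)       # would raise struct.error: out of range
```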
RAM is basically an array already.
Arrays have fast lookups (O(1) time), but each item in the array needs to be the same size, and you need a big block of uninterrupted free memory to store the array.
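Those O(1) lookups fall out of simple address arithmetic. Here's a hypothetical sketch (the base address and item size are made up) of how an index turns into a memory address in one multiplication and one addition:

```python
ITEM_SIZE_BYTES = 8          # assume every item is the same size: 64 bits
array_start_address = 1024   # made-up address where the block begins

def address_of(index):
    # one multiply and one add -> O(1), no matter how big the index is
    return array_start_address + index * ITEM_SIZE_BYTES

print(address_of(0))   # 1024
print(address_of(10))  # 1104
```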
This mapping of numbers to characters is called a character encoding.
since we can express characters as 8-bit integers, we can express strings as arrays of 8-bit characters.
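Trying that out in Python (my example): encoding a string gives back an array of 8-bit values, and decoding those values gives back the string.

```python
text = "cake"
encoded = text.encode("ascii")   # bytes: an array of 8-bit integers
print(list(encoded))             # [99, 97, 107, 101]
print(bytes([99, 97, 107, 101]).decode("ascii"))  # 'cake'
```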
Appending an item to an array is usually an O(1) time operation, but a single doubling append is an O(n) time operation since we have to copy all n items from our array.
The advantage of dynamic arrays over arrays is that you don't have to specify the size ahead of time, but the disadvantage is that some appends can be expensive. That's the tradeoff.
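A minimal sketch of a doubling dynamic array (mine, not the book's code), built on top of a fixed-size Python list standing in for a raw block of memory:

```python
class DynamicArray:
    """Toy dynamic array: appends are usually O(1), doubling appends are O(n)."""

    def __init__(self):
        self._capacity = 1
        self._size = 0
        self._slots = [None] * self._capacity

    def append(self, item):
        if self._size == self._capacity:
            self._double()              # the occasional expensive append
        self._slots[self._size] = item  # the usual O(1) append
        self._size += 1

    def _double(self):
        self._capacity *= 2
        new_slots = [None] * self._capacity
        for i in range(self._size):     # copy all n existing items
            new_slots[i] = self._slots[i]
        self._slots = new_slots

    def get(self, index):
        if index >= self._size:
            raise IndexError(index)
        return self._slots[index]
```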
We would call each of these two-item arrays a node and we'd call this series of nodes a linked list.
Linked lists have worst-case O(1)-time appends, which is better than the worst-case O(n) time of dynamic arrays.
So linked lists have faster prepends (O(1) time) than dynamic arrays (O(n) time).
So if linked lists are so great, why do we usually store strings in an array? Because arrays have O(1)-time lookups. And those constant-time lookups come from the fact that all the array elements are lined up next to each other in memory.
So the tradeoff with linked lists is they have faster prepends and faster appends than dynamic arrays, but they have slower lookups.
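A sketch of that tradeoff in code (my own, assuming we keep both a head and a tail pointer so appends and prepends are O(1), while lookups walk the list):

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.next = None

class LinkedList:
    def __init__(self):
        self.head = None
        self.tail = None

    def prepend(self, value):        # O(1)
        node = Node(value)
        node.next = self.head
        self.head = node
        if self.tail is None:
            self.tail = node

    def append(self, value):         # O(1), thanks to the tail pointer
        node = Node(value)
        if self.tail is None:
            self.head = node
        else:
            self.tail.next = node
        self.tail = node

    def get(self, index):            # O(n): walk node by node from the head
        current = self.head
        for _ in range(index):
            if current is None:
                raise IndexError(index)
            current = current.next
        if current is None:
            raise IndexError(index)
        return current.value
```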
The process we used to translate a key into an array index is called a hashing function.
Note that our quick lookups are only in one direction—we can quickly get the value for a given key, but the only way to get the key for a given value is to walk through all the values and keys. Same thing with arrays—we can quickly look up the value at a given index, but the only way to figure out the index for a given value is to walk through the whole array.
hashing function gives us the same answer for "lies" and "foes." This is called a hash collision.
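Checking that collision in Python (my example, using a toy "sum the character codes" hashing function; the book's exact function may differ, but the sums really do match):

```python
def hash_key(key, num_buckets):
    # toy hashing function: add up the character codes, then mod by the array size
    return sum(ord(ch) for ch in key) % num_buckets

# 'lies' and 'foes' both sum to 429, so they land in the same bucket: a collision
print(hash_key("lies", 10), hash_key("foes", 10))  # 9 9
```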
In industry though, we usually wave our hands and say collisions are rare enough that on average lookups in a hash table are O(1) time. And there are fancy algorithms that keep the number of collisions low and keep the lengths of our linked lists nice and short.
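A toy hash table with separate chaining (my sketch; in real Python you'd just use a dict, which handles all of this for you). Colliding keys simply share a bucket, and lookups scan that one short bucket:

```python
class HashTable:
    """Toy hash table: buckets of [key, value] pairs handle collisions."""

    def __init__(self, num_buckets=16):
        self._buckets = [[] for _ in range(num_buckets)]

    def _bucket_for(self, key):
        # Python's built-in hash() stands in for our hashing function
        return self._buckets[hash(key) % len(self._buckets)]

    def set(self, key, value):
        bucket = self._bucket_for(key)
        for pair in bucket:
            if pair[0] == key:
                pair[1] = value          # overwrite an existing key
                return
        bucket.append([key, value])      # collisions just share the bucket

    def get(self, key):
        bucket = self._bucket_for(key)
        for existing_key, value in bucket:
            if existing_key == key:      # O(1) on average, O(n) worst case
                return value
        raise KeyError(key)
```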
what a logarithm is asking: "What power must we raise this base to, in order to get this answer?"
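Worked example (mine): "what power must we raise 2 to, in order to get 8?" The answer is 3, and that's exactly what log base 2 of 8 gives back.

```python
import math

print(math.log2(8))      # 3.0, because 2 ** 3 == 8
print(math.log10(1000))  # 3.0, because 10 ** 3 == 1000
```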

