Computer Science Distilled: Learn the Art of Solving Computational Problems (Code is Awesome)
2%
Computer science is not about machines, in the same way that astronomy is not about telescopes. There is an essential unity of mathematics and computer science.
2%
When you’re on a complex task, keep your brain at the top of its game: dump all important stuff on paper.
3%
Programmers often use flowcharts for writing down computing processes. When doing so, you should follow these guidelines for others to understand your flowcharts:
4%
A model is a set of concepts that represents a problem and its characteristics.
7%
Hot Server: A server crashes if it’s overheating while the air conditioning is off. It also crashes if it’s overheating and its chassis cooler fails. In which conditions does the server work?
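The puzzle can be settled by modeling each condition as a boolean variable and brute-forcing the truth table; a minimal sketch, with variable names of my own choosing:

```python
from itertools import product

# Model each condition as a boolean variable and encode the crash rule.
def crashes(overheating, ac_on, cooler_fails):
    return (overheating and not ac_on) or (overheating and cooler_fails)

# The server works in every truth assignment where it doesn't crash.
working = [combo for combo in product([False, True], repeat=3)
           if not crashes(*combo)]
```

Enumerating the table shows the server works exactly when it isn't overheating, or when the air conditioning is on and the chassis cooler holds.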
9%
Whenever you’re dealing with things that assume one of two possibilities, remember they can be modeled as logic variables.
10%
The working principles of the modern CPU are still based on boolean algebra. A modern CPU is just a circuit of millions of microscopic wires and logic gates that manipulate electric currents of information.
14%
When the outcome of an event does not influence the outcome of another event, they are independent. The probability that two independent events will happen is the product of their individual probabilities.
14%
When two events cannot happen simultaneously, they are mutually exclusive.
15%
When two mutually exclusive events cover all possible outcomes, they are complementary.
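The three rules can be checked with a fair six-sided die; the numbers are an illustrative example of mine, not the book's:

```python
# Fair six-sided die: each face has probability 1/6.
p_face = 1 / 6

# Independent events multiply: two dice both showing 6.
p_two_sixes = p_face * p_face        # 1/36

# Mutually exclusive events add: one die showing 1 or 2.
p_one_or_two = p_face + p_face       # 1/3

# Complementary events sum to 1: showing 6 vs. not showing 6.
p_not_six = 1 - p_face               # 5/6
```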
15%
Past events never affect the outcome of an independent event. Never. Ever.
16%
For instance, you can use the “Pigeonhole Principle” to prove at least two people in New York City have exactly the same number of hairs!
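The argument is plain arithmetic; the figures below are rough, commonly cited estimates (my assumption, not data from the book):

```python
import math

# Rough estimates: ~8 million New Yorkers (the pigeons), and a human
# head carries at most ~200,000 hairs, so only ~200,001 distinct hair
# counts exist (the pigeonholes).
nyc_population = 8_000_000
possible_hair_counts = 200_001

# Pigeonhole principle: with more people than possible counts, some
# count must repeat; in fact it's shared by at least this many people.
min_sharing = math.ceil(nyc_population / possible_hair_counts)
```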
17%
A method is a list of unambiguous instructions for achieving a goal. A method that always requires a finite series of operations is called an algorithm.
17%
Time complexity is written T(n). It gives the number of operations the algorithm performs when processing an input of size n. We also refer to an algorithm's T(n) as its running cost.
20%
There's a special notation to refer to classes of growth: the Big-O notation.
20%
The notation is used for expressing the dominant term of algorithms' cost functions in the worst case—that's the standard way of expressing time complexity.
21%
That's why you need time complexity analysis when you design systems that handle very large inputs.
21%
Some algorithms always run for a constant duration regardless of input size—they're O(1).
22%
During execution, algorithms need working storage to keep track of their ongoing calculations. This consumes computer memory, which is not infinite.
22%
The measure for the working storage an algorithm needs is called space complexity.
22%
We observe how space complexity evolves when the algorithm's input size grows.
22%
In future chapters, we'll see how clever data handling can improve space complexity.
24%
The iterative strategy consists in using loops (e.g. for, while) to repeat a process until a condition is met. Each step in a loop is called an iteration.
24%
The process can be written in a single while loop:
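The highlight cuts off before the book's own code; as a generic stand-in for the idea, here is a process written in a single while loop (finding the smallest power of two that reaches n, an example of my own):

```python
def next_power_of_two(n):
    power = 1
    while power < n:   # each pass of the loop is one iteration
        power *= 2     # repeat the step...
    return power       # ...until the condition is met
```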
26%
We say there's recursion when a function delegates work to clones of itself.
26%
How do you code a function that returns the nth Fibonacci number?
26%
Recursive algorithms have base cases, when the input is too small to be further reduced. Base cases for fib are numbers 1 and 2; for palindrome, they're words with one or zero characters.
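A minimal sketch of both recursive functions named above, with their base cases spelled out:

```python
def fib(n):
    if n <= 2:                 # base cases: fib(1) = fib(2) = 1
        return 1
    return fib(n - 1) + fib(n - 2)   # delegate to two smaller clones

def palindrome(word):
    if len(word) <= 1:         # base case: one or zero characters
        return True
    if word[0] != word[-1]:
        return False
    return palindrome(word[1:-1])    # delegate the trimmed word
```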
27%
This simplicity comes at a price. Recursive algorithms spawn numerous clones of themselves when they run, introducing a computational overhead.
27%
This potential problem can be visualized in a recursion tree: a diagram showing how the algorithm spawns more calls as it delves deeper into calculations.
27%
If performance must be maximized, we can avoid this overhead by rewriting recursive algorithms in a purely iterative form. Doing so is always possible. It's a trade: the iterative code generally runs faster but it's also more complex and harder to understand.
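For instance, the recursive fib above can be rewritten iteratively; a sketch of the general idea, keeping the state in two variables instead of spawning recursive calls:

```python
def fib_iterative(n):
    previous, current = 1, 1        # fib(1) and fib(2)
    for _ in range(n - 2):          # advance the pair n - 2 times
        previous, current = current, previous + current
    return current
```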
27%
The brute force strategy solves problems by inspecting all of the problem's possible solution candidates.
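As an illustration (a toy problem of my own, not the book's), brute force over every pair of numbers to find the largest product:

```python
from itertools import combinations

def max_pair_product(numbers):
    best = None
    for a, b in combinations(numbers, 2):   # inspect all candidate pairs
        if best is None or a * b > best:
            best = a * b
    return best
```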
30%
Backtracking works best in problems where the solution is a sequence of choices and making a choice restrains subsequent choices. It identifies as soon as possible when the choices you've made cannot give you the solution you want, so you can step back sooner and try something else. Fail early, fail often.
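A classic illustration is the N-queens puzzle (my example, not necessarily the book's): place one queen per column, and as soon as a partial placement cannot lead to a solution, step back and try the next choice:

```python
def solve_queens(n, placed=()):
    if len(placed) == n:                  # all columns filled: solved
        return placed
    col = len(placed)
    for row in range(n):
        # a choice restrains later choices: no shared row or diagonal
        if all(row != r and abs(row - r) != col - c
               for c, r in enumerate(placed)):
            solution = solve_queens(n, placed + (row,))
            if solution is not None:
                return solution
    return None                           # dead end: backtrack
```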
30%
A heuristic method, or simply a heuristic, is a method that leads to a solution without guaranteeing it is the best or optimal one.
30%
A very common heuristic approach to problems is the greedy approach. It consists in never coming back to previous choices.
31%
A salesman must visit n given cities, ending the trip in the city he started. Which travel plan minimizes the total distance traveled?
31%
Here's a simple greedy algorithm for this problem:
1. Visit the nearest unvisited city.
2. Repeat until all cities are visited.
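Those two steps can be sketched in code; representing cities as plane points with straight-line distance is my assumption:

```python
def distance(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def nearest_neighbor_tour(cities, start):
    tour, unvisited = [start], set(cities) - {start}
    while unvisited:
        # 1. Visit the nearest unvisited city.
        nearest = min(unvisited, key=lambda city: distance(tour[-1], city))
        tour.append(nearest)
        unvisited.remove(nearest)
        # 2. Repeat until all cities are visited.
    tour.append(start)          # end the trip where it started
    return tour
```

Like any greedy heuristic, this never revisits a choice, so it's fast but may miss the truly shortest tour.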
36%
Dynamic programming is identifying repeated subproblems in order to compute them only once.
36%
We can fix it by storing fib calculations as we do them, only spawning fib calls for calculations not already stored. This trick for reusing partial calculations is called memoization.
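A sketch of memoization applied to fib: store each result the first time it's computed, and reuse it afterwards:

```python
memo = {}

def fib(n):
    if n not in memo:          # compute each subproblem only once
        memo[n] = 1 if n <= 2 else fib(n - 1) + fib(n - 2)
    return memo[n]
```

With the store in place, fib(50) needs only about 50 entries instead of billions of recursive calls.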
39%
Many problems involve minimizing or maximizing a target value: find the shortest path, get the maximum profit, etc. They're called optimization problems.
39%
When the solution is a sequence of choices, we often use a strategy called branch and bound.
39%
Upper and Lower Bounds: Bounds refer to the range of a value.
39%
We can often easily get suboptimal solutions: a short path, but maybe not the shortest; a big profit, but maybe not the biggest. These solutions provide bounds to the optimal solution. For instance, the shortest route between two places is never shorter than their straight linear distance. The linear distance is thus a lower bound of the shortest driving distance.
40%
The wise use of upper and lower bounds allowed us to find the optimal profit with minimal computational effort.
40%
We dynamically adapted our search space as we were exploring possibilities. To recap, here's how branch and bound works:
1. Divide the problem into subproblems,
2. Find upper and lower bounds of each new subproblem,
3. Compare subproblem bounds of all branches,
4. Return to step 1 with the most promising subproblem.
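The steps above can be sketched on a 0/1 knapsack (an example of my own, not the book's): branching splits on take/skip, the lower bound is the best complete solution found so far, and the upper bound optimistically fills the remaining room with a fraction of the next item:

```python
def knapsack(items, capacity):
    # items: (value, weight) pairs; sort by value density so the
    # fractional upper bound below is valid
    items = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)
    best = 0                                    # lower bound: best so far

    def upper_bound(i, value, room):
        # optimistic estimate: greedily fill the remaining room,
        # allowing a fraction of the first item that doesn't fit
        for v, w in items[i:]:
            if w <= room:
                value, room = value + v, room - w
            else:
                return value + v * room / w
        return value

    def branch(i, value, room):
        nonlocal best
        best = max(best, value)
        if i == len(items) or upper_bound(i, value, room) <= best:
            return                              # prune: not promising
        v, w = items[i]
        if w <= room:
            branch(i + 1, value + v, room - w)  # branch: take item i
        branch(i + 1, value, room)              # branch: skip item i

    branch(0, 0, capacity)
    return best
```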
41%
With backtracking, we remove paths after having explored them as far as we can, and we stop when we're OK with a solution. With branch and bound, we predict which paths are worst and we avoid wasting energy exploring them.
43%
An Abstract Data Type (ADT) is the specification of a group of operations that make sense for a given data type.
44%
The Stack is used when we have a pile of items, and only work with its top item.
44%
A Stack implementation must provide at least these two operations:
•  push(e): add an item e to the top of the stack,
•  pop(): retrieve and remove the item on top of the stack.
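In Python, a plain list already covers both operations (append plays the role of push); the pushed strings are invented for illustration:

```python
stack = []
stack.append('draft saved')   # push(e): add to the top
stack.append('typo fixed')    # push(e)
top = stack.pop()             # pop(): retrieve and remove the top item
```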
45%
For implementing the "undo" feature in your text editor, every edit you make is pushed onto a stack. Should you want to undo, the text editor pops an edit from the stack and reverts it.
45%
The Queue is the Stack's antagonist. It's also used for storing and retrieving items, but the retrieved item is always the one in front of the Queue, i.e., the one that has been on the queue the longest.
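In Python, collections.deque offers efficient operations at both ends, so it serves as a Queue; the enqueued strings are invented for illustration:

```python
from collections import deque

queue = deque()
queue.append('first in line')    # enqueue at the back
queue.append('second in line')   # enqueue
front = queue.popleft()          # dequeue the item waiting the longest
```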