Designing Machine Learning Systems: An Iterative Process for Production-Ready Applications
In general, when there are multiple objectives, it’s a good idea to decouple them first because it makes model development and maintenance easier. First, it’s easier to tweak your system without retraining models, as previously explained. Second, it’s easier for maintenance since different objectives might need different maintenance schedules.
In the mind-over-data camp, there’s Dr. Judea Pearl, a Turing Award winner best known for his work on causal inference and Bayesian networks. The introduction to his book The Book of Why is entitled “Mind over Data,” in which he emphasizes: “Data is profoundly dumb.” In one of his more controversial posts on Twitter in 2020, he expressed his strong opinion against ML approaches that rely heavily on data and warned that data-centric ML people might be out of a job in three to five years: “ML will not be the same in 3–5 years, and ML folks who continue to follow the current data-centric paradigm …
If data models describe the data in the real world, databases specify how the data should be stored on machines. We’ll continue to discuss data storage engines, also known as databases, for the two major types of processing: transactional and analytical.
An ML system can work with data from many different sources. They have different characteristics, can be used for different purposes, and require different processing methods. Understanding the sources your data comes from can help you use your data more efficiently.
One source is user input data, data explicitly input by users. User input can be text, images, videos, uploaded files, etc. If it’s even remotely possible for users to input wrong data, they are going to do it. As a result, user input data can be easily malformatted. Text might be too long or too short. Where numerical values are expected, users might accidentally enter text. If you let users upload files, they might upload files in the wrong formats. User input data requires more heavy-duty checking and processing. On top of that, users also have little patience. In most cases, when we input …
Another source is system-generated data. This is the data generated by different components of your systems, which include various types of logs and system outputs such as model predictions.
Because debugging ML systems is hard, it’s a common practice to log everything you can. This means that your volume of logs can grow very, very quickly. This leads to two problems. The first is that it can be hard to know where to look because signals are lost in the noise. There have been many services that process and analyze logs, such as Logstash, Datadog, Logz.io, etc. Many of them use ML models to help you process and make sense of your massive number of logs.
Row-Major Versus Column-Major Format
The two formats that are common and represent two distinct paradigms are CSV and Parquet. CSV (comma-separated values) is row-major, which means consecutive elements in a row are stored next to each other in memory. Parquet is column-major, which means consecutive elements in a column are stored next to each other.
Overall, row-major formats are better when you have to do a lot of writes, whereas column-major ones are better when you have to do a lot of column-based reads.
NumPy Versus pandas
One subtle point that a lot of people don’t pay attention to, which leads to misuses of pandas, is that this library is built around the columnar format. pandas is built around DataFrame, a concept inspired by R’s Data Frame, which is column-major. A DataFrame is a two-dimensional table with rows and columns.
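A minimal sketch of what this means in practice, with made-up shapes and column names (timings will vary by machine): summing one pandas column touches a contiguous block of data, while iterating over rows forces pandas to stitch each row together from many columns; NumPy defaults to row-major but lets you request either layout.

import numpy as np
import pandas as pd
import timeit

# Hypothetical toy DataFrame: 10,000 rows x 100 columns of random floats.
df = pd.DataFrame(np.random.rand(10_000, 100), columns=[f"col_{i}" for i in range(100)])

# Column access: pandas stores each column contiguously, so this is cheap.
col_time = timeit.timeit(lambda: df["col_0"].sum(), number=100)

# Row iteration: every row is assembled from 100 separate columns, which is far slower.
row_time = timeit.timeit(lambda: sum(r["col_0"] for _, r in df.iterrows()), number=1)
print(f"column sum x100: {col_time:.4f}s | row iteration x1: {row_time:.4f}s")

# NumPy arrays default to row-major (C order); column-major (Fortran order) is opt-in.
arr_c = np.zeros((1000, 1000), order="C")
arr_f = np.zeros((1000, 1000), order="F")
print(arr_c.flags["C_CONTIGUOUS"], arr_f.flags["F_CONTIGUOUS"])  # True True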
Text Versus Binary Format
CSV and JSON are text files, whereas Parquet files are binary files. Text files are files that are in plain text, which usually means they are human-readable. Binary files are the catchall that refers to all nontext files. As the name suggests, binary files are typically files that contain only 0s and 1s, and are meant to be read or used by programs that know how to interpret the raw bytes.
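A small sketch of the same data written both ways (file names are illustrative; to_parquet assumes pyarrow or fastparquet is installed): the CSV can be opened in any text editor, while the Parquet file only makes sense to a program that understands the format.

import os
import numpy as np
import pandas as pd

# Hypothetical toy table.
df = pd.DataFrame({"user_id": np.arange(100_000), "amount": np.random.rand(100_000)})
df.to_csv("sample.csv", index=False)          # text format
df.to_parquet("sample.parquet", index=False)  # binary format

print("csv bytes:", os.path.getsize("sample.csv"), "| parquet bytes:", os.path.getsize("sample.parquet"))
with open("sample.csv") as f:
    print(f.readline().strip())  # human-readable header: "user_id,amount"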
Data models describe how data is represented.
The most important thing to note about SQL is that it’s a declarative language, as opposed to Python, which is an imperative language. In the imperative paradigm, you specify the steps needed for an action and the computer executes these steps to return the outputs. In the declarative paradigm, you specify the outputs you want, and the computer figures out the steps needed to get you the queried outputs.
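A toy illustration of the two paradigms, using Python’s built-in sqlite3 module and made-up data: the SQL query only describes the result you want (drivers and their total fares above a cutoff), while the Python version spells out every step.

import sqlite3

rides = [("ann", 12.0), ("bob", 30.0), ("ann", 25.0)]
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE rides (driver TEXT, fare REAL)")
conn.executemany("INSERT INTO rides VALUES (?, ?)", rides)

# Declarative: say what you want; the engine decides how to compute it.
rows = conn.execute(
    "SELECT driver, SUM(fare) FROM rides WHERE fare > 10 GROUP BY driver"
).fetchall()

# Imperative: specify the steps (filter, group, accumulate) yourself.
totals = {}
for driver, fare in rides:
    if fare > 10:
        totals[driver] = totals.get(driver, 0.0) + fare

print(rows)    # e.g. [('ann', 37.0), ('bob', 30.0)]
print(totals)  # {'ann': 37.0, 'bob': 30.0}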
The document model is built around the concept of “document.” A document is often a single continuous string, encoded as JSON, XML, or a binary format like BSON (Binary JSON). All documents in a document database are assumed to be encoded in the same format. Each document has a unique key that represents that document, which can be used to retrieve it.
Because the document model doesn’t enforce a schema, it’s often referred to as schemaless. This is misleading because, as discussed previously, data stored in documents will be read later. The application that reads the documents usually assumes some kind of structure of the documents. Document databases just shift the responsibility of assuming structures from the application that writes the data to the application that reads the data.
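A minimal sketch of that shift in responsibility, with plain Python dicts standing in for a document store and made-up order documents: nothing at write time forces the documents to share a schema, so it is the reading code that assumes a structure.

# Hypothetical documents, each retrievable by a unique key; shapes are not enforced.
orders = {
    "order-1": {"user": "ann", "items": [{"sku": "A1", "qty": 2}], "coupon": "WELCOME10"},
    "order-2": {"user": "bob", "items": [{"sku": "B7", "qty": 1}]},  # no "coupon" field
}

def total_items(order_id):
    # The reader assumes every document has an "items" list whose entries have "qty".
    doc = orders[order_id]
    return sum(item["qty"] for item in doc.get("items", []))

print(total_items("order-1"), total_items("order-2"))  # 2 1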
The graph model is built around the concept of a “graph.” A graph consists of nodes and edges, where the edges represent the relationships between the nodes. A database that uses graph structures to store its data is called a graph database. If in document databases, the content of each document is the priority, then in graph databases, the relationships between data items are the priority.
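A tiny sketch of the graph idea with plain Python structures (names are made up; a real graph database such as Neo4j would make these relationship queries first-class): the edges, not the node contents, carry most of the information.

nodes = {"alice": {"type": "person"}, "bob": {"type": "person"}, "acme": {"type": "company"}}
edges = [("alice", "WORKS_AT", "acme"), ("bob", "WORKS_AT", "acme"), ("alice", "KNOWS", "bob")]

# "Who works at acme?" is a question about relationships, answered by walking edges.
coworkers = [src for src, rel, dst in edges if rel == "WORKS_AT" and dst == "acme"]
print(coworkers)  # ['alice', 'bob']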
A repository for storing structured data is called a data warehouse. A repository for storing unstructured data is called a data lake. Data lakes are usually used to store raw data before processing. Data warehouses are used to store data that has been processed into formats ready to be used.
Transactional databases are designed to process online transactions and satisfy the low latency, high availability requirements. When people hear transactional databases, they usually think of ACID (atomicity, consistency, isolation, durability).
Atomicity: To guarantee that all the steps in a transaction are completed successfully as a group. If any step in the transaction fails, all other steps must fail also. For example, if a user’s payment fails, you don’t want to still assign a driver to that user (see the sketch after these definitions).
Consistency: To guarantee that all the transactions coming through must follow predefined rules. For example, a transaction must be made by a valid user.
Isolation: To guarantee that two transactions happen at the same time as if they were isolated. Two users accessing the same data won’t change it at the same time. For example, you don’t want…
Durability: To guarantee that once a transaction has been committed, it will remain committed even in the case of a system failure. For example, after you’ve ordered a ride…
However, transactional databases don’t necessarily need to be ACID, and some developers find ACID to be too restrictive. According to Martin Kleppmann, “systems that do not meet the ACID criteria are sometimes called BASE, which stands for Basically Available, Soft state, and Eventual consistency.”
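To make the atomicity example above concrete, here is a minimal sketch using Python’s built-in sqlite3 module (the table names and ride-booking scenario are made up): charging the payment and assigning the driver either both commit or both roll back.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE payments (user TEXT, amount REAL)")
conn.execute("CREATE TABLE assignments (user TEXT, driver TEXT)")

def book_ride(user, amount, driver):
    try:
        conn.execute("INSERT INTO payments VALUES (?, ?)", (user, amount))
        if driver is None:
            raise RuntimeError("no driver available")  # simulate a failing step
        conn.execute("INSERT INTO assignments VALUES (?, ?)", (user, driver))
        conn.commit()    # both steps are kept together...
    except Exception:
        conn.rollback()  # ...or neither is kept

book_ride("ann", 12.0, "driver-7")  # committed
book_ride("cat", 9.0, None)         # rolled back: no orphan payment row
print(conn.execute("SELECT COUNT(*) FROM payments").fetchone()[0])  # 1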
The most popular styles of requests used for passing data through networks are REST (representational state transfer) and RPC (remote procedure call).
Request-driven architecture works well for systems that rely more on logic than on data. Event-driven architecture works better for systems that are data-heavy.
For many problems, you need not only batch features or streaming features, but both. You need infrastructure that allows you to process streaming data as well as batch data and join them together to feed into your ML models.
To do computation on data streams, you need a stream computation engine (the way Spark and MapReduce are batch computation engines). For simple streaming computation, you might be able to get away with the built-in stream computation capacity of real-time transports like Apache Kafka, but Kafka stream processing is limited in its ability to deal with various data sources. For ML systems that leverage streaming features, the streaming computation is rarely simple. The number of stream features used in an application such as fraud detection and credit scoring can be in the hundreds, if not …
Nonprobability sampling is when the selection of data isn’t based on any probability criteria.
Convenience sampling
Snowball sampling
Judgment sampling
Quota sampling
The samples selected by nonprobability criteria are not representative of the real-world data and therefore are riddled with selection biases. Because of these biases, you might think that it’s a bad idea to select data to train ML models using this family of sampling methods. You’re right. Unfortunately, in many cases, the selection of data for ML models is still driven by convenience.
In the simplest form of random sampling, you give all samples in the population equal probabilities of being selected. For example, you randomly select 10% of the population, giving all members of this population an equal 10% chance of being selected. The advantage of this method is that it’s easy to implement. The drawback is that rare categories of data might not appear in your selection. Consider the case where a class appears only in 0.01% of your data population. If you randomly select 1% of your data, samples of this rare class will unlikely be selected. Models trained on this selection …
To avoid the drawback of simple random sampling, you can first divide your population into the groups that you care about and sample from each group separately. For example, to sample 1% of data that has two classes, A and B, you can sample 1% of class A and 1% of class B. This way, no matter how rare class A or B is, you’ll ensure that samples from it will be included in the selection. Each group is called a stratum, and this method is called stratified sampling. One drawback of this sampling method is that it isn’t always possible, such as when it’s impossible to divide all samples into …
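A short sketch contrasting the two approaches on a made-up dataset with a rare class (pandas’ GroupBy.sample assumes pandas 1.1 or newer): simple random sampling can miss the rare class entirely, while sampling within each stratum guarantees it shows up.

import pandas as pd

# Hypothetical data where class B is roughly 0.1% of rows.
df = pd.DataFrame({"feature": range(100_000)})
df["label"] = ["B" if i % 1000 == 0 else "A" for i in range(100_000)]

simple = df.sample(frac=0.01, random_state=0)                     # may miss class B
stratified = df.groupby("label").sample(frac=0.01, random_state=0)  # 1% within each stratum

print(simple["label"].value_counts().to_dict())      # B can be absent
print(stratified["label"].value_counts().to_dict())  # B is sampled from its own stratum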
In weighted sampling, each sample is given a weight, which determines the probability of it being selected. For example, if you have three samples, A, B, and C, and want them to be selected with the probabilities of 50%, 30%, and 20% respectively, you can give them the weights 0.5, 0.3, and 0.2. This method allows you to leverage domain expertise. For example, if you know that a certain subpopulation of data, such as more recent data, is more valuable to your model and want it to have a higher chance of being selected, you can give it a higher weight.
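The three-sample example above, as a quick sketch with Python’s standard library: over many draws, the empirical frequencies approach the given weights.

import random

samples = ["A", "B", "C"]
weights = [0.5, 0.3, 0.2]
draws = random.choices(samples, weights=weights, k=10_000)  # weighted sampling with replacement
print({s: round(draws.count(s) / len(draws), 3) for s in samples})  # roughly {'A': 0.5, 'B': 0.3, 'C': 0.2}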
Reservoir sampling is a fascinating algorithm that is especially useful when you have to deal with streaming data, which is usually what you have in production. Imagine you have an incoming stream of tweets and you want to sample a certain number, k, of tweets to do analysis or train a model on. You don’t know how many tweets there are, but you know you can’t fit them all in memory, which means you don’t know in advance the probability at which a tweet should be selected. You want to ensure that:
Every tweet has an equal probability of being selected.
You can stop the algorithm at any time and …
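A minimal implementation sketch of the classic reservoir sampling algorithm (the tweet stream here is simulated): at any point, every item seen so far has had the same k/n chance of being in the reservoir, and you can stop whenever you like.

import random

def reservoir_sample(stream, k):
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)       # fill the reservoir with the first k items
        else:
            j = random.randint(0, i)     # pick an index in [0, i]
            if j < k:
                reservoir[j] = item      # replace an existing item with probability k/(i+1)
    return reservoir

fake_tweet_stream = (f"tweet_{i}" for i in range(1_000_000))
print(reservoir_sample(fake_tweet_stream, k=10))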
Importance sampling is one of the most important sampling methods, not just in ML. It allows us to sample from a distribution when we only have access to another distribution.
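A small numerical sketch of the idea (the distributions and function are made up): we want an expectation under a target distribution p, but can only draw from a proposal q, so each sample is reweighted by p(x)/q(x). The self-normalized estimate below should land near the true value of 2.

import numpy as np

rng = np.random.default_rng(0)
# Target p = Normal(1, 1); proposal q = Normal(0, 2); we want E_p[x^2] = 1 + 1 = 2.
def log_p(x): return -0.5 * (x - 1.0) ** 2    # log-density of p, up to an additive constant
def log_q(x): return -0.5 * (x / 2.0) ** 2    # log-density of q, up to an additive constant

x = rng.normal(0.0, 2.0, size=200_000)        # samples from q, not p
w = np.exp(log_p(x) - log_q(x))               # unnormalized importance weights p(x)/q(x)
estimate = np.sum(w * x ** 2) / np.sum(w)     # self-normalized importance sampling
print(round(estimate, 3))                     # ~2.0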
Despite the promise of unsupervised ML, most ML models in production today are supervised, which means that they need labeled data to learn from. The performance of an ML model still depends heavily on the quality and quantity of the labeled data it’s trained on.
The canonical example of tasks with natural labels is recommender systems. The goal of a recommender system is to recommend to users items relevant to them.
If hand labeling is so problematic, what if we don’t use hand labels altogether? One approach that has gained popularity is weak supervision. One of the most popular open source tools for weak supervision is Snorkel, developed at the Stanford AI Lab. The insight behind weak supervision is that people rely on heuristics, which can be developed with subject matter expertise, to label data. For example, a doctor might use the following heuristics to decide whether a patient’s case should be prioritized as emergent: If the nurse’s note mentions a serious condition like pneumonia, the patient’s …
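A stripped-down sketch of the labeling-function idea in plain Python (this is not Snorkel’s actual API; Snorkel also learns how to combine and weight the functions rather than taking a simple vote): each heuristic either votes or abstains.

EMERGENT, NOT_EMERGENT, ABSTAIN = 1, 0, -1

def lf_serious_condition(note):
    # Heuristic from the passage: a note mentioning pneumonia suggests an emergent case.
    return EMERGENT if "pneumonia" in note.lower() else ABSTAIN

def lf_routine_visit(note):
    return NOT_EMERGENT if "routine checkup" in note.lower() else ABSTAIN

def weak_label(note):
    votes = [lf(note) for lf in (lf_serious_condition, lf_routine_visit)]
    votes = [v for v in votes if v != ABSTAIN]
    return max(set(votes), key=votes.count) if votes else ABSTAIN  # naive majority vote

print(weak_label("Suspected pneumonia, SpO2 88%"))   # 1 (emergent)
print(weak_label("Routine checkup, no complaints"))  # 0 (not emergent)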
Semi-supervised learning is a technique that was used back in the 90s, and since then many semi-supervision methods have been developed. A comprehensive review of semi-supervised learning is out of the scope of this book. We’ll go over a small subset of these methods to give readers a sense of how they are used. For a comprehensive review, I recommend “Semi-Supervised Learning Literature Survey” (Xiaojin Zhu, 2008) and “A Survey on Semi-Supervised Learning” (Engelen and Hoos, 2018). A classic semi-supervision method is self-training. You start by training a model on your existing set of …
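A compact self-training sketch with scikit-learn on synthetic data (the model choice, confidence threshold, and round count are arbitrary): train on the labeled set, pseudo-label the unlabeled pool, keep only the confident predictions, and retrain.

import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(X_lab, y_lab, X_unlab, threshold=0.95, rounds=5):
    X_lab, y_lab = X_lab.copy(), y_lab.copy()
    model = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)
    for _ in range(rounds):
        if len(X_unlab) == 0:
            break
        probs = model.predict_proba(X_unlab).max(axis=1)
        preds = model.predict(X_unlab)
        keep = probs >= threshold                  # only trust confident pseudo-labels
        if not keep.any():
            break
        X_lab = np.vstack([X_lab, X_unlab[keep]])
        y_lab = np.concatenate([y_lab, preds[keep]])
        X_unlab = X_unlab[~keep]
        model = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)  # retrain on the enlarged set
    return model

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2)); y = (X[:, 0] + X[:, 1] > 0).astype(int)
model = self_train(X[:20], y[:20], X[20:])   # 20 labeled samples, 480 unlabeled
print(round(model.score(X, y), 3))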
Transfer learning refers to the family of methods where a model developed for a task is reused as the starting point for a model on a second task. First, the base model is trained for a base task. The base task is usually a task that has cheap and abundant training data. Language modeling is a great candidate because it doesn’t require labeled data. Language models can be trained on any body of text—books, Wikipedia articles, chat histories—and the task is: given a sequence of tokens, predict the next token. When given the sequence “I bought NVIDIA shares because I believe in the importance …
Active learning is a method for improving the efficiency of data labels. The hope here is that ML models can achieve greater accuracy with fewer training labels if they can choose which data samples to learn from. Active learning is sometimes called query learning—though this term is getting increasingly unpopular—because a model (active learner) sends back queries in the form of unlabeled samples to be labeled by annotators (usually humans).
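A bare-bones uncertainty-sampling sketch with scikit-learn and synthetic data (names and the query budget are illustrative): the model asks humans to label the pool points it is least confident about.

import numpy as np
from sklearn.linear_model import LogisticRegression

def query_most_uncertain(model, X_pool, n_queries=10):
    top_prob = model.predict_proba(X_pool).max(axis=1)   # confidence in the predicted class
    return np.argsort(top_prob)[:n_queries]              # least confident points first

rng = np.random.default_rng(0)
X_lab = rng.normal(size=(50, 2)); y_lab = (X_lab[:, 0] > 0).astype(int)
X_pool = rng.normal(size=(1000, 2))

model = LogisticRegression().fit(X_lab, y_lab)
print(query_most_uncertain(model, X_pool))  # indices to send to annotators for labeling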
Class imbalance typically refers to a problem in classification tasks where there is a substantial difference in the number of samples in each class of the training data. For example, in a training dataset for the task of detecting lung cancer from X-ray images, 99.99% of the X-rays might be of normal lungs, and only 0.01% might contain cancerous cells.
The classical example of tasks with class imbalance is fraud detection. Most credit card transactions are not fraudulent. As of 2018, 6.8¢ for every $100 in cardholder spending is fraudulent. Another is churn prediction. The majority of your customers are probably not planning on canceling their subscription. If they are, your business has more to worry about than churn prediction algorithms. Other examples include disease screening (most people, fortunately, don’t have terminal illness) and resume screening (98% of job seekers are eliminated at the initial resume screening).
Precision = True Positive / (True Positive + False Positive)
Recall = True Positive / (True Positive + False Negative)
F1 = 2 × Precision × Recall / (Precision + Recall)
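The same formulas in code, with made-up confusion-matrix counts:

tp, fp, fn = 90, 10, 30
precision = tp / (tp + fp)                          # 0.9
recall = tp / (tp + fn)                             # 0.75
f1 = 2 * precision * recall / (precision + recall)  # ~0.818
print(precision, recall, round(f1, 3))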
Many classification problems can be modeled as regression problems. Your model can output a probability, and based on that probability, you classify the sample. For example, if the value is greater than 0.5, it’s a positive label, and if it’s less than or equal to 0.5, it’s a negative label. This means that you can tune the threshold to increase the true positive rate (also known as recall) while decreasing the false positive rate (also known as the probability of false alarm), and vice versa. We can plot the true positive rate against the false positive rate for different thresholds. This …
Like F1 and recall, the ROC curve focuses only on the positive class and doesn’t show how well your model does on the negative class. Davis and Goadrich suggested that we should plot precision against recall instead, in what they termed the Precision-Recall Curve. They argued that this curve gives a more informative picture of an algorithm’s performance on tasks with heavy class imbalance.
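A short sketch of both curves with scikit-learn on synthetic, heavily imbalanced data (the model scores are faked with noise): the ROC view can look comfortable while precision at a useful recall stays modest.

import numpy as np
from sklearn.metrics import roc_curve, precision_recall_curve, roc_auc_score

rng = np.random.default_rng(0)
y_true = (rng.random(10_000) < 0.02).astype(int)        # ~2% positive class
y_score = rng.normal(loc=y_true * 1.5, scale=1.0)       # positives tend to score higher

fpr, tpr, _ = roc_curve(y_true, y_score)                # ROC: TPR vs FPR across thresholds
prec, rec, _ = precision_recall_curve(y_true, y_score)  # PR: precision vs recall across thresholds

print("AUC-ROC:", round(roc_auc_score(y_true, y_score), 3))
print("precision at ~80% recall:", round(prec[np.argmin(np.abs(rec - 0.8))], 3))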
Data augmentation is a family of techniques that are used to increase the amount of training data. Traditionally, these techniques are used for tasks that have limited training data, such as in medical imaging. However, in the last few years, they have been shown to be useful even when we have a lot of data—augmented data can make our models more robust to noise and even adversarial attacks.