Data matching (also known as record or data linkage, entity resolution, object identification, or field matching) is the task of identifying, matching, and merging records that correspond to the same entities across several databases, or even within a single database. Drawing on research in domains including applied statistics, health informatics, data mining, machine learning, artificial intelligence, database management, and digital libraries, significant advances have been achieved over the last decade in all aspects of the data matching process, especially in improving the accuracy of data matching and its scalability to large databases.

Peter Christen’s book is divided into three parts. Part I, “Overview”, introduces the subject by presenting several sample applications and their special challenges, together with a general overview of a generic data matching process. Part II, “Steps of the Data Matching Process”, then details its main steps: pre-processing, indexing, field and record comparison, classification, and quality evaluation. Lastly, Part III, “Further Topics”, deals with specific aspects such as privacy, real-time matching, and matching unstructured data, and briefly describes the main features of many research and open source systems available today.

By providing the reader with a broad range of data matching concepts and techniques, and by touching on all aspects of the data matching process, this book helps researchers as well as students specializing in data quality or data matching to familiarize themselves with recent research advances and to identify open research challenges in the area. To this end, each chapter includes a final section with pointers to further background and research material. Practitioners will better understand the current state of the art in data matching, as well as the internal workings and limitations of current systems.
In particular, they will learn that it is often not feasible to simply deploy an existing off-the-shelf data matching system without substantial adaptation and customization. Such practical considerations are discussed for each of the major steps in the data matching process.
I had a "record linking" problem (data over here, and data over there ... are they the same?) and I wanted an overview of the thinking in that field. This book did exactly what I needed: it gave me that overview and let me go find the right algorithm.
I'm glad I have a print version of this: there's a fair bit of "see table 3.1 on page 33", and I'm not sure I'd find that easy to cope with on a little Kindle device.
This book gives a good overview of the problems and solutions around data linkage and deduplication: the problem of having multiple records referring to the same entity across different databases (record linkage) or within a single database (deduplication).
The book starts with an overview of the process, and then provides details on each step in the following chapters: preprocessing, indexing, record matching, classification, and evaluation. One chapter discusses existing data matching systems, some of which are freely available. This feels a bit dated now (2018), but it might still be useful. There is also a chapter on privacy, but it focuses on data sharing between organizations, which I don’t think is the most common problem. The final chapter covers topics that are still developing, such as matching using distributed systems and stream processing.
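To make the pipeline the book describes concrete, here is a minimal sketch of record linkage in standard-library Python: pre-processing (normalization), indexing (blocking on the surname initial), field comparison (approximate string similarity), and threshold classification. The records, field names, blocking key, and the 0.85 threshold are all invented for illustration, not taken from the book.

```python
from difflib import SequenceMatcher

# Toy records from two hypothetical sources (names and fields invented).
db_a = [{"id": "a1", "name": "Jon Smith",  "city": "Sydney"},
        {"id": "a2", "name": "Mary Jones", "city": "Perth"}]
db_b = [{"id": "b1", "name": "John Smith", "city": "Sydney"},
        {"id": "b2", "name": "Peter Chan", "city": "Hobart"}]

def preprocess(rec):
    # Pre-processing: normalize case and whitespace.
    return {k: v.strip().lower() if isinstance(v, str) else v
            for k, v in rec.items()}

def block_key(rec):
    # Indexing/blocking: only compare records that agree on a cheap key,
    # here the first letter of the surname.
    return rec["name"].split()[-1][0]

def similarity(r1, r2):
    # Field comparison: average approximate string similarity over fields.
    fields = ("name", "city")
    return sum(SequenceMatcher(None, r1[f], r2[f]).ratio()
               for f in fields) / len(fields)

def link(db1, db2, threshold=0.85):
    # Classification: candidate pairs scoring above the threshold are matches.
    blocks = {}
    for rec in map(preprocess, db2):
        blocks.setdefault(block_key(rec), []).append(rec)
    matches = []
    for rec in map(preprocess, db1):
        for cand in blocks.get(block_key(rec), []):
            score = similarity(rec, cand)
            if score >= threshold:
                matches.append((rec["id"], cand["id"], round(score, 2)))
    return matches

print(link(db_a, db_b))  # "Jon Smith" and "John Smith" link; the others do not
```

Blocking is what keeps this scalable: instead of comparing every record in one database against every record in the other, only pairs sharing a block key are compared, which is the core idea behind the indexing chapter.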
It’s an introduction to the different ways of solving these problems but it probably doesn’t go into enough detail to immediately implement something. For that, there’s a bibliography with over 200 items.
There is a fair bit of repetition in the book - presumably intentional, so that different sections can be read independently. That makes sense, as this is a good book to come back to: find the reference or method you need, then go to a more detailed text for the specifics.