In the last 40 years, machine vision has evolved into a mature field embracing a wide range of applications including surveillance, automated inspection, robot assembly, vehicle guidance, traffic monitoring and control, signature verification, biometric measurement, and analysis of remotely sensed images. While researchers and industry specialists continue to document their work in this area, it has become increasingly difficult for professionals and graduate students to understand the essential theory and practicalities well enough to design their own algorithms and systems. This book directly addresses this need. As in earlier editions, E.R. Davies clearly and systematically presents the basic concepts of the field in highly accessible prose and images, covering essential elements of the theory while emphasizing algorithmic and practical design constraints. In this thoroughly updated edition, he divides the material into horizontal levels of a complete machine vision system. Application case studies demonstrate specific techniques and illustrate key constraints for designing real-world machine vision systems.
· Includes solid, accessible coverage of 2-D and 3-D scene analysis.
· Offers thorough treatment of the Hough Transform―a key technique for inspection and surveillance.
· Brings vital topics and techniques together in an integrated system design approach.
· Takes full account of the requirement for real-time processing in real applications.
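To make the Hough Transform mentioned above concrete: each edge pixel votes for every (ρ, θ) line that could pass through it, and peaks in the vote accumulator mark detected lines. The sketch below is a generic, minimal illustration of that idea, not code from the book; the `hough_lines` helper, its parameters, and the toy diagonal-line example are my own assumptions.

```python
import numpy as np

def hough_lines(edge_points, width, height, n_theta=180, n_rho=None):
    """Accumulate votes in (rho, theta) space for a set of edge pixels.

    edge_points: iterable of (x, y) edge-pixel coordinates.
    Returns (accumulator, rho axis, theta axis).
    """
    # rho can range over +/- the image diagonal
    diag = int(np.ceil(np.hypot(width, height)))
    if n_rho is None:
        n_rho = 2 * diag + 1
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rhos = np.linspace(-diag, diag, n_rho)
    acc = np.zeros((n_rho, n_theta), dtype=np.int32)
    cos_t, sin_t = np.cos(thetas), np.sin(thetas)
    for x, y in edge_points:
        # each edge pixel votes once per theta bin, at rho = x cos(theta) + y sin(theta)
        rho_vals = x * cos_t + y * sin_t
        rho_idx = np.round((rho_vals + diag) / (2 * diag) * (n_rho - 1)).astype(int)
        acc[rho_idx, np.arange(n_theta)] += 1
    return acc, rhos, thetas

# toy example: 20 edge pixels on the line y = x (rho = 0, theta = 3*pi/4)
pts = [(i, i) for i in range(20)]
acc, rhos, thetas = hough_lines(pts, 20, 20)
peak = np.unravel_index(np.argmax(acc), acc.shape)
```

The accumulator peak collects one vote per collinear pixel, so its height directly measures how much edge evidence supports each candidate line — the robustness to occlusion and noise that makes the transform attractive for inspection work.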
Extremely comprehensive overview of computer vision techniques prior to the current deep learning wave. The textbook has since been updated to a newer edition. While this won't teach you how to build a convolutional neural network to recognize pictures of cats, it covers almost every other conceivable vision technique, and I felt much more grounded after having read the whole thing cover to cover. (I am actually going over parts of it again to make sure I got all the math).
Beyond its comprehensiveness, the book is particularly notable for Davies' insistence on principled rather than ad hoc approaches. Again and again he begins a chapter by describing a simple trick people once used, notes where the technique is weak, breaks down how it works - sometimes down to the pixel-level math - and uses that analysis to design a new, better technique, for which he presents empirical evidence of its superiority (usually from a paper he wrote or co-wrote :-D).