Machine learning develops intelligent computer systems that are able to generalize from previously seen examples. A new domain of machine learning, in which the prediction must satisfy the additional constraints found in structured data, poses one of machine learning's greatest challenges: learning functional dependencies between arbitrary input and output domains. This volume presents and analyzes the state of the art in machine learning algorithms and theory in this novel field. The contributors discuss applications as diverse as machine translation, document markup, computational biology, and information extraction, among others, providing a timely overview of an exciting field.

Contributors: Yasemin Altun, Gökhan Bakır, Olivier Bousquet, Sumit Chopra, Corinna Cortes, Hal Daumé III, Ofer Dekel, Zoubin Ghahramani, Raia Hadsell, Thomas Hofmann, Fu Jie Huang, Yann LeCun, Tobias Mann, Daniel Marcu, David McAllester, Mehryar Mohri, William Stafford Noble, Fernando Pérez-Cruz, Massimiliano Pontil, Marc'Aurelio Ranzato, Juho Rousu, Craig Saunders, Bernhard Schölkopf, Matthias W. Seeger, Shai Shalev-Shwartz, John Shawe-Taylor, Yoram Singer, Alexander J. Smola, Sandor Szedmak, Ben Taskar, Ioannis Tsochantaridis, S. V. N. Vishwanathan, and Jason Weston
A good book on structured output prediction. The introductory chapters and some of the later ones gave me interesting and novel insights into kernel methods in general. I particularly liked the tutorial on energy-based models. Other chapters were somewhat too mathematical and technical to get their message across.
A rigorous, comprehensive, succinct, and even attractively priced and bound introduction to machine learning developments since 2000, with an informed focus on bioinformatics applications throughout. This fine job from MIT Press belongs on every researcher's bookshelf.