Status Updates From Pattern Recognition and Machine Learning
Pattern Recognition and Machine Learning (Information Science and Statistics) by Christopher M. Bishop
Showing 1-18 of 18
Duc Nguyen
is on page 52 of 738
n+1 attempts at reading this textbook...
— May 10, 2025 01:39AM
C. Ines
is on page 424 of 738
Why do I just now understand the connection between expectation maximization and k-means?
— Sep 21, 2021 01:52PM
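The connection asked about above can be made concrete: k-means falls out of EM for a Gaussian mixture when every component shares the spherical covariance εI and ε → 0, so the E-step's soft responsibilities harden into nearest-centroid assignments and the M-step's weighted means become centroid updates. A minimal sketch (equal mixing weights assumed; the function names and data are illustrative, not from the book):

```python
# Sketch: k-means as a limiting case of EM for a Gaussian mixture.
# With shared spherical covariances eps * I, the E-step responsibilities
# harden into k-means-style nearest-centroid assignments as eps -> 0.
import numpy as np

def e_step(X, centers, eps):
    # Responsibilities for a GMM with equal weights and covariance eps*I.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)  # squared distances
    log_r = -d2 / (2 * eps)
    log_r -= log_r.max(axis=1, keepdims=True)   # stabilise the softmax
    r = np.exp(log_r)
    return r / r.sum(axis=1, keepdims=True)

def m_step(X, r):
    # Responsibility-weighted means; for hard (0/1) responsibilities this
    # is exactly the k-means centroid update.
    return (r.T @ X) / r.sum(axis=0)[:, None]

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-3, 0.5, (50, 2)), rng.normal(3, 0.5, (50, 2))])
centers = np.array([[-1.0, 0.0], [1.0, 0.0]])
for _ in range(10):
    r = e_step(X, centers, eps=1e-3)   # small eps => near-hard assignments
    centers = m_step(X, r)
```

With `eps=1e-3` the responsibilities come out essentially 0 or 1 and the loop behaves like Lloyd's algorithm; raising `eps` recovers ordinary soft EM.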
Trang
is on page 30 of 738
I've been meaning to read this all-time classic for a while. Need to finally get this done cover-to-cover.
— Nov 14, 2017 12:39PM
Abdelkader Madoui
is on page 71 of 738
More interesting than I thought. Bishop gives a great introduction to probability, decision, and information theory, elaborated nicely to lay the groundwork for the rest of the book.
— Aug 16, 2017 05:17AM
Evan
is starting
Textbook for information theory and machine learning courses.
— Aug 26, 2016 09:05AM
Alexis
is on page 127 of 738
Not gonna lie, I gave up on chapter 2 and just read the slides here instead: http://lear.inrialpes.fr/~jegou/bisho... They have the CliffsNotes version and the useful graphs from the textbook. Good overview of parametric vs. nonparametric methods for density estimation, and an intro to common probability distribution functions. My favorite concepts are Gaussian mixture models and kernel density estimators.
— Jul 15, 2016 04:30AM
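For the kernel density estimators mentioned above, the idea fits in a few lines: the estimate is an average of kernel bumps, one centred on each data point. A minimal sketch with a Gaussian kernel (the bandwidth value and names are illustrative choices, not from the slides):

```python
# Sketch of a Gaussian-kernel density estimator:
# p(x) = (1/N) * sum_n Normal(x | x_n, h^2), evaluated on a grid.
import numpy as np

def kde(x, data, h):
    # One standardised distance per (grid point, data point) pair.
    z = (x[:, None] - data[None, :]) / h
    return np.exp(-0.5 * z ** 2).sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(0)
data = rng.normal(0.0, 1.0, 1000)   # samples from a standard normal
grid = np.linspace(-4, 4, 81)       # evaluation grid, spacing 0.1
p = kde(grid, data, h=0.3)          # smoothed density estimate
```

The bandwidth `h` plays the same smoothing role as the number of components in a mixture model: too small and the estimate is spiky, too large and structure is washed out.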
Alexis
is on page 78 of 738
2.3 is dense! I'm not sure I want to study the entire derivation of the Gaussian distribution. The first 8 minutes of this video on the Gaussian distribution seem to cover the important properties: https://www.youtube.com/watch?v=50Vgw... Will attempt to read it after 2.4.
— Jul 09, 2016 05:25AM
Manny
is on page 55 of 738
The little biographical vignettes are very nice. I'd never heard about the feud between Jacob Bernoulli and his brother...
— Aug 22, 2010 12:59PM
Manny
is on page 55 of 738
It is said that von Neumann recommended to Shannon that he use the word entropy, not only because of its similarity to the quantity used in physics but also because "nobody knows what entropy really is, so in any discussion you will always have an advantage."
— Aug 22, 2010 03:21AM
Manny
is on page 38 of 738
Nearly all the mass of a high-dimensional sphere is concentrated in a thin shell near the surface. So be careful about analogising from two dimensions to many.
— Aug 19, 2010 01:56PM
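The shell claim above follows from a volume ratio: the fraction of a unit D-ball's volume lying in an outer shell of width δ is 1 − (1 − δ)^D, which tends to 1 as D grows. A quick sketch (function name is illustrative):

```python
# Sketch of the thin-shell effect: volume of a ball of radius r scales as
# r**D, so the outer shell of width delta holds a 1 - (1 - delta)**D
# fraction of the total volume -- nearly all of it in high dimensions.
def shell_fraction(D, delta=0.05):
    return 1 - (1 - delta) ** D

low = shell_fraction(2)     # in 2-D, a 5% shell holds under 10% of the area
high = shell_fraction(500)  # in 500-D, it holds essentially everything
```

This is exactly why 2-D intuition misleads: the same 5% shell goes from nearly empty to holding all the mass as the dimension rises.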
Manny
is on page 30 of 738
If this book were a spaceship, we'd now be out of the solar system and about to engage the hyperdrive. Fasten your seat belts.
— Mar 18, 2010 01:11AM
Manny
is on page 20 of 738
My knowledge of statistics, I will claim, is not a hazy 'un,
And I know why you should and why you shouldn't be a Bayesian.
— The Major-General
— Mar 17, 2010 05:34AM
Manny
is on page 12 of 738
Starts off well, with a brilliant example illustrating the key concepts of data sparseness and overfitting.
— Mar 15, 2010 05:30AM
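Assuming the example meant above is the book's polynomial curve-fitting demo, the effect is easy to reproduce: with ten noisy samples of sin(2πx), a degree-9 polynomial drives training error to essentially zero while test error blows up. A rough sketch (the seed and noise level are arbitrary choices, not the book's):

```python
# Sketch of overfitting with sparse data: a degree-9 polynomial can
# interpolate 10 noisy points exactly (zero training error) yet fit the
# underlying sin(2*pi*x) curve worse than a modest degree-3 fit.
import numpy as np

rng = np.random.default_rng(1)
x_train = np.linspace(0, 1, 10)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.2, 10)
x_test = np.linspace(0, 1, 100)
y_test = np.sin(2 * np.pi * x_test)          # noise-free target curve

def rms_error(degree):
    coeffs = np.polyfit(x_train, y_train, degree)   # least-squares fit
    err = lambda x, y: np.sqrt(np.mean((np.polyval(coeffs, x) - y) ** 2))
    return err(x_train, y_train), err(x_test, y_test)

train3, test3 = rms_error(3)   # reasonable fit to the underlying curve
train9, test9 = rms_error(9)   # interpolates the noise: overfitting
```

Adding more data points (or a regularisation penalty on the coefficients) tames the degree-9 fit, which is where the book's discussion heads next.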