Models of the Mind: How Physics, Engineering and Mathematics Have Shaped Our Understanding of the Brain

The brain is made up of 85 billion neurons, which are connected by over 100 trillion synapses. For over a century, a diverse array of researchers have been trying to find a language that can be used to capture the essence of what these neurons do and how they communicate – and how those communications create thoughts, perceptions and actions. The language they were looking for was mathematics, and we would not be able to understand the brain as we do today without it.

In Models of the Mind, author and computational neuroscientist Grace Lindsay explains how mathematical models have allowed scientists to understand and describe many of the brain's processes, including decision-making, sensory processing, quantifying memory, and more. She introduces readers to the most important concepts in modern neuroscience, and highlights the tensions that arise when bringing the abstract world of mathematical modelling into contact with the messy details of biology.

Each chapter focuses on mathematical tools that have been applied in a particular area of neuroscience, progressing from the simplest building block of the brain – the individual neuron – through to circuits of interacting neurons, whole brain areas and even the behaviours that brains command. Throughout Grace will look at the history of the field, starting with experiments done on neurons in frog legs at the turn of the twentieth century and building to the large models of artificial neural networks that form the basis of modern artificial intelligence. She demonstrates the value of describing the machinery of neuroscience using the elegant language of mathematics, and reveals in full the remarkable fruits of this endeavour.

400 pages, Hardcover

Published May 4, 2021


About the author

Grace Lindsay

1 book · 36 followers
Grace Lindsay is a computational neuroscientist currently living in London. She completed her PhD at the Center for Theoretical Neuroscience at Columbia University, where her research focused on building mathematical models of how the brain controls its own sensory processing. Before that, she earned a Bachelor's degree in Neuroscience from the University of Pittsburgh and received a research fellowship to study at the Bernstein Center for Computational Neuroscience in Freiburg, Germany. She was awarded a Google PhD Fellowship in Computational Neuroscience in 2016, and has spoken at several international conferences. She is also the producer and co-host of Unsupervised Thinking, a podcast covering topics in neuroscience and artificial intelligence.

Ratings & Reviews


Community Reviews

5 stars: 164 (55%)
4 stars: 93 (31%)
3 stars: 35 (11%)
2 stars: 3 (1%)
1 star: 0 (0%)
Displaying 1 - 30 of 43 reviews
Brian Clegg
Author 175 books · 2,476 followers
April 5, 2021
This is a remarkable book. When Ernest Rutherford made his infamous remark about science being either physics or stamp collecting, it was, of course, an exaggeration. Yet it was based on a point - biology in particular was primarily about collecting information on what happened rather than explaining at a fundamental level why it happened. This book shows how biologists, in collaboration with physicists, mathematicians and computer scientists, have moved on the science of the brain to model some of its underlying mechanisms.

Grace Lindsay is careful to emphasise the very real difference between physical and biological problems. Most systems studied by physics are a lot simpler than biological systems, making it easier to make effective mathematical and computational models. But despite this, huge progress has been made drawing on tools and techniques developed for physics and computing to get a better picture of the mechanisms of the brain.

In the book we see this from two directions - it's primarily about modelling the brain's processes and structures, but we also see how the field of artificial intelligence has learned a lot from what we know of the way the brain works (and doesn't work very well) in developing the latest generation of AI systems. Lindsay shows how we have come to get a better understanding of the mechanisms of neurons, memory formation, sight, decision making and more, looking at both the detailed level of neurons and larger scale structure. Many of the chapters take us on entertaining diversions related to the history of the development of these ideas. When I mentioned the book to someone who works in neurology, the response was that most computational neurology books they'd come across contained a barrage of equations - Lindsay does this with hardly an equation in the text (the only one I remember is Bayes' theorem), though there are a few in an appendix for those who like their content a bit crunchier.

The only real criticism I have is that it could have done with some paring back. The book felt a bit too long, too many people were name-checked, and too many bits of brain functionality were covered. I also wouldn't have finished the book with a 'grand unified theories of the brain' chapter, which had too much of an overview feel and threw in concepts like consciousness that require whole books in their own right - it would have been better if the last chapter had pulled things together and looked forward to the next developments. However, this remains an excellent introduction to an area that few of us probably know anything about, and all the more fascinating because of that.
Ali
17 reviews · 2 followers
June 25, 2022
A story of the collaboration between neuroscience and artificial intelligence.
I concluded that the brain has many specialized modules, shaped by natural selection specifically for our survival in our environment.
Maybe there is no general intelligence algorithm; instead there is a very specialized structure built for survival, with intelligence emerging as a byproduct of all those modules.
When you zoom in on details of the brain's mechanisms, you see that there are specific structures for different tasks like motion detection, categorization, movement coordination, several layers of visual processing, and specific neurons for audio source tracing.
Current neural network architectures are very simple compared to the brain's messiness, and extracting a general intelligence model from it seems impossible.
Nguyễn Nghị
5 reviews · 3 followers
August 3, 2021
The author provides high-level descriptions of almost every family of models one would find in an introductory course in computational neuroscience, and she does so in such a clear and clean writing style. Accompanying each model is a vivid historical account of how it came about and, in a broader sense, how the enterprise of theoretical modeling has changed through time in the way it is perceived and practiced. Highly recommend this to anyone curious about the inquiries and methods of computational neuroscience or the neural sciences in general.
Lourens
69 reviews · 1 follower
December 28, 2021
Accessible and easy to read introduction to the applications of a wide array of mathematical models in neuroscience. I had a relatively easy time with the mathematical concepts as most of them I've seen before in some form, but Lindsay does a great job giving intuitive explanations anyway.

Especially surprised to learn that the convolutional neural network (CNN), a dominant method in computer image recognition, was inspired in concept by the workings of our visual cortex. Whenever I heard people say that neural networks are modelled after how the brain works, I always assumed it was only the very high-level idea. The concept of the CNN was later modified and scaled up to be more useful for many computer applications, and I learned from this book that these modified networks came to resemble the visual cortex even more closely. That is a fascinating discovery.

A wide array of different mathematical tools are covered in this book. Personally I enjoyed the mentions of graph theory/network science and information theory.

For those curious about how intuition about the brain's working can be translated into exact, quantifiable terms, this is a great book. There is much ambiguity around what goes on in our heads, and math definitely does not get rid of all of it (yet). But it is reassuring to see the progress that can be made this way.
45 reviews · 3 followers
May 11, 2021
Though it's elegantly written and impressive in its breadth, this book didn't quite work for me. It attempts to do many things at once and ends up doing all of them passably. In a nutshell, it is a history of the various ways in which mathematics is used to model different aspects of brain physiology and functionality. If this sounds extremely general, it's because it is.

To structure the discussion, the author associates each neuroscience (or psychology) problem with one mathematical (or physical) idea. Thus spike production is described by differential equations. Computation and memory formation are discussed using algebra. Eyesight brings convolutions. With movement we get some matrix theory. Neural coding (of course) provides an opportunity to introduce information theory. Mapping structure to function is paired with graph theory. The discussion of rational decision making introduces Bayesian probabilities (kudos here to the author for correctly attributing "Bayes' rule" to its actual author, Laplace). A final chapter is dedicated to attempts at representing the entire brain using an analog of (Gibbs) free energy.

It's a neat concept, but only leaves room for the most basic treatment of each topic, and so many interesting developments are just hinted at, or left out altogether.

The price of this flitting about is that no single idea is discussed in any depth, and the author offers no personal view of how all these approaches might add up to a science. In the last chapter she appears to timidly hitch her wagon to Friston's "free energy" theories, but the problem here is that no one seems to see a clear path to practical, useful results using this approach.
January 4, 2023
A good introduction to computational neuroscience. With few equations for such a math-heavy subject, it may be superficial for people with some background in the field, but it is clearly written and great for beginners who want to understand the concepts and intuition behind the various models.
93 reviews · 2 followers
July 8, 2021
A popular science guide to the role of physics, maths and engineering in our understanding of the brain.

I work in this field, so it's kind of weird seeing things you've read about as research papers appearing in a popular science guide - like watching the slow morphing of cutting-edge research that proves to be correct into established knowledge as it happens.

It's a good book, clear, and I definitely learnt stuff from it. It's the perfect book to explain to outsiders what the field does, and it goes into some detail about the technical ideas. A lot of the technical ideas are mathematical and there are very few equations in the main body of the book, so there are long paragraphs that say in words what is most easily expressed in maths, but I guess that is how it must be; hopefully it will serve as an appetiser for some people to learn even more. I know I would have been super intrigued by this book as a physics undergrad.
Author 1 book · 17 followers
June 22, 2021
Mathematically modelling the brain (not the mind), the book does exactly what it promises. It provides around a dozen different mathematical ways of looking at specific brain activities and it links each example to the (neuro) biology of the processes involved, as well as descriptions of the historical scientists who made the discoveries.

The book combines scientific insights with detailed mathematical information, although it keeps the formulas out of the text in an appendix at the end. The book also provides asides about the historical figures it mentions. For example, we hear that Claude Shannon at Bell Laboratories used to do his thinking riding a bike around the lab whilst juggling (47%). Depending on their preferences, readers will find examples like this either enlivening or distracting.

Personally I enjoyed the detailed scientific descriptions and I particularly appreciated how the author showed the developments taking place, step by step.

For example, in chapter 3 we hear about Frank Rosenblatt's Perceptron, a 1950s machine funded by the US Navy. It was designed to mimic the binary nature of how brain neurons fire on and off, to give a truth/falsity model. But a single-layer perceptron cannot compute functions like exclusive-or, where the output should be true only when exactly one of the two inputs is on, which was a major limitation. So a multi-layered neural network approach, later trained using backpropagation, was developed to resolve the problem.
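To make the limitation concrete, here is a minimal sketch in Python (not from the book; the units, weights and thresholds are invented for illustration, and the second-layer weights are set by hand rather than learned by backpropagation):

```python
import numpy as np

def threshold_unit(x, w, b):
    """A McCulloch-Pitts-style unit: fires (1) if the weighted sum exceeds the threshold."""
    return int(np.dot(w, x) + b > 0)

# XOR: true exactly when the two inputs differ. No single threshold unit
# (i.e. no single line through the input plane) can separate these cases.
inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]

# A two-layer solution: hidden unit h1 detects "at least one input on",
# hidden unit h2 detects "both inputs on"; the output fires for h1 AND NOT h2.
def xor_two_layer(x):
    h1 = threshold_unit(x, w=[1, 1], b=-0.5)   # OR
    h2 = threshold_unit(x, w=[1, 1], b=-1.5)   # AND
    return threshold_unit([h1, h2], w=[1, -2], b=-0.5)

for x in inputs:
    print(x, "->", xor_two_layer(x))   # prints 0, 1, 1, 0
```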

This solution worked, but it was no longer ‘modelling’ what neuroscientists thought was occurring in the brain. It was suggesting an alternative way in which the brain could work, and so it led to new investigations to see whether the brain was in fact working in that way.

The book is 400 pages, so it contains a lot of information. Even so I was occasionally left wanting more information about some threads in chapters.

For example, chapter 10 focuses upon how the brain can be modelled using probabilistic logic. One of the problems with this kind of logic is that the outputs depend very heavily upon what is assumed as background normality (i.e. the prior probability). At the end of the chapter the author asks whether these background probabilities are inherited or learned. She cites an experiment in which chickens were exposed, from hatching, to light sources from below (i.e. contrary to the usual overhead sunlight). This seems to have surprised the chickens, suggesting elements of inherited expectations.

Similar experiments with human babies have shown surprise when toys defy laws of gravity. I was interested to know more about the extent to which basic knowledge could be inherited, rather than learned, but the chapter closed and we were onto the next topic.

Overall this is a very detailed and informative book which readers interested in Maths and Neurobiology will particularly enjoy.

This is an honest review of an Advance Review Copy of the text.
Gustavo Juantorena
24 reviews · 2 followers
May 9, 2021
When it comes to giving a rating, I always find it necessary to compare a book with others on a similar topic, and in this case it is clear that very little else exists. Computational neuroscience is a sub-discipline of neuroscience that keeps on growing, yet it does not enjoy the popularity of others. I think Grace Lindsay's work of popularisation is very good, managing to explain with sufficient simplicity topics that are often dry and rarely covered in the more classic "neuro" popular-science literature. Recommended for anyone interested in how the nervous system works and in artificial intelligence.
Dragana
8 reviews
August 25, 2021
Every page was a pleasure! An amazing synthesis of the current state of the field and how we got here.

The author draws well elaborated connections across methods and theories we use today in neuroscience, interweaved with the stories of the scientific discoveries/steps and the researchers behind them.
Mishehu
482 reviews · 24 followers
January 15, 2022
First-rate work of popular science: packed with fascinating detail and hugely engaging. This is an author to watch.
Ben Zimmerman
129 reviews · 5 followers
September 2, 2022
In Models of the Mind, Grace Lindsay maps out the major mathematical models that have driven neuroscience through a narrative history. Although each chapter exists almost as a standalone narrative essay, common themes include the bi-directional impact between the fields of neuroscience and computer science, the importance of understanding the history of an idea while building on top of it, and the fruitful impact of borrowing tools and ideas from other disciplines.
I thought it was also instructive to pay attention to which components of "modeling the mind" were not carried through the chapters - for instance, neurons are basically left out of the chapters on modeling rational behavior. This was interesting to me, and I wondered whether, once we know enough about the brain, we would model complex behavioral outcomes with neurons. The book also stimulates lots of thinking about when mathematical modeling is appropriate and when it isn't necessary, and where the transition lies between modeling components of the brain and modeling components of the mind. What does that difference mean?

Each chapter focuses on a particular mathematical tool that has been applied to some area of neuroscience. The first chapter introduces the utility of using mathematical models to think clearly about problems in neuroscience by offering precise definitions and a rigorous system for manipulating abstract variables that can lead to insight. She also introduces the main objection to using mathematics in biology, which is that the complexity of biology makes it difficult to meaningfully utilize mathematical models. She outlines the pitfalls of applying mathematics to the real world: oversimplification and an appeal to aesthetics, which are common in pure mathematics and physics.

Chapter 2 discusses the history leading up to the Hodgkin and Huxley model for action potentials, which goes into the history of modeling electricity in circuits, and then using those principles to model the electrical properties of neurons.
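As a taste of what circuit-based neuron modelling looks like, here is a minimal leaky integrate-and-fire neuron in Python - a far simpler cousin of the Hodgkin-Huxley model, with made-up parameter values, but built from the same idea of the membrane as a capacitor in parallel with a conductance:

```python
# Leaky integrate-and-fire: a toy stand-in for the circuit idea behind
# Hodgkin-Huxley. All parameter values are illustrative, not fitted to a real cell.
C = 1.0         # membrane capacitance (arbitrary units)
g_leak = 0.1    # leak conductance
V_rest = -65.0  # resting potential (mV)
V_thresh = -50.0
V_reset = -65.0
dt = 0.1        # time step (ms)

V = V_rest
spike_times = []
for step in range(5000):
    t = step * dt
    I_ext = 2.0 if 100 <= t <= 400 else 0.0   # injected current pulse
    # Circuit equation: C dV/dt = -g_leak * (V - V_rest) + I_ext
    dV = (-g_leak * (V - V_rest) + I_ext) / C
    V += dV * dt
    if V >= V_thresh:   # crude stand-in for the spike that HH reproduces in detail
        spike_times.append(t)
        V = V_reset

print(f"{len(spike_times)} spikes during the current pulse")
```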

Chapter 3 discusses the early links between thinking about the brain and computation. I particularly loved this chapter because it emphasizes the direction of the history of thought in a unique way. In current times, it's common to think about the computer as a metaphor for the brain, and this metaphor is often criticized. But early on, the whole idea of the modern computer came from trying to emulate what the brain did. This chapter goes through the history of how McCulloch and Pitts worked together to imagine ways that neurons might instantiate logic, and how John von Neumann built off of that work to imagine building a computer. The chapter also discusses Frank Rosenblatt's Perceptron, which represents one of the first attempts at building artificial intelligence into a computer, and the first attempt at putting McCulloch-Pitts networks to the test. Interestingly, this first instance made clear that learning in an artificial neural network would not define clear logic gates like McCulloch and Pitts imagined.

Chapter 4 was about using Hopfield networks and attractors to model memory. The chapter begins by discussing concepts of the engram, or memory trace, in the brain and the debates between people like Hebb and Lashley about where and how memories are stored. Hopfield, a physicist, contributed a mathematical model of neurons that could implement content-addressable memory - where a whole memory can be retrieved from just a part of it. It is a type of recurrent network that has certain states, called attractors, that other patterns of activity will naturally evolve towards. The chapter goes on towards a deeper discussion of memory, the evolution of Hopfield networks, and direct experimental evidence for certain kinds of networks.
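A minimal sketch of the content-addressable-memory idea in Python (not the book's code; the number of units, patterns and flipped bits are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

# Store a few random +/-1 patterns via the Hebbian rule, then recover one
# from a corrupted cue.
n_units, n_patterns = 100, 3
patterns = rng.choice([-1, 1], size=(n_patterns, n_units))

# Hebbian weights: units co-active in a pattern get positive connections.
W = sum(np.outer(p, p) for p in patterns) / n_units
np.fill_diagonal(W, 0)

# Corrupt the first pattern by flipping 20 of its units.
cue = patterns[0].copy()
flip = rng.choice(n_units, size=20, replace=False)
cue[flip] *= -1

# Asynchronous updates: each unit takes the sign of its weighted input,
# so the state falls into the nearest attractor (the stored pattern).
state = cue.copy()
for _ in range(5):
    for i in rng.permutation(n_units):
        state[i] = 1 if W[i] @ state >= 0 else -1

print("overlap with stored pattern:", np.mean(state == patterns[0]))  # typically 1.0
```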

Chapter 5 was about oscillations through excitation and inhibition. Oscillations work as noise reducers to synchronize neuronal firing. But oscillations require reciprocal inhibition, so this chapter goes a bit into the history of discovering inhibitory neurons. The chapter also discusses the need for balance between excitation and inhibition, and investigations of the limits and behaviors of large circuits of excitatory and inhibitory cells using computational models built on the Hodgkin-Huxley equations. The chapter also discusses the history of chaos theory, and how concepts of chaos theory have helped to understand how the brain maintains balance through chaos.

Chapter 6 discusses the application of convolution to neural networks to solve computer vision problems. I also really liked learning from this chapter how direct the connections between computer science and neuroscience were. The first convolutional neural networks were basically made to replicate Hubel and Wiesel's findings of the hierarchical structure of early visual pathways - to the extent that Kunihiko Fukushima, who invented the first convolutional neural networks, expressed frustration that Hubel and Wiesel had not published more information about the architecture further into the brain. This chapter also goes into the history of using convolutions for template matching in general, which gave some insight into where the idea came from to use convolution in neural networks, which I've personally always wondered about. This architecture was fairly disregarded for a while for problems in computer vision until quite recently, when it was found that once the convolutional neural network was large enough, it worked really well. That happened around 2012, and I was in a neuroengineering training program in my PhD at the time, and I remember how exciting it was that year, and the subsequent years when computer vision started outperforming human vision for classification tasks for the first time. In general, Lindsay does a great job making you feel like part of a great chain of human thought, starting hundreds of years ago and leading up right to today. I think especially as an academic, I live for this feeling and love books that produce such a vivid connection to past thinkers.
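To make the template-matching idea concrete, here is a minimal sketch in Python (not from the book; the toy image and filter are invented, and strictly speaking CNN layers compute cross-correlation, i.e. convolution without flipping the kernel, which is what this loop does):

```python
import numpy as np

# Slide a small oriented-edge filter over an image and record how strongly
# each patch matches it - the operation a convolutional layer repeats with
# many learned filters.
image = np.zeros((8, 8))
image[:, 4] = 1.0            # a single vertical edge

vertical_edge = np.array([[-1, 1],
                          [-1, 1]])  # responds to dark-to-bright transitions

def convolve2d(img, kernel):
    kh, kw = kernel.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i+kh, j:j+kw] * kernel)
    return out

response = convolve2d(image, vertical_edge)
print(response[0])   # peaks at the column where the edge sits
```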

Chapter 7 is about the application of information theory to brains and really about attempts to quantify information, since that is the thing that the brain processes. This chapter had a nice discussion on how far you could actually use mathematical principles to learn about biological systems, since biology has a lot of constraints that may make it less than an ideal information processor.
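A minimal illustration of the kind of quantity involved, in Python (the response probabilities are invented; Shannon entropy gives an upper bound, in bits, on the information a response distribution could carry):

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a discrete probability distribution."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

uniform = [0.25, 0.25, 0.25, 0.25]   # four equally likely spike counts
skewed  = [0.85, 0.05, 0.05, 0.05]   # one response dominates

print(entropy(uniform))  # 2.0 bits: the most this 4-symbol code could convey
print(entropy(skewed))   # ~0.85 bits: a biased code carries less
```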

Chapter 8 was about the motor cortex and debates about what the motor cortex actually does at its most fundamental level - does it send signals to particular muscles or does it fundamentally encode whole patterns of movements like reaching. The chapter goes into a history of research on the motor cortex and then the application of kinetics to modeling how neural firing translates into force on joints, and later kinematics to model directions of movements. The chapter also talks about studying populations of neurons and then using dimensionality reduction to understand what the population activity encodes.
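Here is a minimal sketch, in Python, of the dimensionality-reduction step mentioned above (all data are synthetic: 50 simulated neurons driven by two shared latent signals, with PCA via SVD recovering the low-dimensional structure):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated population activity: many neurons, few underlying signals.
n_neurons, n_timepoints, n_latents = 50, 500, 2
latents = rng.normal(size=(n_timepoints, n_latents))
mixing = rng.normal(size=(n_latents, n_neurons))
activity = latents @ mixing + 0.1 * rng.normal(size=(n_timepoints, n_neurons))

# PCA via SVD of the mean-centered data.
centered = activity - activity.mean(axis=0)
_, singular_values, _ = np.linalg.svd(centered, full_matrices=False)
variance_explained = singular_values**2 / np.sum(singular_values**2)

print(np.round(variance_explained[:5], 3))  # the first 2 components dominate
```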

Chapter 9 discusses applications of graph theory to structural networks by thinking about brain regions as nodes in a network and their connections as edges. These graph theoretical metrics have been used especially in humans at larger scales as biomarkers of certain psychiatric or medical disorders.
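A minimal sketch of the nodes-and-edges view in Python using networkx (the toy graph of region names below is invented and far smaller than any real connectome):

```python
import networkx as nx

# Brain regions as nodes, connections as edges.
G = nx.Graph()
G.add_edges_from([
    ("V1", "V2"), ("V2", "V4"), ("V4", "IT"),       # a visual-like chain
    ("M1", "PMC"), ("PMC", "SMA"), ("M1", "SMA"),    # a motor-like triangle
    ("IT", "PFC"), ("PFC", "PMC"),                   # long-range links
])

# Typical graph-theoretic summaries used in connectomics studies.
print("degree of PFC:", G.degree["PFC"])
print("average clustering:", round(nx.average_clustering(G), 3))
print("average shortest path:", round(nx.average_shortest_path_length(G), 3))
```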

Chapter 10 discusses rational decision making through the lens of probability and Bayes' rule. In this chapter, I really liked the discussion of the debate between using Bayesian probability or not, mainly because using Bayesian probability requires that you have some explicit knowledge about what is called a "prior", and there are lots of arguments in science about how you get your knowledge about that prior for lots of applications.
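A minimal worked example in Python of why the prior matters (the likelihoods and priors are invented numbers, just to show that the same ambiguous observation yields different posteriors under different priors):

```python
# Bayes' rule: posterior = prior * likelihood / evidence.
def posterior(prior, likelihood_if_true, likelihood_if_false):
    evidence = prior * likelihood_if_true + (1 - prior) * likelihood_if_false
    return prior * likelihood_if_true / evidence

# An ambiguous sensory cue that is twice as likely if a predator is present.
likelihood_present, likelihood_absent = 0.6, 0.3

for prior in (0.01, 0.2, 0.5):   # how common the observer believes predators are
    print(prior, "->", round(posterior(prior, likelihood_present, likelihood_absent), 3))
# 0.01 -> 0.02, 0.2 -> 0.333, 0.5 -> 0.667
```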

Chapter 11 discusses reinforcement learning and computational strategies for sequential learning. This chapter goes into the history of reinforcement learning in behavioral psychology and takes us through the modeling evolution. Early behaviorists were mostly concerned with behavioral outcomes and rewards, but that framing didn't deal well with steps in sequential planning that aren't themselves rewarded. This led to the concept of the value function of a particular state, which is defined recursively as the reward plus the value of the next state. This kind of modeling allows reinforcement learning to be good at learning longer tasks (like winning a level of a video game), where the reward doesn't come until after many actions.
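Written out, that recursive definition is the Bellman equation in its simplest form (the discount factor gamma is standard in the formal treatment, though not mentioned above):

```latex
V(s_t) = r_t + \gamma\, V(s_{t+1})
\quad\Longrightarrow\quad
V(s_t) = \sum_{k \ge 0} \gamma^{k}\, r_{t+k}
```

So an action taken many steps before the reward can still be credited with part of its value.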

Chapter 12 is the final chapter and glosses over some of the "grand unified theories of the brain," many of which I've encountered before in other pop science books. This includes Karl Friston's free energy principle - the idea that the brain tries to minimize the differences between its predictions about the world and the actual information it receives - which is not easily falsifiable as a theory, but may be a useful guide for thinking. The chapter also touches on the Thousand Brains Theory by Jeff Hawkins and Tononi's Integrated Information Theory for explaining consciousness.

Overall, this was one of the best neuroscience pop science books that I've read. Scientifically, it is mature and diplomatic, and I think it did a really fair job of describing important debates. This is refreshing to me when it feels like the expectation is more and more to take sides and commit to a camp. The book is fascinating and well-written, which is even more impressive since it was written by a relatively junior neuroscientist rather than a senior scientist or a professional writer. I read it as part of the neuroscience book club that I co-lead, and about half of the participants felt like there was too much math and experimental detail and the other half felt like there wasn't enough, so I think the balance of depth was just about perfect. I believe that Grace Lindsay successfully landed her first tenure-track faculty position the same year that I fumbled through my own unanimous rejections, and it was both intimidating and awe-inspiring to think that she represents the level that strong academic institutions are hiring new faculty at. If I had any criticism, it is maybe that the chapters feel fairly separate and not strung together solidly enough. This led me to feel like I wanted to learn a lot more about all of the sub-topics introduced in the chapters, and it sort of leaves you with a sense of wanting instead of deep satisfaction. But being left wishing a book was longer and deeper is far from the worst that a book can strive for.
5 reviews
December 11, 2022
Absolutely wonderful. I feel like I learned about some cool-ass shit. The book was so easy to follow considering how complicated the subject matter can get. It really highlights how far we've come and how far we still need to go in a field I think the layperson knows very little about.
5 reviews
August 15, 2021
This is one of the most enjoyable books I've read yet. A very nicely written overview of computational or theoretical neuroscience. As an outsider fan of the field, I felt the topics were very well presented. I particularly enjoyed the bits of human history behind the science - not just for adding personality to the development of the field and making it more relatable, but because there's also a lot to be learned from these stories.

One thing I would have liked is if the author had woven in slightly longer reviews of the relevant previous chapters as the book progressed. There is a broad array of topics and many, many names, which is inevitable when you credit and cite people fairly. I read slowly and my memory is not great, so I kept having to either go back to previous chapters or just move on every once in a while.

Finally, I appreciated ending the book with the topic of unifying theories and touching on consciousness.
Asim
1 review · 4 followers
March 27, 2021
If you are part of an underrepresented computational/theoretical group in a Neuroscience institute, forget giving talks to get people interested, simply buy a couple of copies of this book and hide them in random places for people to find. I am pretty sure plenty of students will 'magically' start walking into your lab and asking questions!

I believe this book fills a very important gap in mainstream (read: general public friendly) neuroscience books. It nicely surveys the neuroscientific landscape from the perspective of quantitative approaches (Physics, Maths, Engineering) and personalities involved who shaped it. The book also contains an appendix for the mentioned mathematics for the more inclined reader. As an early career scientist who is dipping his feet into more theoretical approaches, I certainly found this volume both informative and inspirational. Definitely would recommend!
3 reviews
March 13, 2021
Well written narrative that brings significant individuals to life and clearly explains their contributions, based on a solid foundation of scientific research. Would appeal to those with a science background and the general reader.
3 reviews · 3 followers
September 25, 2022
A friendly introduction to computational neuroscience for a popular audience. The book illustrates how mathematical and computational models are used in neuroscience from the neuron, the "microscopic" scales of neuroscience, to neural networks, psychology and unified theories of the brain such as the free energy principle or integrated information theory.

We learn how a neuron works, how neurons communicate with each other, and what kind of mathematics is used to understand them. Each chapter introduces an area of neuroscience along with some explanation of the relevant mathematical models, all in plain English. For example, we learn some of the history of neural networks and their inspiration from neuroscience and Boolean logic, followed by the first one-layer perceptron and its revival with deep learning.

Along with the science, Grace added some spicy historical anecdotes about some key influential figures. We meet people like Hodgkin and Huxley, Hubel and Wiesel, Rosenblatt, Minsky and Karl Friston.

The book gives decent explanations of neuroscience in plain English, although it would have benefited from more illustrations in the form of figures at times. I particularly liked that it shows that a wide range of maths and computer science is not just useful in theoretical physics but is also very relevant to understanding the many scales involved in neuroscience. Still, the author stays humble about the possibility of a grand unified theory of the brain and acknowledges the complexity of this beautiful system.
Thoriq Fauzan
18 reviews
December 26, 2022
I can take three things from this book. On one side, there's a sense of pride and wonder that humanity has figured out SO MUCH of the underlying principles of the mechanics of our world - from predominantly superstitious societies to the switch, not so long ago, to the rationality of the scientific method; from the belief that the mind is an elusive, "imponderable" essence, as described by nineteenth-century philosophy, to us, a mere 200 years later, actively advancing the field of neuroscience and even drawing inspiration from it for the next generation of AI. We've come a long way in a relatively short time and will certainly continue doing so.

On the other hand, there's actually very little we know about the brain compared to what there is to know. How vision, or movement, or memory, or anything else we take for granted in daily life actually works is still under active research and not yet completely understood. Will we ever get to comprehend the mechanics of our own brain? We can only keep trying; as they say, our brain is probably the most complicated object in the universe.

Third, even if we never gain a full comprehension of our own brain (can a computer understand how its own CPU works?), approaching such a goal will still reap great benefits. The field of neuroscience has borrowed ideas from mathematics, physics, and many other disciplines, and has certainly contributed back. Even now, AI is ubiquitous, and it would never have come to be without past scientists' curiosity about their own brains.
Yufeng
11 reviews
February 26, 2022
Love it! A great book on the roles of mathematical models in the development of neuroscience. Here are the things I love about this book:
1. Very clear explanations! Great insights and intuitions.
(I'm a physics student who knows nothing about neuroscience; there's no problem following and I learned a great deal from this book.)
2. Objective and convincing, compared to some of the other pop science books I've read in the past.
3. Apparently covers all the important models/ideas currently in theoretical neuroscience. The book focuses on 11 topics and is developed in a very clear and logical structure.
4. Nice historical account; presents clearly the development of ideas and progress on each topic.

My favorite parts of the book include the discussion of neural code and information theory, understanding neural structure using graph theory, and the Bellman equation. I also really like the discussions of "priors" in the chapter about Bayesian models on rational decisions; I've been quite confused about the roles of priors and gained some new insight by reading Grace's explanations.

I don't know the field enough to offer any criticism, but it'd be great if there could be some equations in the text rather than being all in the appendix, or at least some reference in the text directing the reader to the appendix.
Chris Boutté
Author 7 books · 134 followers
September 14, 2021
I’ve been interested in neuroscience for a while, but sometimes, books on the topic aren’t written for the average person. I’ve heard a lot of talk about this book from Grace Lindsay, so I decided to check it out even though I’m not much of a math guy, which scared me even more. Fortunately, Grace did an incredible job of making this book accessible to the average reader like myself. Throughout the book, the author not only breaks down complex topics, but you also learn the history of neuroscience as well as how mathematics and models have helped us understand the brain. There were still a few parts that I didn’t quite grasp, but for the most part, I was able to keep up, so I think many people would enjoy this book if they’re interested in the brain. Some of my favorite chapters were in the realm of topics I’m more familiar with such as reward-based learning, decision making, and some others. So, if you like this topic at all or are curious to get into it, I’m sure you’ll enjoy this book.
Lady
799 reviews · 6 followers
August 24, 2021
Models of the Mind by Grace Lindsay, 4/5 stars
This book will certainly help you understand the science, maths and engineering that go into making your mind work. Science- or maths-minded people will certainly like this book. I found it very interesting and learnt so much. I especially loved reading about the different types of research that have gone into understanding the functions of the mind. The author uses lots of people's research and hypotheses to help prove or disprove theories of how the mind actually functions and the processes it has to go through. There are also a lot of famous names that crop up that you would never have thought helped in discovering the brain's functionality.
This book is certainly a very interesting academic read. Go on, give it a go and learn things you never even considered before.

Dhananjay Tomar
10 reviews
September 30, 2022
This one was beyond amazing! The best neuroscience book I've read so far. It provides a very decent overview of how the brain, the neurons, and the functioning of the brain have been modelled in different ways by neuroscientists. My background in Computer Science + AI allowed me to appreciate the models a lot more since I'm well familiar with them, but of course, you don't need any background to read the book.
I don't think I'd change a single thing in this book (the way it's been written, organised, its content etc.). I also enjoyed the very short biographies of various scientists mentioned in the book.
If you wanna understand how neuroscientists have been trying to make sense of the brain, then this book is for you.
Sambasivan
939 reviews · 27 followers
December 25, 2022
Even though I am in a reading frenzy towards the year's end, this book made me stop and savour it.

This is not an easy book to read particularly if you would like to understand the mathematics given in the appendix.

The author has beautifully proved that neuroscience can be better illuminated and better advanced if one converts many of the observations and insights into mathematical models.

This trend is now increasingly being seen in neuroscience research, which is great news for humanity.

The various theories currently being pursued towards a grand unified theory of brain function - the Thousand Brains theory, integrated information theory, the free energy principle and so on - are a most welcome development.

Phenomenally enriching read.
26 reviews · 1 follower
June 28, 2021
Incredible crash course in all the hallmarks and SOTA of Computational Neuroscience. Author Grace Lindsay covers the leading mathematical models behind Action Potentials, Perceptrons, Excitation and Inhibition, the Visual System, Coding and Information Theory, Motor Neuroscience and Kinematics, Connectomics and Graph Theory, the Bayesian Brain, Reward Prediction, Memory and more.

The author engages the reader with a granularity of depth that is right in the goldilocks zone. This book would have had a greater impact if I picked it up before formally studying these topics. A solid compilation nonetheless.
Shreyansh Singh
12 reviews
December 29, 2022
What a phenomenal book!!
As someone who wanted to know more about neuroscience and the mental processes of the brain, I found this book an exciting introduction. The fact that I have a Math and CS background and work in AI made the book even more fun. The author draws excellent parallels showing how a variety of ML models are similar to, and were designed to model, the brain. Although I won't be able to remember all of the history of the study of each type of brain model, the fact that the book includes all the historical mentions made it very intriguing to trace how this field evolved over time.
Sequoia
108 reviews
January 7, 2022
Very pleasurable read. It spans a huge range of (famous) topics and how different theories and methods from other fields are used in studying the brain and, sometimes, vice versa. It has some history of key players (scientists), the trajectory of certain theories, and ... humor. Succinct and understandable. I think it's great both for people studying neuroscience in general or theoretical neuroscience in particular, and for an interested audience or anyone considering going into these areas of study or research.
Rhys Lindmark
92 reviews · 31 followers
May 19, 2021
This was a good, clear overview of how computer science/mathematical thinking has co-evolved with neurobiology. I like it because it gives a meta-view on Thousand Brains and Predictive Processing instead of arguing for them directly.

It was too long for me though. I didn't find much value in the personal historical stories behind the theories.

Still, v clear and easy enough for someone like me who isn't a computational neuroscientist.
19 reviews
April 22, 2022
Whether you come from the biological or the computational point of view, you will be educated by this book in a satisfying fashion. The project of quantifying neural (and cognitive) activity had always seemed dubious to me; Lindsay comprehensively documents how useful and how limited that approach has proven, adding to my knowledge of both disciplines. Many similar attempts at this marriage have fallen flat (for me). This is a rare success.
Jinbao
21 reviews
September 3, 2022
A very detailed review of the field of computational neuroscience. I started this book because I recently got into this field, and I can say that after reading it I have definitely learned a lot.

But I have to say that the book is quite hardcore. For a complete layman like myself, very often I had to reread a paragraph to get the gist of it. It can be a little bit frustrating sometimes.

Overall, great book!
