Brain Science Podcast discussion

2006-2010 > BSP 66


message 1: by Alto (new)

Alto | 5 comments I was intrigued by the concepts presented in this podcast, mostly because, as a computer scientist who has spent a large amount of time studying, developing, and thinking about artificial neural networks, I found that it challenges a number of concepts with which I work. This is never a bad thing, and this book has definitely been added near the top of my queue.

Specifically I was bothered by two particular aspects of the argument.

1.) Dead reckoning: First, while calculating a position (an x,y coordinate) as a function of velocity and time is indeed relatively simple to program on a computer, the computation is utterly meaningless unless it can be interpreted as an offset from some origin. What would this origin be? Home? Wouldn't that mean the animal has to travel through the origin to get from point A to point B, or is the brain doing trigonometry as well? Second, what about overflow? If the animal is indeed storing the coordinate in a traditional computational way, then there must be some fixed size to the values these coordinates can take. What happens when the animal travels a distance that exceeds this value?
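For concreteness, here is a minimal sketch of the kind of path integration being described, assuming the animal tracks a running displacement from a fixed origin such as the nest (the function names and the (speed, heading, dt) encoding are my own illustration, not anything from the podcast). Note that with a little trigonometry the return trip collapses into a single homing vector, so no travel through the origin is required:

```python
import math

def dead_reckon(steps):
    """Integrate (speed, heading_radians, dt) samples into an (x, y)
    offset from the origin, e.g. the nest."""
    x = y = 0.0
    for speed, heading, dt in steps:
        x += speed * math.cos(heading) * dt
        y += speed * math.sin(heading) * dt
    return x, y

def homing_vector(x, y):
    """Bearing and distance straight back to the origin."""
    return math.atan2(-y, -x), math.hypot(x, y)

# Walk east for 3 time units, then north for 4, at unit speed.
pos = dead_reckon([(1.0, 0.0, 3.0), (1.0, math.pi / 2, 4.0)])
bearing, dist = homing_vector(*pos)  # distance home is 5.0
```

The overflow worry maps directly onto this sketch: the accumulators `x` and `y` have finite precision on any physical substrate, so some rescaling or saturation behavior would be needed for very long journeys.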

2.) I feel as though the brief mention of neural networks was largely a misrepresentation based on a specific type of ANN, namely the multilayer perceptron. A model of much more interest from a theoretical and cognitive point of view, at least to me, is that of auto-associative and hetero-associative networks.
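For readers unfamiliar with the distinction, a toy auto-associative memory can be written in a few lines. This is a Hopfield-style sketch with made-up patterns, not the model from the podcast: patterns are stored in a weight matrix via a Hebbian outer-product rule, and a whole pattern is recalled from a corrupted cue.

```python
import numpy as np

def train(patterns):
    """Store +/-1 patterns via the Hebbian outer-product rule."""
    n = patterns.shape[1]
    w = np.zeros((n, n))
    for p in patterns:
        w += np.outer(p, p)
    np.fill_diagonal(w, 0)  # no self-connections
    return w / patterns.shape[0]

def recall(w, cue, steps=10):
    """Iterate synchronous threshold updates until (hopefully) a stored
    pattern is recovered."""
    s = cue.copy()
    for _ in range(steps):
        s = np.where(w @ s >= 0, 1, -1)
    return s

pattern = np.array([1, -1, 1, -1, 1, -1, 1, -1])
w = train(pattern[None, :])
noisy = pattern.copy()
noisy[0] = -noisy[0]          # corrupt one element of the cue
restored = recall(w, noisy)   # recovers the stored pattern
```

The point of the example is that retrieval is content-addressable: a partial or noisy cue settles into the nearest stored memory, which is a very different computational style from a feed-forward multilayer perceptron.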

I could also mention the large number of problems classical computing interpretations face when attempting to explain brain functioning (the 100-step rule, for example), but I will leave that aside for now.

I would be very interested to hear other listeners' responses and perspectives.

message 2: by Virginia (new)

Virginia MD (gingercampbell) | 321 comments Mod
Alto wrote: "I was intrigued by the concepts presented in this podcast, mostly because as a computer scientist who has spent a large amount of studying, developing, and thinking about artificial neural networks..."

You make some very interesting points, and I look forward to hearing your comments after you read the book, because it is always difficult to convey the complexity of a book in a mere one-hour interview.

I also hope you will enjoy BSP 73, which will be available on 3/26/11.

message 3: by Andriuskulikauskas (last edited Jan 15, 2014 11:23AM) (new)

Andriuskulikauskas | 8 comments Ginger, thank you for summarizing so many books with your podcasts. It's inspiring to realize that so many basic ideas have yet to be fleshed out and that innovative researchers are encouraging big picture thinking. I have a Ph.D. in Math (Combinatorics), have taken classes in automata theory and know a bit about neural networks.

I understood Randy Gallistel's point that the need to compare information in countless ways means that these comparisons can't be hardwired directly. There needs to be a way to temporarily access two different memories for the purpose of comparing them. This suggests that there is a topology that can access memories so as to link them. For example, they may be accessed through a tree structure, or a more complicated network.

However, in your podcast he did not provide any example where memories need to be erased. Read-write systems are set up so that the same memory slot can be reused by rewriting the information, forever losing the previous contents and keeping only the update. It seems straightforward that a jay might retain a memory of each cache that it hides in its lifetime. Does the jay forget the caches that it finds? In any case, it doesn't lack for memory of thousands of caches. It simply doesn't keep the memory of any of the millions of comparisons that it could make, and it perhaps doesn't need to. Similarly, an ant may well remember the thousands of notable landmarks that it visits. What it needs is a way to navigate along a map of landmarks, or simply back and forth along a line of landmarks. It doesn't need to erase any of these landmarks; it may possibly remember a life's worth.

The question then is, how does the brain work with "variable" information in "generic" ways? How does an ant's brain calculate its "navigation"? How does a jay manage its "database"? Storing the memory doesn't seem to be the challenge. It's possible for a single neuron to code, through its synapses, all manner of information, including location, time, and actors, much like an object-oriented database where each object is stored separately. The challenge is how to do generic comparisons. Another challenge is how to define generic data types, that is, to make sure that each object gets a location, a time, and so on.
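As a loose illustration of that object-per-memory picture (the record names and fields are my own invention, not anything from the podcast), each cache could be a small self-contained record, with any comparison computed on demand and never itself stored:

```python
from dataclasses import dataclass

@dataclass
class CacheMemory:
    # One "object" per remembered cache, analogous to a single neuron
    # coding location, time, and actor through its synaptic relationships.
    location: tuple
    time: float
    actor: str

caches = [
    CacheMemory(location=(12.0, 3.5), time=101.0, actor="self"),
    CacheMemory(location=(7.2, 9.1), time=164.0, actor="rival"),
]

# A generic comparison ("which cache is freshest?") is computed when
# needed and leaves no stored trace of the comparison itself.
freshest = max(caches, key=lambda c: c.time)
```

This is the shape of the puzzle as I read it: the records are cheap to store, but the generic operation (`max` over the `time` field of arbitrary objects) is what needs a neural explanation.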

Experimental psychologist Norman Anderson and his students have, over the decades, established a decision-making theory, Information Integration Theory, which I think is relevant here. In my own words: in hundreds of experiments they have asked human subjects to rate various qualities on scales, say from 1 to 10, and especially combinations of these qualities. In particular tasks, different people consciously access and apply the same unconscious rule, either weighted averaging, addition, or multiplication. These are the kinds of generic operations that I think Randy Gallistel would like to find neurological explanations for.

For example, ask a human subject to rate gifts on a scale of 1 to 10, where 10 is best. Suppose that a subject rates a diamond ring as 10 and a sock as 2. Also have the subject rate the two combined. The subject will rate "diamond ring and sock" as a weighted average such as 6. This is counterintuitive. It means that people average the value of gifts. If you want to impress somebody, then you should only give them one gift, the best one that you can.
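The averaging rule can be stated in one line (the numbers are the illustrative values above, with equal weights assumed for simplicity):

```python
def integrate_average(ratings, weights=None):
    """Weighted-averaging rule from Information Integration Theory.

    Equal weights by default; in the theory, each source of information
    carries its own weight.
    """
    if weights is None:
        weights = [1.0] * len(ratings)
    return sum(w * r for w, r in zip(weights, ratings)) / sum(weights)

# Diamond ring rated 10, sock rated 2; their combination is rated
# as an average rather than a sum, so the sock drags the pair down.
combined = integrate_average([10, 2])  # 6.0
```

Under an adding rule the pair would score higher than the ring alone; it is the averaging rule that makes the extra gift counterproductive.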

What is relevant here is that people can consciously access certain numerical constants as if they were individual neurons. For example, individuals have a generic sense of what probability phrases like "likely" or "almost certain" mean to them; they can tell you as a percentage. But they can also give more specific answers in particular contexts, such as what "likely" means as regards "passing a test" or "showing up for a movie". It turns out that an integration rule applies here, by which the unconscious mind integrates (sums, I think) the generic definition with the given context. It seems that the individual is accessing that numerical "gut feeling", which their unconscious mind acts on with the relevant integration operation. We are storing and adjusting perhaps millions of such gut feelings, but not all of their possible comparisons, not all of their possible integrations.

They have redone Piaget's work with rigorous experiments and shown that many of his ideas turned out to be wrong. But children and adults do apply integration rules of the kind "length + width = area" as can be seen when they are asked to choose the larger cookie. They have shown how these types of rules apply in many dozens of domains in decision making.
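The "length + width = area" rule can be made concrete with a toy comparison (my own illustrative numbers; the additive heuristic is the one reported in those experiments, not the correct geometry):

```python
def judged_area_additive(length, width):
    # The integration rule reported for judgments: "length + width = area"
    return length + width

def true_area(length, width):
    return length * width

# Under the additive rule, a long thin 10 x 1 cookie (judged 11) beats
# a 4 x 4 cookie (judged 8), even though its true area is much smaller
# (10 versus 16).
long_thin = judged_area_additive(10, 1)
square = judged_area_additive(4, 4)
```

The interesting point is that the same generic operation (addition) is being recruited for a task where multiplication would be the normatively correct rule.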

My point is that there seem to be general mechanisms for such generic calculations. They presumably do not need to leave behind any memory of the operation itself. They just need to provide the answer and apply it. However, the "numerical constants" do not ever need to be repurposed, which is to say, truly "rewritten", as happens with computer memory. The neurons (numerical constants) may be adjusted, may be clarified or even reset, but they likely retain whatever meaning they were dedicated to. That meaning is given by the synaptic relationships with other neurons, other objects.

There should be, however, some small portion(s) of the brain that is dedicated to the general integration of such numerical constants. It is a faculty of decision-making.

I'm curious whether there are people suffering brain damage who lack any ability to apply such information integration rules. That would mean they lack any decision-making capability. More interesting would be to consider people who have lost only part of such integration ability. Is it possible to lose just one of the three operations (averaging, adding, or multiplying)?

Norman Anderson's work is not known as well as it should be. I think it may be related to Daniel Kahneman's work on "fast thinking" and "slow thinking".

The main reason that I'm writing is that I'm curious whether "rewriting" is really the issue here, or whether it is more about supporting generic operations, something to which neural networks don't seem naturally suited.
