Brain Science Podcast discussion

Discovering the Human Connectome
This topic is about Discovering the Human Connectome


message 1: by Ginger (last edited Nov 24, 2013 08:25AM) (new) - rated it 3 stars

Ginger Campbell (gingercampbell) | 321 comments Mod
BSP 103 is now online. It is an interview with Olaf Sporns, author of Discovering the Human Connectome. I also interviewed him back in BSP 74 about his previous book, Networks of the Brain.

Show Notes for BSP 103

link to audio for BSP 103


message 2: by Dalton (new)

Dalton Seymour | 20 comments I wish Olaf luck in discovering the connectivity embedded in the brain. I've seen elementary attempts to lay out the circuitry for the cortical column and it's meaningless - where are the other 5000 to 15000 connections and what do they represent? I think a more successful approach is to ask oneself, "what in the world are all those connections for?" and then begin trying to account for them and their purpose by applying a scientific imagination to them on a holistic basis - think subnets of influence in a competitive environment. Once you have some notion of how something works, then go look for it. As Gary Marcus expressed in his book Kluge, peering into the 3D structure of the brain reveals something that looks like well-packed spaghetti, such a mess that it's next to impossible to discern the trees from the forest. It helps a lot to know what to look for. We are about to throw tons of money at the connectome project, and to me it looks like an exercise in futility (100 trillion connections or thereabouts). Hypothesize first, investigate, examine to confirm supporting structures, build conceptually, and reproduce digitally. That's where success will come from.


message 3: by John (last edited Mar 06, 2014 05:57AM) (new)

John Brown | 52 comments Dalton wrote: "I wish Olaf luck in discovering the connectivity embedded in the brain. I've seen elementary attempts to lay out the circuitry for the cortical column and it's meaningless - where are the other 500..."

In one way I agree with you. Pinker used fMRI to go looking for separate regions for the look-up table of irregular Anglo-Saxon past-tense forms in English (give --> gave, throw --> threw) on the one hand, and on the other for the rule for adding -ed to the regular, largely French-derived verbs (try --> tried). Naturally he looked in Broca's versus Wernicke's areas, but could find no evidence.

On the other hand, Dehaene convincingly shows how processing switches over time between major brain areas during reading. To do this, there must be high-bandwidth, fast, myelinated interconnections between the fairly distant areas. That seems to me where network theory can come in, although I thought that work had already been done thoroughly by the anatomists. Analysing these different connection diagrams between Chinese and English readers, and between different animals, using correlation techniques, might be very illuminating.

I suspect mathematicians might be more usefully employed in trying to explain the in-brain equivalent of the "backward propagation" learning algorithm used in neural networks in Artificial Intelligence. I think I would look first in the cerebellum, where the interconnection diagrams resemble sequential programmable Large-Scale-Integration chips from electronics. If you can't back-propagate (axons transmit in just one direction), the only other learning solution seems to be to keep a chronological record of {input, desired-output} pairs and adjust synapse weightings accordingly when off-line (i.e. dreaming during sleep). Has anybody done fMRI during sleep? You might get people to learn physical tasks, for example playing piano scales at varying speeds. That should give you a sequence of pairs where the frequency varies along the sequence, and that ought to show up as variable oxygen usage over the night within different parts of the cerebellum.
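To make the replay idea concrete, here is a minimal sketch in Python (all the numbers and names are invented) of what I mean: record {input, desired-output} pairs during the day, then during "sleep" adjust the weights of a single forward-only layer with a simple delta rule, with no backward propagation through layers:

    # Off-line replay learning: no back-propagation, just a recorded
    # log of {input, desired-output} pairs replayed in order.
    def train_by_replay(pairs, n_inputs, rate=0.1, nights=50):
        weights = [0.0] * n_inputs
        for _ in range(nights):                  # each pass = one "night"
            for inputs, desired in pairs:        # chronological replay
                actual = sum(w * x for w, x in zip(weights, inputs))
                error = desired - actual
                # adjust each synapse in proportion to its own input
                weights = [w + rate * error * x
                           for w, x in zip(weights, inputs)]
        return weights

    # Hypothetical day's log: two input patterns with desired outputs.
    day_log = [([1, 0, 1], 1.0), ([0, 1, 0], 0.0)]
    print(train_by_replay(day_log, n_inputs=3))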


message 4: by Dalton (new)

Dalton Seymour | 20 comments John wrote: "Dalton wrote: "I wish Olaf luck in discovering the connectivity embedded in the brain. I've seen elementary attempts to lay out the circuitry for the cortical column and it's meaningless - where ar..."

Based on the Theory of Mind and my own internal yardstick, when it comes to language and thought in general, I tend to agree with those who believe that much of language involves the motor memory necessary for verbal communication. Such a view would be in line with Edelman's re-entry theory of playing back long-term memories through the circuits that originally established them. As for Broca's and Wernicke's areas, they play some critical role in the recognition and expression of language, but I don't believe that focusing on those two regions will get anyone very far, since they are quite a distance from the motor cortex and the related association cortices. I'm aware of the switching you mentioned from bird-brain studies of bird-song output and input recognition. In those studies, they employed intracellular microelectrodes, avoiding both the latency and the signal spread associated with fMRI, to see the instantaneous responses of the neurons. When I think about it, it seems true for humans too: it's hard to discern what's being said by others while actively speaking. Sensory input for any given sense is serial, and switching makes sense if the motor-memory theory is valid.

I can see you are somewhat familiar with neural-network AI from your reference to back-propagation and weighted connections. I scrapped the traditional multilayer perceptron with back-propagation and weighted connections many years ago, for a number of reasons. One is that it is far too slow to accurately represent learning in the human brain. Fixed weights do not allow for plasticity, the dynamic nature of conductivity, or the variations in trigger thresholds exhibited by real neurons. Long-term potentiation appears to be the biological approach to establishing memories. Attention and focus provide one means of initiating long-term potentiation, while emotional states seem to serve as a means of reducing trigger thresholds, increasing duty-cycle rates, and shortening recovery times. Edelman's reentry seems applicable here too, for with it a resonant state between inputs and primed cortical outputs can be perpetuated long enough to initiate the second-messenger processes that activate and sustain long-term memories.
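For what it's worth, here is the kind of toy model I have in mind, in Python (every constant is invented): a unit whose effective trigger threshold is lowered by an emotional signal, and whose synapses potentiate (my crude stand-in for LTP) only when firing coincides with attention:

    def step(inputs, weights, threshold=1.0, emotion=0.0, attention=False):
        effective = threshold - 0.5 * emotion   # emotion lowers the bar
        drive = sum(w * x for w, x in zip(weights, inputs))
        fired = drive >= effective
        if fired and attention:
            # crude long-term potentiation: active synapses strengthen
            weights = [w + 0.1 * x for w, x in zip(weights, inputs)]
        return fired, weights

    fired, w = step([1, 1, 0], [0.4, 0.4, 0.4], emotion=0.8, attention=True)
    print(fired, w)   # fires at the lowered threshold and potentiates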

Yes, fMRI studies have been done on sleeping subjects. Google the "Default Network"; it should take you in the right direction. Of course, pursuit of this direction will inevitably drag you into considering such ambiguous topics as consciousness, sentience, and subjective experience. And if you are interested in real AI and emulation of the brain, you'll have to deal with all that plus a lot more. It's interesting and fun, so why not?

As for the mathematicians, they are responsible for the 20-year lag in NN-based AI research, so I'd just as soon bring them in after the fact. Unfortunately, they dominate the AI community, and the rest of us have to deal with pages and pages of mathematical presentations and proofs of theory whittled down to primitive levels. It makes it very difficult to conceptualize all the interacting principles found in nature and formulate a holistic view of operation. Get the working model first, then bring the mathematicians in to quantify, summarize, and optimize.

"I think I would look first in the cerebellum"

Ooh, you are a glutton for punishment :) Pyramidal neurons with 5000-20000 connections are bad enough, but you want to account for 200-300 thousand connections per neuron :) That gives new meaning to Brave New World :) An LSI chip typically deals only with an enable line, maybe 64 or 128 data-bus lines, and power and ground as inputs. When each element has 200-300 thousand inputs, you need a multilayer 3D architecture, and something tells me we haven't reached that level of silicon fabrication yet. Tell me I'm wrong. I know that's the future.


message 5: by John (new)

John Brown | 52 comments My thinking was that the cerebellum is quite large, so it ought to be possible to separate out fMRI data for the left as opposed to the right hand, for example, or for the legs as opposed to the arms. (We might hope for the sort of banding that occurs with ontologies in what I think is the left temporal lobe.) Then you could train one or other muscular area during the day, and use the night-time playback to narrow down the area involved.

Pinker's hypothesis was that learning a simple table of word associations was very different from learning a rule like -ed attachment. So he expected different brain areas to be involved. He was particularly interested in this because of Chomsky's theory that we were born with some internal grammar, whose parameters were then configured by hearing the language around us. For example, one language might be ordered as Subject, Object, Verb, and another as Subject, Verb, Object.

My own feeling is that grammar rules are probably an emergent phenomenon from learning multiple collocations or "n-grams" with their frequency weightings. A word in a sentence would then be pulled to left and right in a tug-of-war, to determine the phrase it belonged in on a voting basis. As phrases grow in width, their head words still govern the attraction, but the relative values would then be modified as the distance from the attracting word changed. (Where the head word of a phrase is the last, attractions with words that preceded the phrase are weakened. And so forth.) My fairly regular reading of Linguistics does not suggest that anybody is researching this. The reason, I guess, is that they are not very good with their statistics and machine learning.
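A toy version of the tug-of-war, in Python (the collocation counts are invented), would let each neighbour pull on a word with its learned bigram frequency and attach the word to the stronger side:

    # Invented bigram frequencies standing in for learned collocations.
    bigram_freq = {("the", "dog"): 900, ("dog", "barked"): 300}

    def attach(left, word, right):
        # each neighbour pulls with its collocation frequency; in a
        # fuller model the pull would be modified by distance from
        # the head word of the growing phrase
        pull_left = bigram_freq.get((left, word), 0)
        pull_right = bigram_freq.get((word, right), 0)
        return "left" if pull_left >= pull_right else "right"

    print(attach("the", "dog", "barked"))   # "dog" joins the phrase on its left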


I am not familiar with Edelman's theory, but learning must involve some association between the input and the desired output. If there is a learned sequence of paired associations during the day, which is "played back" at night to adjust synapse weightings, then for experimental purposes one would want to modify the parameters of such a recording in some way, to see if the fMRI during the night would show variations that correlated with such modifications. Any practical recording system would surely suppress inactive time sequences in order to make best use of limited storage.

So time-points for the onset of muscular activity no longer correspond. Then variations in frequency, for example of finger movements in typing or playing the piano, should be a more reliable way to get a correlation. I certainly would not suggest mapping the cerebellum on a per-neuron basis. It is just that the circuitry obviously supports the building of clocked circuits that go through a sequence of different states. It seems to be common knowledge that the cerebellum is responsible for programming complex limb movements. Since we learn these only through painstaking repetition, the cerebellum would seem a good place to observe learning taking place. If anywhere in the brain is best-suited to storing a sequence of {input, desired-output} pairs, it is surely the cerebellum.

Later experiments could involve waking the subject when an identified training sequence was observed, and seeing if learning with say the left hand only was suppressed.

Quite why it seems so much harder to learn touch typing or the piano, compared to 20 new vocabulary words a day, is something I have never thought about much. Certainly from a programming point of view, making entries into a lexicon, given an apparatus that already translates phoneme sequences into mouth movements, is a lot easier than learning a sequence of movements performed by a large number of muscles.

The thought occurs to me that learning touch typing or the piano is analogous to learning the association between phonemes and mouth muscle sequences. Playing a sonata is then analogous to reading a document aloud. Both are higher level tasks that assume that accurate mappings for the low level tasks have already been built in.

I also get annoyed when I have to wade through pages of mathematical proof, only to find that some algorithmic technique has provided a speed-up of 20% or so. The mathematicians seem more interested in expressing the problem elegantly, than in solving real problems.


message 6: by Dalton (new)

Dalton Seymour | 20 comments John wrote: "My thinking was that the cerebellum is quite large, so it ought to be possible to separate out fMRI data for the left as opposed to the right hand, for example, or for the legs as opposed to the ar..."

Like the rest of the brain, the cerebellum is bilateral and would therefore reflect resources allocated to either the left or the right. No doubt resource allocations are also regionalized and dedicated to musculoskeletal appendages; however, it also appears that coordinated activities are offloaded to nuclei in the spinal tract. As yet, I haven't spent a lot of time studying the output side of cognition, so what I say about physiological response mechanisms may not be entirely accurate or even plausible. Right now, you have prompted me to consider the implications of the autonomic nervous system and the role (if any) of the cerebellum in the management of such things as peristalsis, capillary beds, and sphincter and pupillary responses. Quite a lot of cerebellar resources could be allocated to unconscious homeostatic controls as well as to volitional control. There's a lot to think about with regard to grace and coordinated output that becomes habituated. Every volitional movement involves a lot of different parts of the body operating in parallel. I suspect that much of the cerebellar resources are allocated to stages of delay chains. I think it would be wise to first determine how much control of any given movement is cognitive and how much is offloaded to peripherally localized nuclei.

It's probably more true of the past than the present that commercial education relies on indoctrination, and as an educated group we tend to have our perspective biased by a foundation that largely consists of memories without complete understanding and meaning. I suppose it's a necessary evil and essential to survival in the societies of today, but assimilation takes time. I think they called it fermentation back in the day when I was in college (probably the reason I consumed a lot of fermented beverages back then, thinking consumption was a shortcut to wisdom). I believe that Chomsky's theory of an innately transferred grammar has been largely set aside these days. Not that we aren't endowed with innate neural circuits that provide a predisposition to acquire and use language. Indeed, the ability of people to recognize spoken words when all but the formants have been deleted from the audible signal indicates that we have specialized neural circuits tailored to recognize phonemes, which trigger an associative perception of the sounds of those words. The validity of this comes from our ability to recognize language sounds despite background noise, pitch, cadence, and variations in volume. However, the point at which those collections of formants become phonemes and words that represent concepts lies above the innate hardwired circuitry and complements associative linking in the association cortex. I believe that is a growth process, and it is what makes learning difficult. In the very young, the potential to learn fundamental concepts may already be prewired and need only be activated. The role of the association cortex in all sorts of learning is implied by the massive amount of cortical resources allocated to it. At some point, the acquisition of language involves mimicry, which by habituation biases regurgitation of what has been experienced and therefore learned. Some 90% of what people say has been said before. In some ways, language is like a reflex - I love the comment someone made that "he didn't know what he thought until he said it" (I think it was Don Rumsfeld).

Edelman, a Nobel Prize winner for showing how the immune system employs evolutionary principles in the production of antibodies (as I understand it), subsequently took on the brain and wrote a book called Neural Darwinism. Teasingly, some refer to it as Neural Edelmanism. Instead of feedback, he prefers to call the reciprocal outputs back to originating inputs reentry. I guess feedback implies unregulated cycling, and everything would be resonating all the time.

I see your mention of synaptic weighting again and feel the need to comment. When it comes to synaptic weights or strength, I think it's a misinterpretation of observation. If it were real, there would need to be some sort of intracellular decoder and a mechanism to provide variable output from the cell. The fact that some synapses may be larger than others could also be a response to duty cycle, and the output spiking could be nothing more than a series of isolated electrocellular collapses based on internal cytoarchitecture designed to isolate one synapse from another. If you also subscribe to the notion that the neuron operates in a competitive environment, which together with lateral inhibition serves to support a winner-take-all response, then you have to wonder why mother nature specified so many dendritic connections, some inhibitory and others excitatory. As I see it, instead of varying synaptic strengths, mother nature has chosen lots of connections, and the cell with the most active connections, after accounting for inhibitory connections, wins. Such an arrangement also provides a mechanism that can vary the trigger threshold of a neuron by virtue of an inhibitory subnet. Such an inhibitory bias network would also allow the neuron to be dynamically configured for specialized situations - like panic.
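Here is a sketch of the counting scheme in Python (all the numbers are invented): no per-synapse weights, just a tally of active excitatory minus active inhibitory connections, a bias from an inhibitory subnet to shift the trigger level, and lateral inhibition so only the highest tally fires:

    def winner_take_all(units, bias=0):
        # units: (active excitatory, active inhibitory) counts per neuron
        scores = [exc - inh - bias for exc, inh in units]
        best = max(range(len(scores)), key=lambda i: scores[i])
        # lateral inhibition: only the strongest unit fires, and only
        # if its net count clears the (bias-shifted) threshold
        return [i == best and scores[i] > 0 for i in range(len(units))]

    print(winner_take_all([(120, 30), (95, 10), (60, 5)]))   # first unit wins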

The problem with fMRI at night (I assume the subject is asleep) is identifying when an fMRI pattern actually represents a pattern learned during the day. Imagine the possibility that the subject was practicing swinging a bat for baseball during the day, but at night was working out frustration and resentment swinging a weapon in an altercation with an adversary. Also, due to the paralysis that sets in during sleep, the signals may never make it to the areas being scrutinized. If I'm correct in the assumption that large proportions of the cerebellar resources are stages devoted to delaying output execution, that would also muddy interpretation.

I don't see a mathematical presentation as elegant at all. The symbols employed to represent conceptual variables lack depth and meaning. A word, like a verb, brings with it background information, has context, and has other synonymous representations - all of which provide meaning. Alpha, Beta, Gamma, and the other symbols commonly employed are substitutions without foundation. As a result, presentations that employ them are shallow and ignore the implications of variables that seem superfluous yet, under certain circumstances, can loom large and be very influential. They like to manipulate those variables to cancel each other out and simplify the complexity of what they are dealing with, and that makes their results valid only as generalizations. I'm getting carried away - pet peeve.


message 7: by John (new)

John Brown | 52 comments Thanks. Very interesting.
Regarding the synaptic weighting paragraph, I suppose I am biased by having designed a lot of sequential counters using JK or SR electronic bistables. At the input to each bistable, you have an AND gate with multiple inputs from other bistables, hence the input (and therefore the output after the clock pulse) depends on previous states of the complete circuit. People write out equations like
S(t) = aS(t-1) + bS(t-2) + ...
where a, b, ... take the value 1 or 0 for each bistable in the counter. The difference between 1 and 0 would be implemented in a neural net by activation or inhibition.

With a counter like this, you can produce any arbitrary sequence of states that you want. Such counters are used inside the CPU of a computer to initiate an action sequence from each machine-code instruction, and also inside automatic washing machines.
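As a concrete illustration, here is such a counter in Python (the states are invented): an arbitrary cycle of states driven by a next-state table, where each table entry is exactly the kind of function a threshold unit with activating and inhibiting inputs could compute:

    # Arbitrary 3-bit state sequence, washing-machine style.
    # Each next-state bit is a function of the previous state, which a
    # threshold unit with excitatory and inhibitory inputs can realise.
    next_state = {(0, 0, 0): (1, 0, 1),
                  (1, 0, 1): (0, 1, 1),
                  (0, 1, 1): (0, 0, 0)}   # loops back to the start

    state = (0, 0, 0)
    for _ in range(6):                    # six clock pulses
        print(state)
        state = next_state[state]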

I understand that animal conditioning can not only associate a bell with food and therefore salivation, but also condition the animal to salivate at any arbitrary time interval after the stimulus. That has got to involve this sort of circuitry, and Purkinje cells would seem the ideal way to do it.

The evolutionary motivation could be very basic. I understand that new-born slugs eat a bit of everything in the environment, and learn to avoid what makes them sick. Presumably different poisons take variable time intervals to make them sick, so learning must be arranged to associate a response to a stimulus at an arbitrarily earlier time-point.
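In machine-learning terms, the standard trick for bridging that variable delay is an eligibility trace: every recent stimulus keeps a decaying tag, and whatever is still strongly tagged when the sickness arrives takes the blame. A sketch in Python (the decay rate and the meals are invented):

    def most_blamed(history, sick_at, decay=0.9):
        # history: one entry per time step (None = nothing eaten);
        # each food leaves a tag that decays until sickness strikes
        blame = {}
        for t, food in enumerate(history):
            if food is not None:
                blame[food] = blame.get(food, 0) + decay ** (sick_at - t)
        return max(blame, key=blame.get)

    meals = ["leaf", None, "fungus", None, None]
    print(most_blamed(meals, sick_at=5))   # "fungus", eaten nearer the sickness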

This argument can be extended to explain the phenomenon of context in reading, or in remembering things (the divers who cannot remember articles seen underwater, once they surface). Establishing context would involve extra activating signals to the Purkinje cells when a particular context was recognised. I remember the "he's got a shoe/he's going to shoot" example.

In electronic neural nets, there is always some kind of threshold function, so that the neuron only fires when the sum of the various inputs (some of which are negated) crosses a threshold. The size of the output is then constant on each occasion. Obviously biological neurons can carry additional signals in their pulse frequency or even pulse duration, and that might allow a forward-direction implementation of a learning algorithm that normally requires backward propagation in electronic neural nets. Digital Differential Analysers were a kind of computer where amplitude was encoded as the number of pulses accumulated. They were used in aircraft for dead-reckoning, since they could provide any desired resolution, in contrast to the analogue computers of the time, which managed only about +/-0.1%.
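As a tiny illustration of that pulse-count encoding, in Python (the quantum is invented): amplitude is just an accumulated count of unit pulses, so resolution grows with the number of pulses rather than being fixed by analogue precision:

    def accumulate(pulses, quantum=0.01):
        # DDA-style: each pulse adds one quantum to the represented value
        return sum(pulses) * quantum

    print(accumulate([1] * 250))   # 2.5 units, at 1-part-in-250 resolution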

Rumelhart and McClelland were the most prolific publishers in Neural Nets. They have stated that every kind of binary logic circuit has now been simulated in a neural net. I have not checked this in the literature, but why would they lie?

It follows that any sort of state equation could be implemented in a neural net.


message 8: by Dalton (new)

Dalton Seymour | 20 comments John wrote: "Thanks. Very interesting.
Regarding the synaptic weighting paragraph, I suppose I am biased by having designed a lot of sequential counters using JK or SR electronic bistables. At the input to each..."


In days of old when knights were bold, I dabbled in electronics (7400 series), but then the world flipped topsy-turvy into the digital revolution and I was dragged kicking and screaming into computing, leaving electronics behind (with resentment). What did it for me was that Dialog went commercial when NASA fired up their own network for research purposes, and I discovered that my pastime of doing secondary research on topics of interest could be pursued from my study; I would no longer have to travel hundreds of miles to library repositories. Today, I realize that there can be no real intelligence without sentience, and that in order to emulate subjective experience there has to be some sort of feedback from a physical system - robotics. Have you ever dabbled in robotics? It makes me wish I still had my workshops from back in the '80s - oh well. Anyway, I'm somewhat familiar with gates and gated arrays, and I believe that a lot of the neurons at intermediate stages in the conduction of sensory signaling are guided to their cortical destinations through gated arrays biased by multiple subnets. I believe those subnets are the key to understanding the distributed nature of memory, for by virtue of the intermediate-stage biasing of conductive paths, they serve to slice and dice the cortical resources into categorical clusters of domains, themes, concepts, and associations. Somewhere I ran across the notion of Euclidean distance as a measure of how closely one thing is related to another. I don't believe it is entirely accurate to characterize the allocation of resources that way, since it doesn't provide for a temporal dimension or the emotional dimensions, but it's a nudge in the right direction. For one thing, it can explain where context tracking comes from, and an easy manner in which to switch domains: a simple inhibition of active gates lets an alternate competitor become dominant in a related domain. Such a scheme also explains how to resolve irony. I don't think you'll find context management in the cerebellar resources.

I'm quite familiar with the PDP pioneers Rumelhart and McClelland and own MIT Press's Neurocomputing, vols. 1 and 2, among other collected papers. Yes, after the Perceptron fiasco, prompted by insight limited to a mathematical perspective, it has been shown that neurons can perform all the logical functions employed in computer electronics, but I don't think the brain actually employs all those human contrivances. The brain is like a hybrid computer: part digital, part analog, part serial, part parallel. There doesn't seem to be a central clocking mechanism that would provide that level of timing; synchronization may depend on entrainment (Wells). I know circadian rhythms had to pop into your mind, but they are far too slow. There may be localized banks of neurons configured as oscillators, but they would have to be dynamic in order to account for time-dilation experiences, and that would require a lot of resources for a dedicated control subnet. Timing appears to be tied more closely to metabolic rates and to trigger thresholds influenced by emotion and hormones. So the brain appears to operate with variable levels of performance, and emotion appears to be the throttle. I see my new computer does that too. One of the things that came out of Rumelhart and McClelland that I gravitated to was their notion of "state space." It inherently embodies a concept with depth.

In the case of Pavlov's dog, one can call that a form of learning; however, I view it more as an adaptive reflex, since the response is unconsciously provoked. Also, there appears to be a critical period within which the associated stimulus must occur for it to become linked with the event in conditioned responses. An experience-and-response combination like Pavlov's dog is multimodal, and while thinking about it, I wonder if anyone has ever done such a conditioning experiment employing only one modality to generate a specific response. I'm thinking of a visual event presented without other modal stimuli, and wondering what the time constant would be for making such associations. Not all modalities operate at the same frame rates. Tactile may have considerable delay compared to vision. I suspect that audition is closer, but still perceptually slower.

The slugs-eating-everything insight is akin to toddlers putting everything in their mouths. This may actually be a natural phenomenon related to the establishment of the gut microbiome and an integral part of the immune system. Lately, the microbiome has become very fashionable in the world of biological science. Having been a microbiology major, I tend to agree with this trend and its importance to health. Unfortunately, there are so many claims being made, it's beginning to sound like snake oil. Long ago, I read up on pathology, and when I got done, I wondered why everybody wasn't dead. Then I read up on immunology and wondered why anyone ever got sick. The sum total was the realization that there's a tenacious balance between the two states. Perhaps the microbiome's influence tips the scale in favor of survival.


message 9: by John (last edited Mar 08, 2014 08:35AM) (new)

John Brown | 52 comments Unfortunately I am too old for robotics now, and my vector algebra is not good enough, even though I do know a bit of hydraulics.
Instead I do information extraction from text, and use Euclidean distance a lot in that. Text Analytics is a nice self-contained field that I can do at home without expensive facilities (the Web is a huge source of test data, as well as being an application area).
But if you look at the conversations in the Text Analytics group on LinkedIn, brain behaviour is never mentioned. Most Linguists never consider it much either, and I think both groups are missing out.
Context dependency occurs a lot in Text Analytics (it is much more widespread than the popular "bank" example), but the problems seem to yield quite well to the simple detection of sets of context-defining words that decay in their effect over about 25 words.
I remember once giving a blood donation, when the nurse appeared a little bit full of Christmas cheer. She was about to withdraw the needle from my arm, when her attention was deflected by a colleague. Without looking back, she yanked the needle painfully out of my arm and over the next two hours I got a big collection of blood below the skin and eventually was black and blue for several weeks. She said "I must have pulled a hair out". Clearly her brain had skipped the needle-withdrawal stage in the sequence and jumped to the sticky-plaster-removal stage. Lose context, and all sorts of nasty things can happen, as with mobile phones whilst driving.
But the Text analysis experience suggests that the mechanism is fairly simple. Ontology priming/tagging seems to be the most general solution, although the ontology I use (Wordnet) is not comprehensive enough to support this approach.
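In case it is of interest, the decaying-context mechanism I use is only a few lines. A sketch in Python (the little sense lexicon is invented; the 25-word window is the figure I mentioned above):

    CONTEXT = {"bank": {"money": "finance", "river": "geography"}}

    def resolve(tokens, target="bank", window=25):
        # context-defining words vote for a sense, their influence
        # decaying linearly to nothing over about 25 words
        t = tokens.index(target)
        votes = {}
        for i, w in enumerate(tokens):
            dist = abs(i - t)
            if 0 < dist <= window:
                sense = CONTEXT[target].get(w)
                if sense:
                    votes[sense] = votes.get(sense, 0) + 1 - dist / window
        return max(votes, key=votes.get) if votes else "unknown"

    print(resolve("he rowed along the river to the bank".split()))  # geography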

Oh, it just occurred to me that you could do without clocks if signal spikes of two amplitudes were used: the big one is a data change plus clock pulse, and the small one acts as a clock pulse without a data change. Or else two successive spikes for data-change and just one for clock. I think action potentials show this sort of behaviour. The only snag is that the settling time would rise with the number of chained cells. But since this is approximately log2(m), where m is the number of states, for slow muscular movements there should be no problems. With fast ones, for example using the tongue, it might be significant. We could view phonetic elision as a mechanism for shortening such sequences.
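Concretely, the decoding side of the two-amplitude scheme might look like this in Python (the amplitudes are invented): every spike acts as a clock pulse, and only the big ones toggle the data bit:

    BIG = 1.0    # invented amplitudes: big = clock pulse + data change,
    SMALL = 0.4  # small = clock pulse only

    def decode(spikes):
        bit, bits = 0, []
        for amp in spikes:
            if amp >= BIG:
                bit ^= 1          # data change on this clock edge
            bits.append(bit)      # every spike clocks out the current bit
        return bits

    print(decode([1.0, 0.4, 0.4, 1.0, 0.4]))   # [1, 1, 1, 0, 0]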


message 10: by Dalton (new)

Dalton Seymour | 20 comments The need for hydraulics, pneumatics, and electric motors in robotics is now being displaced by artificial muscles that can be made CHEAPLY. You may or may not be aware of the fabrication of cheap artificial muscle fibers from twisted polyethylene fishing line and nichrome wire. One source claims: "Extreme twisting produces coiled muscles that can contract by 49%, lift loads over 100 times heavier than can human muscle of the same length and weight, and generate 5.3 kilowatts of mechanical work per kilogram of muscle weight, similar to that produced by a jet engine." Apparently real muscle will only contract by 20%. Science Friday had a nice interview about this, and YouTube has a nice video demonstrating it (http://www.youtube.com/watch?v=ZDiHb1...). If you are not familiar with this development, have a quick peek at the video. Any hobbyist could fabricate this type of artificial muscle fiber.

Combined with reed relays to handle the required voltage and current to heat the wire, every strand could be addressed and the strands could be ganged together to emulate real muscles. You could then emulate the spinal nuclei and cerebellum. This, combined with the artificial skin developed at Stanford (I think) sets the stage for the android (once the AI is up to the task). I think robotics is poised to become a reality. I'm not so much interested in the robotics end of it as I am with the mind/body connection and feedback to emulate subjective experience.

I hadn't thought about the potential of spiking to signal the presentation of data. As you said, it wouldn't scale well and as I think about it, it could become cumbersome to implement in a multidimensional set of networks, all acting independently. The noise factor would also climb a lot.

