Motor areas of the brain are well suited to sequencing: muscles make one motion, then another, then another, and so on to accomplish motor tasks, and the organism usually doesn’t pay much attention to working out the finer details consciously. Now, as reported at the 2012 Neuroscience Meeting, a researcher at Georgetown has found that processing familiar and unfamiliar melodies involves different gyri in the cortex (with the motor cortex recruited only for familiar tunes). This amplifies and refines what we have known for some years about the coupling between auditory and motor systems in speech and music perception.
Could it have something to do with “sensory programming” (some people are kinesthetic, some visual, some auditory in the way they categorize and recall memories)? Musicians use muscle memory, obviously. While their output is auditory, they approach learning their craft (or memorizing compositions) by means of tactile sensations, and that finding would seem to point to muscle-memory-style processing.
This research underscores the feasibility of non-verbal interfaces. Motor areas may not be good for understanding complex concepts, but they can handle simpler recognition and sequential storage tasks in support of higher cognition. Tactile or haptic interfaces could therefore be employed concurrently with abstract cognition.
In a TED Talk, Michael Tilson Thomas (now, among other appointments, the artistic director of the YouTube Symphony Orchestra) spoke of the emotional impact of classical music and how composers manipulate it (touching on, without naming him, the insights of Hindemith).
At 15:00, Thomas poses a question with neuropsychological implications: What happens when the music stops?
To Thomas, that silence is the “intimate, personal side of music”; but it also poses the neuropsychological question of how a musical engram is configured synaptically, across memory, cognition, cortex, hypothalamus, hippocampus, and cingulate gyrus. It could be that music’s “power” comes from involving so many parts of the brain, probably more than text does (and there is likely research on this). It remains unclear how these regions would map onto the relationship between cognition (if perception and appreciation of music can be adequately described by that single word) and memory.
Article in Slashdot (“news for nerds, stuff that matters”): You really are what you know: “There has been research for some time showing that London cab driver brains differ from other people’s, with considerable enlargement of those areas dealing with spatial relationships and navigation. Follow-up work showed it wasn’t simply a product of driving a lot (PDF). However, up until now it has been disputed as to whether the brain structure led people to become London cabbies or whether the brain structure changed as a result of their intensive training (which requires rote memorization of essentially the entire street map of one of the largest and least-organized cities in the world). Well, this latest study answers that. MRI scans before and after the training show that the regions of the brain substantially grow as a result of the training, and they’re quite normal beforehand. The practical upshot of this research is that — even for adult brains, which aren’t supposed to change much — what you learn structurally changes your brain. Significantly.”
The first comment is worth following the link to the main article.
MIT researchers, as reported through CogNet, have devised a multi-transistor chip that re-creates neural activity: the opening and closing of its channels produces a voltage gradient like a neuronal action potential. Action potentials have been simulated in software before; the key advance here is generating one physically in silicon. It should bring a whole new dimension to devices based on neural nets. The next step is probably scaling up from a single synapse to many. Higher cognition is based on thousands of such connections, and large numbers of these silicon emulations would be necessary to address most cognitive neuroscience questions relevant to higher endeavors.
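The article doesn’t describe the chip’s actual circuit, but the dynamics it emulates can be sketched with the standard leaky integrate-and-fire model that software simulations have long used: membrane voltage leaks toward rest, input current drives it up, and crossing a threshold triggers a spike and reset. A minimal sketch (all parameter values are illustrative textbook numbers, not from the MIT chip):

```python
def simulate_lif(input_current, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_threshold=-50.0, v_reset=-70.0, resistance=10.0):
    """Leaky integrate-and-fire neuron: a software stand-in for the
    membrane dynamics a silicon neuron emulates.

    input_current: injected current per time step (nA); dt in ms.
    Returns (voltage_trace, spike_times_in_ms).
    """
    v = v_rest
    trace, spikes = [], []
    for step, current in enumerate(input_current):
        # Leaky integration: voltage decays toward rest, driven by input.
        v += (-(v - v_rest) + resistance * current) * (dt / tau)
        if v >= v_threshold:          # threshold crossed: "action potential"
            spikes.append(step * dt)  # record spike time
            v = v_reset               # reset after the spike
        trace.append(v)
    return trace, spikes

# A constant 2 nA input for 100 ms drives repetitive firing,
# while zero input produces no spikes at all.
trace, spikes = simulate_lif([2.0] * 1000)
```

The point of the hardware version is that this integrate-leak-fire loop runs as analog physics rather than as arithmetic in a loop, which is what makes large-scale replication attractive.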