In the Weeds: Theater of the Mind

Think about the last time you were engrossed in a story a friend or relative was telling – don’t think about the story, think about your own thoughts and feelings.

You were transported from your physical location in a lawn chair or at the kitchen table or standing under a shade tree. The visual regions of your mind began to draw up scenes to accompany the words you were hearing, and not just the words, but the tone, the pace, the drama of the speaker. You became an imaginary witness to the narrative being told.

You were experiencing the “theater of the mind.”

When Charles Bonnet coined the phrase and began studying the issue in the 1700s, his focus was on the strange fact that visually impaired people (in his case, his grandfather) sometimes experience hallucinations; there is even a syndrome named after him. Yet, as is often the case when studying the brain, insights into this unusual phenomenon actually provide a window into how the mind works.

Humans are narrative animals. We tell stories, are enticed by stories, listen carefully to stories, and learn from stories. We remember stories far more organically than we remember “facts.”

Pioneering psychologist William James said, “the mind is at every stage a theater of simultaneous possibilities.” He was exploring the ways we provide structure to this limitless scene-setting, how we choose each mental image and how we tie them together.

With modern imaging technology, this phenomenon is now under careful study, and explanations are beginning to appear.

In a January 1, 2012, article at Scientific American, we learn that functional magnetic resonance imaging (fMRI) is being used to accomplish “a primitive form of mind reading.”

The details of the study are fascinating. According to the article, Jack L. Gallant at UC Berkeley had volunteers look at thousands of images while they were being scanned in an fMRI machine. A computer program analyzed the scans, breaking the activity down to small clusters of neurons in the brain. As the database grew, the program “learned” which cell clusters were responding to which images. After gathering the information, Gallant “inverted” the program: volunteers’ brain scans were fed into the computer, which then predicted what the viewed image looked like.

“These reconstructions are surprisingly good, even though they are based on the smudged activity of hundreds of thousands of highly diverse nerve cells, each one firing to different aspects of the image—its local intensity, color, shading, texture, and so on,” the article states.
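The encode-then-invert procedure the article describes can be sketched in a few lines. Everything below is invented toy data, not the study’s actual features or scans; the real work fits regularized models to fMRI voxel responses, but the logic is the same: learn which features drive which brain signals, then run the model backward to identify what was seen.

```python
# Toy sketch of an encoding model and its "inversion" for image identification.
# All names and data here are hypothetical stand-ins for the real study's
# image features and fMRI voxel responses.
import numpy as np

rng = np.random.default_rng(0)
n_images, n_features, n_voxels = 200, 20, 50

# Step 1: "training" data -- each image is described by a feature vector,
# and we record the (simulated) voxel responses it evokes.
image_features = rng.normal(size=(n_images, n_features))
true_weights = rng.normal(size=(n_features, n_voxels))
voxel_responses = image_features @ true_weights + 0.1 * rng.normal(size=(n_images, n_voxels))

# Step 2: fit the encoding model -- learn which feature drives which voxel
# (plain least squares here; real studies use regularized regression).
weights, *_ = np.linalg.lstsq(image_features, voxel_responses, rcond=None)

# Step 3: "invert" the model. Given a new brain scan, predict the response
# each candidate image would evoke and pick the closest match.
def identify_image(observed_scan, candidate_features, weights):
    predicted = candidate_features @ weights           # predicted scan per candidate
    errors = ((predicted - observed_scan) ** 2).sum(axis=1)
    return int(np.argmin(errors))                      # index of best-matching image

# Simulate a scan evoked by one image and try to identify it.
target = 17
scan = image_features[target] @ true_weights + 0.1 * rng.normal(size=n_voxels)
print(identify_image(scan, image_features, weights))   # index of the best match
```

The “inversion” is not magic: the model never runs literally backward. It simply predicts a brain response for every candidate and asks which prediction best matches what was actually observed.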

Impressive, right? Well, Gallant didn’t stop there. He bumped it up a notch by moving from still images to scenes from movies. By combining a new group of volunteers’ brain scans with a database of YouTube videos, Gallant’s computer was taught both what a “scene” might entail and how the volunteers’ neurons react as movie clips unfold. Gallant has demonstrated this work in a video that places the reconstructions side by side with the original clips.

Incredible. Maybe a little creepy, but incredible.
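One way to picture the movie step is as a matching game against the clip library: predict the brain response each candidate clip would produce, keep the closest matches, and blend them into a blurry reconstruction. The sketch below uses invented data and a made-up one-row “frame” per clip; it only illustrates the scoring-and-averaging idea, not the study’s actual method.

```python
# Hypothetical sketch: reconstruct a "frame" by blending the library clips
# whose predicted brain responses best match the observed scan.
import numpy as np

rng = np.random.default_rng(2)
n_clips, n_features, n_voxels, frame_pixels = 500, 15, 40, 64

clip_features = rng.normal(size=(n_clips, n_features))  # per-clip visual features
clip_frames = rng.normal(size=(n_clips, frame_pixels))  # stand-in "frame" per clip
weights = rng.normal(size=(n_features, n_voxels))       # an already-fit encoding model

def reconstruct(observed_scan, top_k=10):
    predicted = clip_features @ weights                 # predicted scan per clip
    errors = ((predicted - observed_scan) ** 2).sum(axis=1)
    best = np.argsort(errors)[:top_k]                   # closest-matching clips
    return clip_frames[best].mean(axis=0)               # average their frames

# Simulate a scan evoked by one clip and reconstruct from the library.
scan = clip_features[3] @ weights + 0.1 * rng.normal(size=n_voxels)
frame = reconstruct(scan)
```

Averaging many near-matches is why the published reconstructions look ghostly and smeared rather than photographic.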

Another researcher in the Gallant lab, Alexander Huth, has been studying how the brain lights up while listening to – you guessed it! – stories. In these studies, volunteers lie in the MRI scanner and listen to stories from The Moth Radio Hour. His team has developed a model, using algorithms similar to those used by Gallant’s visual cortex group, that compares the words in the stories with indications of neural activity in the scans. The model can predict a “word cloud” associated with many neuron groups across the brain. He has given presentations that provide a great overview of this work.
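The word-cloud idea can be sketched the same way as the image work: fit a model from word features to voxel activity, then, for any voxel, rank the vocabulary by how strongly each word is predicted to drive it. Words, features, and responses below are all invented; the real studies use learned semantic embeddings and full fMRI time series.

```python
# Toy sketch of a word-to-voxel encoding model and the "word cloud" it
# implies for a given voxel. All data here is hypothetical.
import numpy as np

rng = np.random.default_rng(1)
vocab = ["dog", "cat", "run", "jump", "house", "door", "mother", "father"]
n_features, n_voxels = 6, 30

# Each word gets a semantic feature vector (random here; real models derive
# these from word co-occurrence statistics or embeddings).
word_vectors = {w: rng.normal(size=n_features) for w in vocab}

# Simulated voxel responses recorded while each word is heard.
true_weights = rng.normal(size=(n_features, n_voxels))
X = np.array([word_vectors[w] for w in vocab])
Y = X @ true_weights + 0.1 * rng.normal(size=(len(vocab), n_voxels))

# Fit the encoding model: which semantic features drive which voxels?
weights, *_ = np.linalg.lstsq(X, Y, rcond=None)

# "Word cloud" for a voxel: the words predicted to drive it most strongly.
def word_cloud(voxel_index, top_k=3):
    scores = {w: float(v @ weights[:, voxel_index]) for w, v in word_vectors.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

print(word_cloud(0))    # the top words for the first voxel
```

Run across thousands of voxels, rankings like this are what produce the semantic maps shown in Huth’s presentations, with different patches of cortex favoring different clusters of meaning.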

The next step, of course, will be to scan volunteers while they listen to stories, then analyze their visual cortex readings based on the algorithm which “predicts” what they are seeing. Sharpen up that technology, and someday you’ll be able to sit down and watch the theater of someone else’s mind!

Think happy thoughts … and keep telling stories!
