As brain imaging devices continue to develop, we get better resolution on how memes are physically represented in the brain. Neuroscientists from UCLA found that pictures of Halle Berry, images of her as Catwoman, and a letter sequence spelling H-A-L-L-E B-E-R-R-Y all activated a single neuron in the brain. They found similar cases, like a ‘Sydney Opera House’ neuron that matched both images of the famous landmark and the string of text “Sydney Opera”. Different participants had different neurons that fired for different stimuli, presumably shaped by each participant’s past experience and memory of the people, places, or things presented. These multi-modal neurons respond to clusters of abstract concepts centered around a common high-level theme, encoded by experience: a near definition of a meme.
As computing power increases exponentially in line with Moore’s law, the fast-developing field of artificial intelligence promises to teach us more about how our own brains work. There is evidence of multi-modal neurons similar to the ‘Halle Berry’ neuron forming in artificial neural networks like OpenAI’s CLIP. For example, the ‘Spider Man’ neuron responds to photos, drawings, and text related to Spider Man. Researchers found scores of other interpretable neurons, relating to concepts as diverse as ‘winter’, ‘Roman art’, and ‘USA’. What we’re seeing are the representations in CLIP’s ‘brain’ that are then used to power the creation of images by OpenAI’s DALL-E generative AI. As research progresses, we’re finding more similarities than differences between how our brains work and how computers work. “Conventional wisdom views individual brain cells as simple switches or relays. In fact, we are finding that neurons are able to function more like a sophisticated computer,” explains Christof Koch, a professor of Cognitive and Behavioral Biology at Caltech.
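The multi-modal neuron idea can be sketched in a few lines. This is a toy model, not real CLIP: assume each modality encoder maps a stimulus into a shared embedding space, so a photo, a drawing, and a text string for the same concept land near the same direction, and a single "neuron" (a weight vector) fires for all three. All names and vectors here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed(concept_vec, noise=0.1):
    # Stand-in for a modality encoder: same concept, slightly
    # different embedding depending on modality and rendering.
    return concept_vec + noise * rng.standard_normal(concept_vec.shape)

# Hypothetical concept directions in a 64-dim shared embedding space.
spider_man = rng.standard_normal(64)
winter = rng.standard_normal(64)

# A "multi-modal neuron" tuned to the Spider Man concept direction.
neuron = spider_man / np.linalg.norm(spider_man)

stimuli = {
    "photo of Spider Man": embed(spider_man),
    "drawing of Spider Man": embed(spider_man),
    "text 'Spider Man'": embed(spider_man),
    "photo of snow": embed(winter),
}

# The neuron's activation is a dot product: high for every Spider Man
# stimulus regardless of modality, near zero for an unrelated concept.
for name, v in stimuli.items():
    print(f"{name}: {neuron @ v:.2f}")
```

The point of the sketch is that modality doesn't matter once stimuli share an embedding space: one direction in that space behaves like the ‘Halle Berry’ cell.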
If we can probe real and artificial brains to see which neurons fire when presented with specific memes, we can reverse engineer the process. If a neuron fires when presented with other stimuli, that’s a strong hint the stimuli are related, and we can discover surprising associations we wouldn’t have found without memetic theory. If you can classify something, you’re halfway to generating it, as we see in practice with OpenAI’s CLIP and DALL-E. Neural networks that can recognize Spider Man memes can also create them: not just replicas, but novel additions that haven’t been seen before, in new situations or paired with new styles or concepts. Imagine spotting new trends being encoded in the ‘noosphere’ – the collective human intelligence – and instantly being able to contribute to them at the touch of a button. As AI art and text generation gain ground, the cost of creation will rapidly collapse, and the new bottleneck will be figuring out what to create. Gaining that level of understanding about how our brains work gives us the ultimate money-printing machine: the ability to predict what people want.
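"Recognize it, and you can create it" has a concrete mechanism: activation maximization, where you run the recognizer backwards by ascending the gradient of a neuron's activation with respect to the input, synthesizing a stimulus that neuron "wants" to see. A minimal sketch with a single linear neuron (real feature visualization does this through a deep network; everything here is a toy):

```python
import numpy as np

rng = np.random.default_rng(1)

# A hypothetical neuron tuned to some concept direction.
neuron = rng.standard_normal(64)
neuron /= np.linalg.norm(neuron)

# Start from a near-zero "noise" input and ascend the activation gradient.
x = 0.01 * rng.standard_normal(64)
lr = 0.1
for _ in range(100):
    # For a linear neuron, d/dx (neuron @ x) is just `neuron`.
    x = x + lr * neuron
    # Constrain the input to the unit ball so it can't grow without bound.
    x = x / max(1.0, float(np.linalg.norm(x)))

# The synthesized input ends up aligned with the neuron's preferred
# direction, so its activation approaches the maximum on the unit ball.
print(float(neuron @ x))
```

The same loop, run through a multimodal classifier instead of one linear unit, is the core trick behind turning a recognizer into a generator.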
Learning how AI thinks might ultimately become more important than learning how our brains think, if we plan to outsource most of our work to AI. However, the more we do that, the less in control of our world we become as humans, putting us at risk of handing a new competing intelligence the keys to the kingdom. Motivated by this concern, Brain Machine Interfaces (BMIs) are under rapid development, and are now at the level where they can identify which words a participant was thinking about with 91% accuracy. We can expect these devices to become more accurate, as well as less invasive and bulky over time, exponentially expanding the observable data we have on how the brain works in different situations. Many in the field see this as a race against developments in AI: without being able to interface directly with computers, our brains will quickly get out-thought by them. Perhaps the only way to ensure we’re not overrun by AI is to become part AI ourselves. As Elon Musk’s BMI startup Neuralink puts it: “if you can’t beat em, join em”.
Name | Link | Type
---|---|---
If you can’t beat em, join em | | Social
Multimodal Neurons in Artificial Neural Networks | | Paper
Neuralink and the Brain’s Magical Future | | Blog
Online internal speech decoding from single neurons in a human participant | | Paper
Single-Cell Recognition: A Halle Berry Brain Cell | | Paper