As brain imaging devices continue to develop, we get better resolution on how memes are physically represented in the brain. Neuroscientists at UCLA found that pictures of Halle Berry, images of her as Catwoman, and the letter sequence spelling H-A-L-L-E B-E-R-R-Y all activated a single neuron in the brain. These multi-modal neurons respond to clusters of abstract concepts centered around a common high-level theme: a near definition of a meme.
As computing power increases exponentially in line with Moore’s law, the fast-developing field of artificial intelligence promises to teach us more about how our own brains work. There is evidence of multi-modal neurons similar to the ‘Halle Berry’ neuron forming in artificial neural networks like OpenAI’s CLIP. One example is the ‘Spider-Man’ neuron, which responds to photos, drawings, and text related to Spider-Man. OpenAI’s researchers found scores of other interpretable neurons, relating to concepts as diverse as ‘winter’, ‘Roman art’ and ‘USA’.
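The idea of a multi-modal neuron can be sketched in a few lines of code. This is a toy illustration, not real CLIP: the hand-made vectors below simply assume that photos, drawings, and text all land in one shared embedding space, and that a ‘neuron’ is a direction in that space whose activation is cosine similarity.

```python
import numpy as np

# Toy sketch of a multi-modal "concept neuron". These vectors are
# hand-made illustrations, not real CLIP features: the only assumption
# is that every modality maps into the same shared embedding space.
stimuli = {
    "photo of Spider-Man":   np.array([0.90, 0.10, 0.00]),
    "drawing of Spider-Man": np.array([0.80, 0.20, 0.10]),
    "text 'Spider-Man'":     np.array([0.85, 0.05, 0.10]),
    "photo of a sunset":     np.array([0.00, 0.20, 0.90]),
}

# The neuron is modelled as a direction in that space; its activation
# is the cosine similarity between a stimulus embedding and the direction.
spider_neuron = np.array([1.0, 0.1, 0.0])

def activation(vec, neuron):
    return float(vec @ neuron / (np.linalg.norm(vec) * np.linalg.norm(neuron)))

for name, vec in stimuli.items():
    print(f"{name}: {activation(vec, spider_neuron):+.2f}")
```

All three Spider-Man stimuli activate the neuron strongly regardless of modality, while the sunset barely registers; that cross-modal invariance is the signature behaviour the CLIP work describes.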
If we can probe real and artificial brains to see which neurons fire when presented with specific memes, we can reverse engineer the process. If the same neuron fires when presented with some other stimulus, that’s a strong hint the two are related, and we can discover surprising associations we wouldn’t have found without memetic theory. If you can classify something, you’re halfway to generating it. Neural networks that can recognize Spider-Man memes can also create them: not just replicas but novel additions that haven’t been seen before. Imagine the ability to spot new trends and instantly contribute to them, at the touch of a button.
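The probing loop above can be sketched in the same toy setting: present known exemplars of a concept, find the unit they drive hardest, then treat that unit’s response to a novel stimulus as a relatedness score. Everything here is illustrative, with made-up vectors standing in for a real network’s activations.

```python
import numpy as np

# A toy "layer" of four hypothetical neurons, each a fixed direction in
# a shared embedding space (the first happens to encode our concept).
neurons = np.array([
    [1.0, 0.1, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
])

def layer_activations(vec):
    """Cosine similarity of one stimulus embedding with every neuron."""
    return neurons @ vec / (np.linalg.norm(neurons, axis=1) * np.linalg.norm(vec))

# Step 1: present known exemplars of a meme and find the unit they drive.
exemplars = [np.array([1.0, 0.2, 0.0, 0.1]),
             np.array([0.9, 0.1, 0.1, 0.0])]
mean_act = np.mean([layer_activations(v) for v in exemplars], axis=0)
concept_unit = int(np.argmax(mean_act))

# Step 2: score novel stimuli on that unit; a high activation hints
# that the stimulus is related to the same concept.
related   = np.array([0.95, 0.15, 0.05, 0.05])
unrelated = np.array([0.00, 0.00, 1.00, 0.50])
print(layer_activations(related)[concept_unit])    # high
print(layer_activations(unrelated)[concept_unit])  # near zero
```

The same two-step probe works, in principle, whether the activations come from an electrode in a temporal lobe or from a layer of an artificial network; only the recording method differs.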