These 800,000 Neurons in a Dish Learned to Play Pong in Just Five Minutes

Scientists just taught hundreds of thousands of neurons in a dish to play Pong. Using a series of strategically timed and placed electrical zaps, the neurons not only learned the game in a virtual environment, but played better over time, with longer rallies and fewer misses, showing a level of goal-directed adaptation not demonstrated before in lab-grown neurons.

Why is that a big deal? Picture taking a chunk of brain tissue, digesting it down to individual neurons and other brain cells, dumping them (gently) onto a plate, and then teaching them, outside a living host, to respond and adapt to a new task using electrical zaps alone.

It’s not just fun and games. The biological neural network joins its artificial cousin, DeepMind’s deep learning algorithms, in a growing pantheon of attempts at deconstructing, reconstructing, and one day mastering a sort of general “intelligence” based on the human brain.

The brainchild of Australian company Cortical Labs, the entire setup, dubbed DishBrain, is the “first real-time synthetic biological intelligence platform,” according to the authors of a paper published this month in Neuron. The setup, smaller than a dessert plate, is extremely sleek. It hooks up isolated neurons to chips that can both record the cells’ electrical activity and deliver precise zaps to alter it. Similar to brain-machine interfaces, the chips are controlled by sophisticated computer programs, without any human input.

The chips act as a bridge for neurons to link to a virtual world. As a translator for neural activity, they can unite biological electrical data with silicon bits, allowing neurons to respond to a digital game world.

DishBrain is set up to expand to further games and tests. Because the neurons can sense and adapt to the environment and output their results to a computer, they could be used as part of drug screening tests. They could also help neuroscientists better decipher how the brain organizes its activity and learns, and inspire new machine learning methods.

But the ultimate goal, explained Dr. Brett Kagan, chief scientific officer at Cortical Labs, is to help harness the inherent intelligence of living neurons for their superior computing power and low energy consumption. In other words, compared to neuromorphic hardware that mimics neural computation, why not just use the real thing?

“Theoretically, generalized SBI [synthetic biological intelligence] may arrive before artificial general intelligence (AGI) due to the inherent efficiency and evolutionary advantage of biological systems,” the authors wrote in their paper.

Meet DishBrain

The DishBrain project started with a simple idea: neurons are incredibly intelligent and adaptable computing machines. Recent studies suggest that each neuron is a supercomputer in itself, with branches once thought passive acting as independent mini-computers. Like people within a community, neurons also have an inherent ability to hook up to diverse neural networks, which dynamically shifts with their environment.

This level of parallel, low-energy computation has long been the inspiration for neuromorphic chips and machine learning algorithms that mimic the natural abilities of the brain. While both have made strides, neither has been able to recreate the complexity of a biological neural network.

“From worms to flies to humans, neurons are the starting block for generalized intelligence. So the question was, can we interact with neurons in a way to harness that inherent intelligence?” said Kagan.

Enter DishBrain. Despite its name, the plated neurons and other brain cells are nowhere near an actual brain, and they have no consciousness. As for “intelligence,” the authors define it as the ability to gather information, collate the data, and adjust firing activity (that is, how neurons process the data) in a way that helps adapt towards a goal; for example, rapidly learning to grab a piping hot pan by its handle rather than its searing sides.

The setup starts, true to its name, with a dish. The bottom of each one is covered with a computer chip, a high-density multi-electrode array (HD-MEA), that can both record the cells’ electrical signals and stimulate them with precise zaps. Cells, either isolated from the cortex of mouse embryos or derived from human stem cells, are then laid on top. The dish is bathed in a nutritious fluid for the neurons to grow and thrive. As they mature, they grow from jiggly blobs into spindly shapes with vast networks of sinuous, interweaving branches.

Within two weeks, the neurons from mice self-organized into networks inside their tiny homes, bursting with spontaneous activity. Neurons of human origin, reprogrammed from skin or other body cells, took a bit longer, establishing networks in roughly a month or two.

Then came the training. Each chip was controlled by commercially available software, linking it to a computer interface. Stimulating neurons through the system is similar to providing sensory data, like the signals coming from your eyes as you track a moving ball. Recording the neurons’ activity captures the output: the equivalent, if the cells were inside a body, of moving your hand to hit the ball. DishBrain was designed so that the two parts integrated in real time: much like humans playing Pong, the neurons could in theory learn from past misses and adapt their behavior to hit the virtual “ball.”

Ready Player DishBrain

Here’s how Pong goes. A ball bounces rapidly across the screen, and the player can slide a tiny vertical paddle—which looks like a bold line—up and down. Here, the “ball” is represented by electrical zaps based on its location on the screen. This essentially translates visual information into electrical data for the biological neural network to process.

The authors then defined distinct regions of the chip for “sensory” and “motor” functions. One region, for example, captures incoming data from the virtual ball’s movement. One part of the “motor region” moves the virtual paddle up, while another moves it down. These assignments were arbitrary, the authors explained, meaning that the neurons within needed to adjust their firing patterns to excel at a match.
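As a toy illustration (this is not Cortical Labs’ actual software, and every name here is invented), the region-to-action mapping can be thought of as counting spikes in two arbitrarily assigned motor regions and moving the paddle toward whichever fires more:

```python
import random

# Hypothetical layout: electrode indices arbitrarily assigned to regions,
# mirroring how DishBrain's "motor" areas were defined by convention,
# not by any intrinsic property of the neurons underneath.
MOTOR_UP = range(0, 16)     # spikes here vote to move the paddle up
MOTOR_DOWN = range(16, 32)  # spikes here vote to move the paddle down

def decode_paddle_move(spike_counts):
    """Turn per-electrode spike counts into a paddle command.

    Returns +1 (up), -1 (down), or 0 (hold) by majority vote
    between the two motor regions.
    """
    up = sum(spike_counts[i] for i in MOTOR_UP)
    down = sum(spike_counts[i] for i in MOTOR_DOWN)
    if up > down:
        return +1
    if down > up:
        return -1
    return 0

# Example: random activity across all 32 hypothetical electrodes.
counts = [random.randint(0, 5) for _ in range(32)]
move = decode_paddle_move(counts)
```

Because the regions are arbitrary, the networks have to reshape their own activity until firing in the “right” region reliably steers the paddle, which is exactly what makes the improvement count as learning.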

So how do they learn? If the neurons “hit” the ball (that is, produced the corresponding electrical activity), the team zapped them at the same location and frequency each time, a bit like establishing a “habit” for the neurons. If they missed the ball, they were instead zapped with unpredictable electrical noise that disrupted the neural network.
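The feedback rule reads like a tiny closed loop. Here is a hedged sketch of that logic, with invented function names (`zap_at`, `noise_at`) and an assumed stimulation frequency standing in for the real hardware:

```python
import random

STIM_FREQ_HZ = 100   # assumed fixed frequency, chosen only for illustration
N_ELECTRODES = 32    # hypothetical electrode count for this sketch

def feedback(hit, zap_at, noise_at, location):
    """Apply DishBrain-style feedback after each rally.

    hit: whether the paddle intercepted the ball.
    zap_at / noise_at: callables standing in for the stimulation hardware.
    location: electrode site tied to the successful activity.
    """
    if hit:
        # Same place, same frequency every time: a predictable,
        # "habit-forming" signal the network can anticipate.
        zap_at(location, freq_hz=STIM_FREQ_HZ)
    else:
        # Bursts at random sites and frequencies: unpredictable
        # noise that disrupts the network's current patterns.
        for _ in range(5):
            noise_at(random.randrange(N_ELECTRODES),
                     freq_hz=random.uniform(10, 200))
```

The asymmetry is the point: a hit makes the neurons’ electrical world more predictable, while a miss makes it noisier, giving the network a reason to repeat what worked.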

The strategy is based on a learning theory called the free energy principle, explained Kagan. Roughly, it supposes that neurons hold “beliefs” about their surroundings and act to minimize surprise: when their predictions fail, they either update those “beliefs” or change their behavior to make their inputs more predictable. Predictable zaps for hits and noisy zaps for misses thus nudge the network toward gameplay that keeps its electrical world predictable.

The theory panned out. In just five minutes, both human and mouse neurons rapidly improved their gameplay: longer rallies, fewer aces (rounds where the ball got past the paddle without a single hit), and more long games with over three consecutive hits. Surprisingly, mouse neurons learned faster, though they were eventually outperformed by human ones.

The stimulations were critical for learning. In control experiments where DishBrain received no electrical feedback, the networks performed far worse.

Game On

The study is a proof of concept that neurons in a dish can be a sophisticated learning machine, and even exhibit signs of sentience and intelligence, said Kagan. That’s not to say they’re conscious—rather, they have the ability to adapt to a goal when “embodied” into a virtual environment.

Cortical Labs isn’t the first to test the boundaries of the data processing power of isolated neurons. Back in 2008, Dr. Steve Potter at the Georgia Institute of Technology and team found that with even just a few dozen electrodes, they could stimulate rat neurons to exhibit signs of learning in a dish.

DishBrain has a leg up with thousands of electrodes compacted in each setup, and the company hopes to tap into its biological power to aid drug development. The system, or its future derivations, could potentially act as a micro-brain surrogate for testing neurological drugs, or gaining insights into the neurocomputation powers of different species or brain regions.

But the long-term vision is a “living” bio-silicon hybrid computer. “Integrating neurons into digital systems may enable performance infeasible with silicon alone,” the authors wrote. Kagan imagines developing “biological processing units” that weave together the best of both worlds for more efficient computation, and in the process shed light on the inner workings of our own minds.

“This is the start of a new frontier in understanding intelligence,” said Kagan. “It touches on the fundamental aspects of not only what it means to be human, but what it means to be alive and intelligent at all, to process information and be sentient in an ever-changing, dynamic world.”

Image Credit: Cortical Labs
