11-21-2022

Artificial neural networks learn better when they “sleep”

Depending on their age, humans usually need from seven to 13 hours of sleep per day – a period when many things happen: heart rate, breathing, and metabolism ebb and flow, hormone levels readjust, and the body relaxes. The brain, however, is far more active during sleep than the rest of the body, repeating what we have learned during the day, reorganizing memories more efficiently, building rational memory – the ability to remember arbitrary or indirect associations between objects, people, and events – and protecting old memories from being forgotten.

In recent years, artificial neural networks built upon the architecture of the human brain have been used to improve a wide range of systems and technologies, from basic science to finance and social media. While in some ways these machines have achieved superhuman performance, such as extreme computational speed, they currently fail in one key respect: when they learn sequentially, new information overwrites previous information, a problem known as “catastrophic forgetting.”
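For a concrete picture of what catastrophic forgetting looks like, the toy sketch below (not from the study; the synthetic data, model size, and learning rate are all illustrative assumptions) trains a tiny classifier first on one task and then on a second, with no further access to the first task’s data. Accuracy on the original task typically collapses once the second task is learned.

```python
# Minimal sketch of catastrophic forgetting (not the study's model):
# a tiny classifier trained on Task A, then on Task B, forgets Task A.
import numpy as np

rng = np.random.default_rng(0)

def make_task(center):
    """Points drawn around `center`; the label depends on which side of `center` a point falls."""
    x = rng.normal(center, 1.0, size=(200, 2))
    y = (x[:, 0] > center).astype(float)
    return x, y

def train(w, b, x, y, epochs=200, lr=0.1):
    """Plain logistic-regression updates (gradient descent on cross-entropy)."""
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
        grad = x.T @ (p - y) / len(y)
        w -= lr * grad
        b -= lr * (p - y).mean()
    return w, b

def accuracy(w, b, x, y):
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
    return ((p > 0.5) == y).mean()

task_a = make_task(center=-3.0)   # Task A lives in one region of input space
task_b = make_task(center=+3.0)   # Task B lives in another

w, b = np.zeros(2), 0.0
w, b = train(w, b, *task_a)
print("Task A accuracy after learning A:", accuracy(w, b, *task_a))

w, b = train(w, b, *task_b)       # sequential training, no access to Task A data
print("Task A accuracy after learning B:", accuracy(w, b, *task_a))  # typically drops sharply
```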

“In contrast, the human brain learns continuously and incorporates new data into existing knowledge, and it typically learns best when new training is interleaved with periods of sleep for memory consolidation,” said study senior author Maxim Bazhenov, a professor of Medicine and sleep expert at the University of California, San Diego.

According to the researchers, biologically inspired models may help mitigate the danger of catastrophic forgetting in artificial neural networks, boosting their utility across a wide range of applications. To do this, Bazhenov and his team used spiking neural networks, which mimic natural neural systems: instead of information being communicated continuously, it is transmitted as discrete events – or spikes – at specific points in time.
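A common way to model this kind of spike-based signaling is the leaky integrate-and-fire neuron. The minimal sketch below is a generic textbook-style illustration of a single spiking unit, not the network used in the study; the time constant, threshold, and input current are arbitrary assumptions chosen for the demo.

```python
# Minimal leaky integrate-and-fire neuron: input current is integrated into a
# membrane potential, and a discrete spike is emitted when it crosses threshold.
import numpy as np

def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Return a spike train (1 at time steps where the neuron fires, 0 elsewhere)."""
    v = v_rest
    spikes = np.zeros_like(input_current)
    for t, i_in in enumerate(input_current):
        # leaky integration: potential decays toward rest and accumulates input
        v += dt * (-(v - v_rest) / tau + i_in)
        if v >= v_thresh:
            spikes[t] = 1.0   # discrete event at a specific time point
            v = v_reset       # reset after the spike
    return spikes

current = np.full(100, 0.08)            # constant drive over 100 time steps
spike_train = simulate_lif(current)
print("spike times:", np.nonzero(spike_train)[0])
```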

Their experiments revealed that when the spiking networks were trained on a new task with occasional offline periods mimicking sleep, catastrophic forgetting was mitigated. Thus, just like the human brain, “sleep” allowed the networks to replay old memories without explicitly using old training data.
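As a rough illustration of this interleaving idea, the sketch below alternates “awake” phases of supervised training with “sleep” phases in which the network receives only spontaneous random activity and is updated by a simple Hebbian-style rule. It is a drastically simplified stand-in for the authors’ spiking-network procedure: the network, the sleep rule, and every parameter are assumptions made for the example, and it only shows the training structure, not the study’s results.

```python
# Structural sketch: interleave supervised "awake" updates with unsupervised
# "sleep" updates driven by spontaneous activity (no task data used during sleep).
import numpy as np

rng = np.random.default_rng(1)
n_in, n_out = 20, 5
W = rng.normal(0.0, 0.1, size=(n_out, n_in))    # synaptic weights

def awake_step(W, x, target, lr=0.05):
    """Supervised update on the current task (simple delta rule)."""
    y = np.tanh(W @ x)
    error = target - y
    return W + lr * np.outer(error, x)

def sleep_step(W, lr=0.01, threshold=0.5):
    """Unsupervised 'sleep' update: drive the network with spontaneous random
    input and strengthen weights between co-active units (a Hebbian-style rule
    standing in for replay-driven plasticity)."""
    x = (rng.random(n_in) < 0.2).astype(float)   # spontaneous input activity
    y = (W @ x > threshold).astype(float)        # thresholded output activity
    W = W + lr * np.outer(y, x)                  # Hebbian strengthening
    return W * 0.999                             # mild decay keeps weights bounded

# Interleave: a batch of awake training on new data, then a batch of sleep.
for epoch in range(10):
    for _ in range(50):                          # awake phase
        x = rng.random(n_in)
        target = np.tanh(rng.random(n_out))      # stand-in task targets
        W = awake_step(W, x, target)
    for _ in range(50):                          # sleep phase: no task data used
        W = sleep_step(W)

print("trained weight matrix shape:", W.shape)
```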

“When we learn new information, neurons fire in specific order and this increases synapses between them. During sleep, the spiking patterns learned during our awake state are repeated spontaneously. It’s called reactivation or replay,” Bazhenov explained.

“Synaptic plasticity, the capacity to be altered or molded, is still in place during sleep and it can further enhance synaptic weight patterns that represent the memory, helping to prevent forgetting or to enable transfer of knowledge from old to new tasks.”
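The ordered firing and synaptic strengthening Bazhenov describes is often modeled with a pairwise spike-timing-dependent plasticity (STDP) rule: a synapse strengthens when the presynaptic neuron fires shortly before the postsynaptic one, and weakens for the reverse order. The sketch below implements that generic rule; the time constants and learning rates are textbook-style assumptions, not values from the study.

```python
# Generic pairwise STDP rule: weight change depends on the relative timing of
# presynaptic and postsynaptic spikes.
import numpy as np

def stdp_update(pre_spike_times, post_spike_times, w,
                a_plus=0.01, a_minus=0.012, tau=20.0):
    """Accumulate weight changes over all pre/post spike pairs."""
    dw = 0.0
    for t_pre in pre_spike_times:
        for t_post in post_spike_times:
            dt = t_post - t_pre
            if dt > 0:      # pre fires before post -> potentiation
                dw += a_plus * np.exp(-dt / tau)
            elif dt < 0:    # post fires before pre -> depression
                dw -= a_minus * np.exp(dt / tau)
    return w + dw

# Example: a presynaptic neuron that tends to fire just before the postsynaptic one.
w = 0.5
w = stdp_update(pre_spike_times=[10, 40, 70], post_spike_times=[15, 45, 75], w=w)
print("updated weight:", w)
```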

When the scientists applied this approach to artificial neural networks, they found that it played a significant role in helping the networks avoid catastrophic forgetting. “It meant that these networks could learn continuously, like humans or animals. Understanding how the human brain processes information during sleep can help to augment memory in human subjects. Augmenting sleep rhythms can lead to better memory.”

In future projects, the researchers aim to use computer models to design optimal strategies for applying stimulation during sleep, such as auditory tones that enhance sleep rhythms and improve learning. This could prove especially important when memory is suboptimal, such as when it declines with aging or in conditions like Alzheimer’s disease.

The study is published in the journal PLOS Computational Biology.

By Andrei Ionescu, Earth.com Staff Writer
