Today’s artificial intelligence (AI) reads, talks, and processes vast amounts of data, aiding in business decision-making and more. Yet for all these impressive capabilities, AI still falls far short of the brain’s efficiency and flexibility.
Modern AI systems like ChatGPT interact with the physical world only in limited ways, and they require billions of training examples before they can perform tasks such as solving math problems or writing essays.
Cold Spring Harbor Laboratory (CSHL) NeuroAI Scholar Kyle Daruwalla has been exploring unconventional methods to overcome these computational obstacles.
Daruwalla identified data movement as a major driver of modern computing’s high energy consumption. In artificial neural networks, data frequently travels long distances across hardware, consuming substantial amounts of energy.
This inefficiency motivated Daruwalla to seek inspiration from one of the most efficient computational systems known: the human brain.
Unlike artificial systems, the human brain processes information with remarkable energy efficiency. Its neurons and synapses manage data locally, reducing the need for extensive data transfer. By studying these biological processes, Daruwalla developed a new method for AI algorithms to move and process data more efficiently.
His approach mimics the brain’s strategy, allowing individual AI “neurons” to receive feedback and adjust in real-time. This innovation not only addresses the issue of energy consumption but also enhances the responsiveness and learning capabilities of AI systems.
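The contrast with standard training can be illustrated with a toy sketch. In conventional backpropagation, an error signal computed at the output must travel back through every layer before any weight changes; in a local rule, each neuron updates its own weights using only signals already at hand (its input and its output). The sketch below uses a classic Hebbian rule with Oja-style normalization as a stand-in for such a local rule; it is not the actual learning rule from Daruwalla’s study, and the class and parameter names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

class LocalHebbianLayer:
    """Toy layer where each neuron learns from purely local signals.

    No error signal travels back through the network: the update for
    each weight uses only the presynaptic input and the neuron's own
    output, which is what makes the rule "local".
    """

    def __init__(self, n_in, n_out, lr=0.01):
        self.W = rng.normal(0.0, 0.1, size=(n_out, n_in))
        self.lr = lr

    def forward(self, x):
        self.x = x
        self.y = np.tanh(self.W @ x)
        return self.y

    def local_update(self):
        # Oja's rule: dW = lr * (y x^T - y^2 * W).
        # The decay term keeps the weights bounded over time.
        y = self.y[:, None]
        self.W += self.lr * (y * self.x[None, :] - (y ** 2) * self.W)

layer = LocalHebbianLayer(n_in=4, n_out=3)
for _ in range(100):
    x = rng.normal(size=4)
    layer.forward(x)
    layer.local_update()  # adjust in real time, no backward pass needed
```

Because every update depends only on values already sitting at the neuron, the data never has to make the long round trip that dominates the energy budget of backpropagation-style training.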
“In our brains, our connections are changing and adjusting all the time,” noted Daruwalla. “It’s not like you pause everything, adjust, and then resume being you.”
This machine-learning model also lends support to a previously unproven theory that links working memory with learning and academic performance.
Working memory, the cognitive system enabling us to stay on task while recalling stored knowledge, plays a crucial role in learning.
“There have been theories in neuroscience about how working memory circuits could facilitate learning. But there wasn’t a concrete rule tying these concepts together until now. Adjusting each synapse individually required working memory alongside it,” said Daruwalla.
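One way to picture a working-memory circuit sitting alongside each synapse is with a per-synapse eligibility trace: a short-lived record of recent pre- and postsynaptic coactivity that drives the weight change, rather than the instantaneous activity alone. The sketch below is a hypothetical illustration of that general idea, not the concrete rule derived in the study; the decay constant and the random modulatory signal are assumptions made for the demo.

```python
import numpy as np

rng = np.random.default_rng(1)

n_in, n_out = 4, 2
W = rng.normal(0.0, 0.1, size=(n_out, n_in))

# Each synapse keeps its own decaying "memory" of recent coactivity --
# a crude stand-in for a working-memory circuit paired with that synapse.
trace = np.zeros_like(W)
decay, lr = 0.9, 0.01

for step in range(200):
    x = rng.normal(size=n_in)
    y = np.tanh(W @ x)

    # Working-memory step: fold the current pre/post coactivity into
    # the trace, letting older activity fade away gradually.
    trace = decay * trace + np.outer(y, x)

    # Learning step: the weight change is gated by the remembered
    # trace. (A real rule would use a meaningful modulatory signal;
    # here it is just random noise for illustration.)
    modulator = rng.normal()
    W += lr * modulator * trace
```

The point of the sketch is structural: the synaptic update cannot be written down without the trace, so under this kind of rule a memory of recent activity is not an optional add-on but a required ingredient of learning.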
This design could usher in a new generation of AI that learns more like humans do, making the technology more efficient and accessible.
This development represents a significant step forward for neuroAI, a field where neuroscience has long informed artificial intelligence methods like machine learning. Soon, it seems, AI may return the favor.
The new brain-inspired AI design offers several advantages. One major benefit is its increased energy efficiency, reducing operational costs and environmental impact. This makes AI applications more sustainable and accessible.
The real-time adjustment capability enhances responsiveness, allowing AI systems to adapt quickly to new information and changing conditions, which is crucial for dynamic environments like autonomous vehicles or smart cities.
This approach also improves the scalability of AI systems, as efficient data processing reduces the need for extensive computational resources.
Furthermore, the model’s ability to mimic human learning processes can lead to more intuitive and user-friendly AI interfaces, improving user experience and adoption rates.
Enhanced learning capabilities enable AI to perform complex tasks with greater accuracy, which is particularly beneficial in fields requiring high precision, such as healthcare and finance.
Overall, these advantages position the new AI design as a transformative technology with broad applicability and significant impact.
The new brain-inspired AI design has a range of potential applications across various fields. In healthcare, it can enhance diagnostic tools, enabling more accurate and efficient data analysis for medical imaging and patient records.
In education, AI systems can provide personalized learning experiences, adapting in real-time to students’ needs and improving academic outcomes.
In robotics, the improved energy efficiency and real-time adjustments can lead to more autonomous and adaptive robots, capable of performing complex tasks with minimal human intervention.
The technology can also benefit environmental monitoring by processing vast amounts of ecological data more efficiently, leading to better insights and quicker responses to environmental changes.
Additionally, in finance, this AI model could improve algorithmic trading and risk assessment by quickly adapting to market fluctuations. Overall, this approach opens up new possibilities for smarter, more responsive, and energy-efficient applications across diverse industries.
The study is published in the journal Frontiers in Computational Neuroscience.