Artificial Intelligence That Uses Less Energy By Mimicking The Human Brain
Texas A&M University engineers work to create “Super-Turing AI,” which operates more efficiently by learning on the fly.

The Texas A&M University Department of Electrical and Computer Engineering is at the forefront of advancing machine learning theory and applications.
Artificial Intelligence (AI) can perform complex calculations and analyze data faster than any human, but to do so requires enormous amounts of energy. The human brain is also an incredibly powerful computer, yet it consumes very little energy.
As technology companies race to scale up AI, a new approach to AI’s “thinking,” developed by researchers including Texas A&M University engineers, mimics the human brain and has the potential to revolutionize the AI industry.
Dr. Suin Yi, assistant professor of electrical and computer engineering at Texas A&M’s College of Engineering, is on a team of researchers that developed “Super-Turing AI,” which operates more like the human brain. This new AI integrates learning and memory instead of separating them and then migrating huge amounts of data between them, as current systems do.
The “Turing” in the system’s name refers to AI pioneer Alan Turing, whose theoretical work during the mid-20th century has become the backbone of computing, AI and cryptography. Today, the highest honor in computer science is called the Turing Award.
The team published its findings in Science Advances.
The Energy Crisis In AI

Data center adjacent to single-family homes in Stone Ridge, Virginia.
Today’s AI systems, including large language models such as OpenAI’s ChatGPT, require immense computing power and are housed in expansive data centers that consume vast amounts of electricity.
“These data centers are consuming power in gigawatts, whereas our brain consumes 20 watts,” Yi explained. “That’s a billion watts compared to just 20. Data centers consuming this much energy are not sustainable with current computing methods. So while AI’s abilities are remarkable, hardware and power generation that can sustain it are still needed.”
The substantial energy demands not only escalate operational costs but also raise environmental concerns, given the carbon footprint associated with large-scale data centers. As AI becomes more integrated, addressing its sustainability becomes increasingly critical.
Emulating The Brain
Yi and team believe the key to solving this problem lies in nature — specifically, the human brain’s neural processes.
In the brain, the functions of learning and memory are not separated; they are integrated. Learning and memory rely on connections between neurons, called “synapses,” where signals are transmitted. Learning strengthens or weakens synaptic connections through a process called “synaptic plasticity,” forming new circuits and altering existing ones to store and retrieve information.

In the brain, learning and memory rely on connections between neurons, called “synapses,” where signals are transmitted.
By contrast, in current computing systems, training (how the AI is taught) and memory (data storage) happen in two separate places within the computer hardware. Super-Turing AI is revolutionary because it bridges this efficiency gap, so the computer doesn’t have to migrate enormous amounts of data from one part of its hardware to another.
“Traditional AI models rely heavily on backpropagation — a method used to adjust neural networks during training,” Yi said. “While effective, backpropagation is not biologically plausible and is computationally intensive.
“What we did in that paper is troubleshoot the biological implausibility present in prevailing machine learning algorithms,” he said. “Our team explores mechanisms like Hebbian learning and spike-timing-dependent plasticity — processes that help neurons strengthen connections in a way that mimics how real brains learn.”
Hebbian learning principles are often summarized as “cells that fire together, wire together.” This approach aligns more closely with how neurons in the brain strengthen their connections based on activity patterns. By integrating such biologically inspired mechanisms, the team aims to develop AI systems that require less computational power without compromising performance.
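As a rough sketch of the general idea (not the team’s published circuit or algorithm), a Hebbian update can be written as a purely local rule: each synaptic weight grows in proportion to the joint activity of the two neurons it connects. The short Python example below is illustrative only, with hypothetical variable names and values:

```python
import numpy as np

# Minimal Hebbian-style weight update (illustrative sketch only).
# "Cells that fire together, wire together": a synapse is strengthened
# in proportion to the joint activity of its pre- and post-synaptic neurons.
# All names and values here are hypothetical, not taken from the published work.

rng = np.random.default_rng(0)

n_pre, n_post = 4, 3
weights = rng.normal(scale=0.1, size=(n_post, n_pre))  # synaptic weights
learning_rate = 0.01

def hebbian_step(weights, pre_activity, post_activity, lr=learning_rate):
    """Strengthen each synapse by (post activity) x (pre activity).

    Unlike backpropagation, the update is purely local: it uses only the
    activities of the two neurons a synapse connects, with no global
    error signal propagated backward through the network.
    """
    return weights + lr * np.outer(post_activity, pre_activity)

# One toy update: a pattern of presynaptic spikes drives the postsynaptic layer.
pre = np.array([1.0, 0.0, 1.0, 0.0])       # which input neurons fired
post = weights @ pre                        # resulting postsynaptic activity
weights = hebbian_step(weights, pre, post)  # co-active pairs get stronger
```

In contrast to backpropagation, no error gradient has to be computed and shipped back through the network; the update relies only on information already present at the synapse, which is part of what makes such local rules attractive for low-power hardware.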
In a test, a circuit using these components helped a drone navigate a complex environment — without prior training — learning and adapting on the fly. This approach was faster, more efficient and used less energy than traditional AI.
Why This Matters For The Future Of AI
This research could be a game-changer for the AI industry. Companies are racing to build larger and more powerful AI models, but their ability to scale is limited by hardware and energy constraints. In some cases, new AI applications require building entire new data centers, further increasing environmental and economic costs.
Yi emphasizes that innovation in hardware is just as crucial as advancements in AI systems themselves. “Many people say AI is just a software thing, but without computing hardware, AI cannot exist,” he said.

Looking Ahead: Sustainable AI Development
Super-Turing AI represents a pivotal step toward sustainable AI development. By reimagining AI architectures to mirror the efficiency of the human brain, the industry can address both economic and environmental challenges.
Yi and his team hope that their research will lead to a new generation of AI that is both smarter and more efficient.
“Modern AI like ChatGPT is awesome, but it’s too expensive. We’re going to make sustainable AI,” Yi said. “Super-Turing AI could reshape how AI is built and used, ensuring that as it continues to advance, it does so in a way that benefits both people and the planet.”
Credits
HfZrO-based synaptic resistor circuit for a Super-Turing intelligent system
Jungmin Lee, University of California-Los Angeles
Rahul Shenoy, University of California-Los Angeles
Atharva Deo, University of California-Los Angeles
Suin Yi, Texas A&M University
Dawei Gao, University of California-Los Angeles
David Qiao, University of California-Los Angeles
Mingjie Xu, University of California-Irvine
Shiva Asapu, University of Massachusetts-Amherst
Zixuan Rong, University of California-Los Angeles
Dhurva Nathan, University of California-Los Angeles
Yong Hei, University of California-Los Angeles
Dharma Paladugu, Texas A&M University
Jian-Guo Zheng, University of California-Irvine
J. Joshua Yang, University of California-Irvine
R. Stanley Williams, Texas A&M University
Qing Wu, Air Force Research Lab, Information Directorate
Yong Chen, University of California-Los Angeles
This research was funded by the National Science Foundation and the Air Force Office of Scientific Research.