
Can a brain with a limited ability to perform precise computations compete with an AI system running on a fast parallel computer? Yes, in many tasks, as everyday experience demonstrates. Given this, can we build a more efficient AI based on the brain's design?
Although the brain's architecture is very shallow, the learning ability of brain-inspired artificial neural networks can outperform that of deep learning.
Traditionally, artificial intelligence derives from the dynamics of the human brain. However, brain learning is limited in several important respects compared with deep learning (DL). First, an efficient DL wiring structure (architecture) consists of dozens of feedforward (successive) layers, whereas brain dynamics consists of only a few feedforward layers. Second, DL architectures typically include many successive filter layers, which are essential for identifying one of the input classes. For example, if the input is a car, the first filter identifies wheels, the second identifies doors, the third identifies lights, and additional filters confirm that the object is indeed a car. By contrast, brain dynamics includes only a single filter stage, located close to the retina. The final required component is the mathematically complex DL training procedure, which is clearly far beyond biological realization.
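To make the contrast concrete, here is a minimal sketch, assuming PyTorch, an arbitrary 32×32 RGB input, and 10 output classes (all illustrative choices, not taken from the paper): a deep stack of successive filter layers versus a brain-like network with a single filter stage.

```python
# A minimal sketch (not the authors' code) contrasting the two wiring styles
# described above, assuming PyTorch and an arbitrary 32x32 RGB input.
import torch
import torch.nn as nn

# Deep-learning style: many successive filter (convolution) layers,
# each refining the evidence from the previous one (wheels -> doors -> lights ...).
deep_stack = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(64, 10),          # 10 output classes, chosen arbitrarily
)

# Brain-like style as described in the article: a single filter stage
# (analogous to processing near the retina) followed by a shallow readout.
shallow_net = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 10),
)

x = torch.randn(1, 3, 32, 32)                      # dummy image batch
print(deep_stack(x).shape, shallow_net(x).shape)   # both: torch.Size([1, 10])
```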

A scheme of a simple neural network based on dendrites (left) and a complex artificial intelligence deep learning architecture (right). Credit: Prof. Ido Kanter, Bar-Ilan University
Can a brain with its limited ability to perform precise mathematical operations compete with advanced artificial intelligence systems implemented on fast parallel computers? Our everyday experience suggests that for many tasks the answer is yes. Why is this, and given this affirmative answer, can we build a new type of efficient artificial intelligence inspired by the brain? In an article published on January 30 in the journal Scientific Reports, researchers from Prof. Ido Kanter's group at Bar-Ilan University address these questions.
“Highly pruned tree architectures represent a step toward a plausible biological realization of efficient dendritic tree learning by a single or several neurons, with reduced complexity and energy consumption, and a biological realization of the backpropagation mechanism, which is currently the central technique in AI,” added Yuval Meir, a PhD student and contributor to this research.
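As a rough illustration of what a "highly pruned tree" unit could look like, the hypothetical sketch below gives each dendritic branch its own small, disjoint slice of the input and lets a single integrating stage combine the branch outputs. The class name, layer sizes, and nonlinearity are assumptions made for illustration; this is not the architecture from the Scientific Reports paper.

```python
# A loose, hypothetical sketch of a pruned dendritic-tree unit: each branch
# sees only a disjoint slice of the input, and one integrating ("soma") stage
# combines the branches. Illustration only, not the authors' architecture.
import torch
import torch.nn as nn

class DendriticTreeUnit(nn.Module):
    def __init__(self, in_features: int = 64, branches: int = 8, classes: int = 10):
        super().__init__()
        assert in_features % branches == 0
        self.slice = in_features // branches   # inputs per branch (no overlap)
        # One small weight vector per branch: heavily pruned compared with a
        # fully connected layer over all in_features inputs.
        self.branches = nn.ModuleList(
            [nn.Linear(self.slice, 1) for _ in range(branches)]
        )
        self.soma = nn.Linear(branches, classes)  # single integrating stage

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        parts = x.split(self.slice, dim=1)        # disjoint input patches
        branch_out = torch.cat(
            [torch.tanh(b(p)) for b, p in zip(self.branches, parts)], dim=1
        )
        return self.soma(branch_out)

x = torch.randn(4, 64)                 # dummy batch of 4 inputs
print(DendriticTreeUnit()(x).shape)    # torch.Size([4, 10])
```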
Efficient dendritic tree learning builds on previous work by Kanter and his experimental research team, conducted by Dr. Roni Vardi, which provides evidence for sub-dendritic adaptation using neuronal cultures, together with other anisotropic properties of neurons, such as different spike waveforms, refractory periods, and maximal transmission rates.
Efficient implementation of highly pruned tree training requires a new type of hardware, different from the GPUs that are better suited to current DL strategies. Efficiently mimicking brain dynamics will require the emergence of new hardware.
Reference: “Learning on tree architectures outperforms a convolutional feedforward network,” 30 January 2023, Scientific Reports.
DOI: 10.1038/s41598-023-27986-6