AI can beat the human brain at chess, but not at memory, study reveals
Jan 18, 2021
Oslo [Norway], January 19: New research has shown that the brain's strategy for storing memories is more efficient than that of artificial intelligence (AI).
The new study, carried out by SISSA scientists in collaboration with the Kavli Institute for Systems Neuroscience & Centre for Neural Computation in Trondheim, Norway, has been published in Physical Review Letters.
In recent decades, artificial intelligence has proved remarkably good at achieving exceptional results in several fields. Chess is one of them: in 1996, the computer Deep Blue beat a human player, chess champion Garry Kasparov, for the first time.
Neural networks, real or artificial, learn by tweaking the connections between neurons. As connections are strengthened or weakened, some neurons become more active and others less so, until a pattern of activity emerges. This pattern is what we call "a memory". The AI strategy is to use complex, lengthy algorithms that iteratively tune and optimize the connections.
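To make that contrast concrete, here is a minimal sketch in Python of the iterative, "AI-style" approach: an error-driven, perceptron-like rule that keeps adjusting recurrent connections until every stored pattern is stable. It is only an illustration of the general idea, not the algorithm analysed in the study; the network size, number of patterns and learning rate are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 100, 20                      # neurons, patterns to store (illustrative sizes)
patterns = rng.choice([-1, 1], size=(P, N))

W = np.zeros((N, N))                # recurrent connection weights
eta = 0.1                           # learning rate

for _ in range(200):                # iterate until every pattern is a fixed point
    errors = 0
    for x in patterns:
        h = W @ x                   # input each neuron receives
        wrong = np.sign(h) != x     # neurons whose output disagrees with the pattern
        errors += int(wrong.sum())
        # nudge the incoming weights of each wrong neuron toward the target
        W[wrong] += eta * np.outer(x[wrong], x)
    np.fill_diagonal(W, 0.0)        # no self-connections
    if errors == 0:
        break

print("unstable bits after iterative training:", errors)
```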
The brain does this in a much simpler way: each connection between neurons changes based only on how active the two neurons are at the same time. Compared with the AI algorithms, this rule had long been thought to permit the storage of fewer memories. But this received wisdom about memory capacity and retrieval is largely based on analysing networks under a fundamental simplification: that neurons can be treated as binary units.
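The brain-like rule can be sketched just as briefly: a one-shot, Hebbian prescription in which each connection is set purely by the joint activity of its two neurons, shown here in the classic binary setting on which that received wisdom is based. Again, this is a schematic illustration rather than the study's model, with arbitrary sizes and noise levels.

```python
import numpy as np

rng = np.random.default_rng(1)
N, P = 100, 10
patterns = rng.choice([-1, 1], size=(P, N))

# one-shot Hebbian storage: a single outer-product update per pattern, no optimization
W = sum(np.outer(x, x) for x in patterns) / N
np.fill_diagonal(W, 0.0)

# retrieval: start from a corrupted cue and let the network settle
state = patterns[0].copy()
flipped = rng.choice(N, size=10, replace=False)
state[flipped] *= -1                 # corrupt 10% of the bits

for _ in range(20):                  # synchronous updates
    state = np.sign(W @ state)

overlap = (state @ patterns[0]) / N  # 1.0 means perfect recall of the stored pattern
print("overlap with the stored memory:", overlap)
```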
The new research, however, shows otherwise: the smaller number of memories stored using the brain strategy is a consequence of this unrealistic assumption. When the simple strategy the brain uses to change its connections is combined with biologically plausible models of single-neuron responses, it performs as well as, or even better than, AI algorithms. How could this be the case?
Paradoxically, the answer lies in introducing errors: a memory, when effectively retrieved, can be identical to the original input-to-be-memorized or merely correlated with it. The brain strategy leads to the retrieval of memories that are not identical to the original input, because it silences the activity of those neurons that are only barely active in each pattern.
Those silenced neurons do not play a crucial role in distinguishing among the different memories stored within the same network. By ignoring them, the network can focus its neural resources on the neurons that do matter in an input-to-be-memorized, enabling a higher capacity.
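A rough sketch of this idea, using assumed sparse, graded activity patterns and a simple thresholded read-out rather than the study's exact model and parameters, shows a retrieved state that silences the weakly active neurons and is correlated with, but not identical to, the stored input.

```python
import numpy as np

rng = np.random.default_rng(3)
N, P = 500, 20
# sparse, graded patterns: roughly 20% of neurons active, with variable activity levels
active = rng.random((P, N)) < 0.2
patterns = np.where(active, rng.exponential(1.0, size=(P, N)), 0.0)

m = patterns.mean()
W = np.zeros((N, N))
for x in patterns:                   # Hebbian covariance storage, one pass over the patterns
    W += np.outer(x - m, x - m)
W /= N
np.fill_diagonal(W, 0.0)

x0 = patterns[0]
drive = W @ x0                       # recurrent input when the network is cued with x0
theta = 0.05 * drive.max()           # threshold: weakly driven neurons are silenced
retrieved = np.maximum(drive - theta, 0.0)

corr = np.corrcoef(retrieved, x0)[0, 1]
silenced = int(np.sum((retrieved == 0) & (x0 > 0)))
print(f"identical to the input: {np.allclose(retrieved, x0)}")
print(f"correlation with the input: {corr:.2f}")
print(f"barely active neurons silenced: {silenced} of {int((x0 > 0).sum())}")
```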
Overall, this research highlights how biologically plausible self-organized learning procedures can be just as efficient as slow and neurally implausible training algorithms.