Learning information is different from communicating understanding
Elon Musk’s Neuralink vision is high-speed multi-port access to the stored understanding in one mind (via an array of electrodes on the cortex), plus a mind-mapping translation engine which can copy the structure of “understanding” in that brain and then write that “understanding” into a destination brain via its own array of electrodes, thereby transmitting a specific understanding. Elon Musk believes that this will be much faster than communicating via language. I believe that he is mistaken.
Learning in its fullest sense remains a huge mystery. The process of understanding something rarely leaves us with any insight into how we achieved it. It is not obvious how we will gain better insight into the process of learning and understanding, whether through neuroscience or through psychology; currently, neither seems close to revealing how we do it.
Even the best teachers lack a scientific description of their method. Education which goes beyond mere information is rather like a mysterious religious ritual or a cargo cult: we are familiar with many of the useful parts of the ritual we surround the learner with, but not much more. I suspect that in order to find faster ways of communicating “understanding”, we must first gain a scientific understanding of what “understanding” is.
I have always been fascinated by the paradox of how difficult I found an idea before I grasped it, and how, once I understood it, I could no longer remember what the problem had been. It is conceivable that “understanding” is an intrinsically slow process, limited by our cerebral hardware, and that our frustration comes from the idea that it should be simple because it seems simple in retrospect. Alternatively, we may have evolved to delete all memory of the process each time we acquire an insight, deeming that wasteful data to store. Our retrospective sense of the simplicity of an “understanding” may be a total falsehood.
Although I don’t claim our brain is a computer, I like the idea that we run an internal simulation of “what’s out there” based on our previous experience. From my memories of fibre-system simulators, one would expect a more comprehensive simulation (a bigger world-view) to run more slowly than a simpler one, and that is what we find when comparing the learning speed of humans with that of apes. The endorphin boost I get from understanding something leads me to believe that we have evolved to receive a reward when we compress data into insights, since insights (understandings) are our most powerful predictive tools. The more elegant the simplification to a concept, the bigger the thrill: a reward for minimising the memory requirement while maximising the power of the predictive tool.
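As a loose illustration of that compression idea (my own sketch, not anything from neuroscience or from Neuralink), consider the difference between memorising a thousand observations and holding the one small rule that generates them. The rule costs almost no memory, reproduces everything memorised, and, unlike rote storage, predicts cases never seen before. The data and the rule here are invented for the example.

```python
# Illustrative analogy only: "understanding" as compression of data into a
# small predictive rule (hypothetical data, invented for this sketch).

# Rote memory: a thousand stored observation pairs.
raw_observations = [(x, 2 * x + 1) for x in range(1000)]

def insight(x):
    """The compressed 'understanding': a tiny rule replacing all stored pairs."""
    return 2 * x + 1

# The rule reproduces every memorised observation...
assert all(insight(x) == y for x, y in raw_observations)

# ...and, unlike the stored list, predicts inputs never encountered before.
print(insight(5000))  # -> 10001
```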
We know that enabling AI systems to develop insights is proving very difficult. When we humans recognise an object in 3D space, we are drawing on a great deal of previous experience to predict the most likely solution. I think that our own illusion of having a rich experience of the present moment deceives us into expecting an AI system to achieve the same in real time. It is possible that even an infinitely fast processor would be intrinsically unable to solve some recognition problems in real time. Yet the evidence (presented in Bottleneck) suggests that for humans, real time is completely insufficient for the task; it is years of experience which enable us to predict the present, the future and the past in such accurate detail. So progress in AI might be greater if we understood more about how our brains continually evolve a more comprehensive world-view, based on understanding and metaphor.
Note: this post was prompted by useful comments on my previous article on Medium: Elon Musk’s Neuralink: Great news for Humans, disappointing for Superhumans