Our human interface with reality


Elon Musk thinks that accessing the brain directly will enable us to interact faster with AI. He is making a fundamental error.

Speaking about AI at the Code Conference 2016, he said that we are “constrained by input output”, and asked “how to establish a high bandwidth neural interface” with “access directly to cortex”, so that “we can create a high-bandwidth neural interface with your digital self”.

Unfortunately, Elon Musk does not realise that the bottleneck is the brain’s ability to integrate new information. Our sensors already have greater input capacity than our biology-based brains can process, so unless we replace the CPU (the brain), we will never be superhumans.

What do we really know about our human interface – Article in Medium

[Video: Elon Musk speaking at the Code Conference 2016]

Providers of high-capacity communication networks have made various claims about the data rates that Virtual Reality systems will require.

In Why the Internet Pipes Will Burst When Virtual Reality Takes Off, Bo Begole of Huawei Technologies suggests that “humans can process an equivalent of nearly 5.2 gigabits per second of sound and light”.

While this may seem high, Jim Crowe of Level 3 was quoted by Joshua Shapiro in Wired magazine (Issue 6.11, Nov 1998) as suggesting that we would require 16 Terabits per second to achieve the highest quality telepresence, based on a similar calculation. A decade earlier, Eric Nussbaum at Bell Labs had suggested a figure of 1 Tbit/s (1000 Gbit/s) for a perfect TV display[1].

It is true that we derive huge numbers if we take the simplest calculation of the information rate required to produce a conventional image display of such fidelity that it is indistinguishable from reality, as I describe elsewhere.
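To illustrate where such huge numbers come from, here is a back-of-envelope version of that calculation in Python. All of the parameter values (field of view, acuity, bit depth, frame rate) are my own illustrative assumptions, not figures taken from the sources quoted above; different but equally defensible choices push the result well into the Tbit/s range.

```python
# Back-of-envelope: raw data rate for a display that matches foveal
# acuity everywhere in the visual field. All parameters are assumptions
# chosen for illustration only.

ACUITY_PPD = 60        # pixels per degree (~1 arc-minute foveal acuity)
FIELD_H_DEG = 200      # horizontal field of view, degrees (both eyes)
FIELD_V_DEG = 135      # vertical field of view, degrees
BITS_PER_PIXEL = 30    # 10 bits per colour channel, 3 channels
FRAME_RATE_HZ = 120    # fast enough to appear continuous

def raw_display_rate_bps():
    """Bits per second for a uniformly full-resolution display."""
    pixels = (FIELD_H_DEG * ACUITY_PPD) * (FIELD_V_DEG * ACUITY_PPD)
    return pixels * BITS_PER_PIXEL * FRAME_RATE_HZ

print(f"{raw_display_rate_bps() / 1e9:.0f} Gbit/s")  # prints: 350 Gbit/s
```

Add stereo views, higher frame rates, or a wider dynamic range and the figure multiplies accordingly, which is how estimates of the kind quoted above reach terabits per second.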

However, the information rate of visual perception is vastly lower: estimates put it at a mere 6 Mbit/s[2]. This much lower rate is a consequence of several limitations on our eyes’ resolving abilities. Firstly, our eyes only provide high resolution over the central degree or so around the point of gaze. Secondly, all our resolution capabilities compete: we cannot, for example, resolve fine detail and fine grey-scales simultaneously.
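Setting the two figures quoted in this article side by side makes the gap concrete: Begole’s ~5.2 Gbit/s of sensory input against the ~6 Mbit/s of perceptual throughput from ref [2].

```python
# Comparing the two figures quoted in this article: sensory input
# (~5.2 Gbit/s of sound and light, per Begole) versus perceptual
# throughput (~6 Mbit/s, per ref [2]).

SENSOR_INPUT_BPS = 5.2e9
PERCEPTION_BPS = 6e6

ratio = SENSOR_INPUT_BPS / PERCEPTION_BPS
print(f"Input exceeds perception by roughly {ratio:.0f}x")  # ~867x
```

In other words, on these figures the senses deliver nearly three orders of magnitude more data than the brain perceives, which is the bottleneck argument made above.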

So if we know where the point of gaze falls within the scene, and can track it dynamically, we only need to transmit the information that the eye can actually perceive. Some years ago I proposed such an Eye Movement Synchronised Image Generation system, and performed experiments in which a small high-resolution window tracked my eye movements across an otherwise blurred display. When the display was full of text, the subject wearing the eye tracker could read without difficulty, while the majority of the screen was sufficiently blurred to make its text completely unreadable.
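The potential saving can be sketched with a simple calculation. The window size and resolutions below are illustrative assumptions of mine, not measurements from the original experiments: a high-resolution window of 5 degrees radius at full acuity, with a heavily blurred surround.

```python
import math

# Sketch of the bandwidth saving from an eye-movement-synchronised
# display. Window size and pixel densities are illustrative assumptions.

FIELD_DEG2 = 200 * 135    # total visual field, square degrees (assumed)
FOVEA_RADIUS_DEG = 5      # radius of the high-resolution window (assumed)
FULL_PPD = 60             # pixels per degree in the window (~foveal acuity)
PERIPHERY_PPD = 5         # pixels per degree in the blurred surround

def pixels(area_deg2, ppd):
    """Pixel count for a region of the given area at the given density."""
    return area_deg2 * ppd ** 2

fovea_area = math.pi * FOVEA_RADIUS_DEG ** 2
uniform = pixels(FIELD_DEG2, FULL_PPD)          # full resolution everywhere
foveated = (pixels(fovea_area, FULL_PPD)
            + pixels(FIELD_DEG2 - fovea_area, PERIPHERY_PPD))

print(f"reduction: ~{uniform / foveated:.0f}x")
```

On these assumptions the gaze-tracked display needs roughly two orders of magnitude fewer pixels per frame than a uniformly full-resolution one, which is the whole point of transmitting only what the eye can perceive.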

Virtual Reality presents a unique display requirement: the image is seen by only one individual, and so can be tailored to that individual’s moment-by-moment perceptual capabilities. All other displays are broadcast devices (capable of being viewed by many) and so must provide for every possible location of an individual’s point of gaze.

Whether such Eye Movement Synchronised Image Generation techniques become useful remains to be seen. Fortunately, very effective VR can be achieved with significantly lower resolution than is required for perfect reality. Equally, optical fibre offers vast information capacity at comparatively little cost.

Ref 1: “Integrated broadband networks & services of the future”, Eric Nussbaum, Paper MB2, Proc. OFC/IOOC 1987, Reno Nevada, Jan 19-23, 1987

Ref 2: Robert Ditchburn, in “The Oxford Companion to the Mind” (1987), edited by Richard L. Gregory. ISBN: 0198662246 (2004).