As I describe in the final chapter of Bottleneck, it appears that the speed of our response to novel information is limited by the size of our world view. Whatever we sense must be integrated with our idea of the world out there, and the bigger, more complex and more detailed that idea, the slower we learn and the longer the delay in our response. (The smart chimps at Kyoto University are evidence for this; they outperform humans in rapid learning tests, suggesting that they have a wider learning “bottleneck” and faster responses to unique situations.)
This narrower learning bottleneck is a disadvantage only when we encounter something that we cannot in any way predict. A simple dragonfly, with its minimal world view, can catch a fly “on the fly”. Though we may be unable to catch a fly in our hand, we can design and build a fly-trap or invent an insecticide. Most things that we experience are similar to things we have experienced before, so we generally benefit from our more extensive world view: we have a greater ability to predict a suitable response from limited sensed information.
What has this got to do with Artificial Intelligence? Our world view incorporates our ethics, our morals and our ideas of how we should treat our fellow man. Similarly, a robot or AI can only have a human-like ethical and civilised sense if its world view is of similar complexity and detail. But there is a fundamental difference between us and them.
A child’s ethical sense grows from infancy. When very small it is utterly selfish, with no idea of others as people with similar needs and feelings. An adult with this world view would be considered psychopathic, but since a baby is almost powerless without independent access to guns and cars, it represents no threat to society. Our human capability for good or harm grows together with the scale of our world view. We are safe from the hedonistic behaviours of our infants (provided we don’t leave our loaded hand-gun within their reach).
In contrast, our robots begin their lives with their physical capabilities fully formed and fully available from the moment they are switched on, and the extent of their world view has been predetermined by us. Generally we provide an artificially intelligent robot with only enough world view to perform the tasks for which we design it. My automated washing machine presents no threat because it has very limited physical capabilities, and while my robot vacuum cleaner may be quite adventurous around my home, there is negligible risk that it will go rogue and clean me out of food or money.
Today we have taken huge strides in the development of robotic hardware, and in the software that enables robots to function physically: to balance, to walk, to fly (consider what cheap drones can now achieve). But the software that might match this capability with anything like consciousness is way behind. Our autonomous military drones know nothing of world politics, the behaviour of different ethnic groups, or what physical or emotional pain feels like in a human body.
So we must tread carefully as we enter this era. A mismatch between power and scale of world view also occurs in human groups, whether they be the mafia, bankers, traders or armed religious fundamentalists, and we all know how much trouble they have got us into.