Denyse O’Leary thought that these assumptions, particularly the final two, needed further clarification: above all, a description of the alternative assumptions that apply to human minds, which are neither mechanical nor deterministic. Readers who wish to pursue further complexities in the physics itself can explore quantum entanglement (as in my daughter's Age of Entanglement [Knopf]), whereby the locality assumption is denied within physics itself.

In order to assure correspondence between logical systems and real-world causes and effects, engineers have to interpret the symbols rigorously and control them punctiliously and continuously. You need real minds involved. Computers “learning” from big data do not suffice.

As I discuss in Gaming AI, the autonomous automobile is a good test case of the limits of AI. Honda has just announced that it has achieved Level 3 autonomy and will be launching self-driving cars early next year. But there is a catch.

The Fine Print

Honda, for example, wants the drivers to remain alert for emergencies. Having the drivers on constant alert nullifies most of the benefit of self-driving cars. Do you really want to sit with your hands hovering over the steering wheel and your foot over the brake?

My view is that self-driving is a hardware problem. It requires sensors that can outperform humans in seeing objects far ahead on the road and off it, by using frequency bands beyond the small span of visible light. The car’s “eyes” must be able to compensate for the rearview foibles of the AI map. The car cannot assume that the cumulative database from the past, its deterministic rearview-mirror world, will hold in the future.
Your self-driving car must navigate a world that everywhere diverges from its maps, that undergoes combinatorial explosions of novelty, black swans fluttering up and butterfly effects flapping, that incurs narrowly local weather events, that presents a phantasmagoria of tumbling tumbleweeds, plastic bags inflated by wind, inebriated human drivers, pot-headed pedestrians, and other high-entropy surprises.

Self-driving vehicles assume the congruence of digital maps with digital territories. But to achieve real congruence, you have either to change the cars or to change the territories. Most existing self-driving projects rely on changing the territories. The Chinese, who lead the field, are building entire new urban architectures to accommodate the cars, which in turn become new virtual railroads. This goal differs radically from the idea of the singularitarians, who foresee vehicles independent of human guidance or control.

The map is a low-entropy carrier. The world is a flurry of high-entropy, noisy messages, with its relevant information gauged by its degree of surprisal. To deal with the real world, self-driving cars need to throw away the AI assumptions and learn to see.

Today’s Prophecy: Computers cannot truly see.

To define the “connectome” of a single human brain (all its dendrites, synapses, and other links) entails more than a zettabyte of information. A zettabyte (10 to the 21st power bytes) comprises close to all the memory attached to the entire global internet. Meanwhile, the energy use of the human zetta-brain differs so radically from the energy consumption of a computer or datacenter as to signify completely different principles of operation. While the internet and its silicon devices consume gigawatts, a human brain is made of carbon and works on 12 to 14 watts. That means a human brain is on the order of a billion times more energy efficient than a computer brain.
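Two of the numbers above can be made concrete in a few lines of Python. This is a minimal illustrative sketch: the event probabilities and the gigawatt datacenter figure are assumptions chosen for illustration, not measurements.

```python
import math

def surprisal_bits(p):
    """Shannon surprisal: information (in bits) carried by an event of probability p."""
    return -math.log2(p)

# Illustrative probabilities (assumed, not measured): routine events carry
# little information; rare ones, the black swans, carry a lot.
print(surprisal_bits(0.5))     # 1.0 bit for a coin-flip-common event
print(surprisal_bits(1/1024))  # 10.0 bits for a one-in-1024 surprise

# Raw power gap: a gigawatt-scale silicon infrastructure versus a ~14-watt brain.
datacenter_watts = 1e9  # assumed round figure for "gigawatts"
brain_watts = 14
print(datacenter_watts / brain_watts)  # a gap of roughly 10**7 in raw watts alone
```

The raw wattage ratio understates the point: the "billion times" figure also reflects how much useful cognitive work each watt accomplishes, not just power draw.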
Real artificial intelligence will require going beyond mere silicon and binary logic into a new carbon substrate for intelligent machines.

Regards,

George Gilder
Editor, Gilder's Daily Prophecy