AI geniuses think machines will be able to outperform human minds in every application and that human beings will become unnecessary.

In this week’s podcast, Walter Bradley Center director Robert J. Marks interviews futurist George Gilder on “How AI is gaming intelligence.” Their discussion stems from Gilder’s new book, Gaming AI: Why AI Can’t Think But Can Transform Jobs (free for download here).

From the transcript: (Show Notes, Resources, and a link to the complete transcript follow.)

Robert J. Marks (pictured): In general, do you see AI as a new demotion of the human race? This is pretty serious prose.

George Gilder: Well, it declares that the human mind is just a machine that can be simulated by computer algorithms … thus demoting the human endeavor from being the center of everything, to becoming a mere planet of a larger body.

And ever since then, science has been further demoting the earth to a fringe planet, and to one of multiple parallel universes that are often assumed without any grounding or persuasive evidence. So machine learning and AI, artificial intelligence, are believed to be headed for the Singularity, as my friend Ray Kurzweil calls it, in which machines will be able to outperform human minds in every contingency and application and human beings really will become unnecessary.

Note: The evidence from science is that the universe is fine-tuned for life, Earth is in an especially life-friendly position, and there is no evidence for any universe other than our own.

Gilder and Marks went on to discuss how AI's triumphs at games like Go created a belief that the problem-solving programs were really "thinking":

George Gilder (pictured): And now to this day, people who say AI is going to usurp human brains… continue to cite the victory in Go, and the victory in chess. But games are identified by the fact that the symbol systems and the actual objects, the maps and the territories, are the same thing…

In Go, you have these two little stones and you move them across a board with hundreds of points on it. And the symbols and the objects are the same. So that if you can program the computer to conduct these Go games at billions of cycles a second, they can obviously outperform any human being. But that’s because there’s no difference between the symbol and the object.

But in the rest of the world where we live, we have symbols, we have mathematical languages, we have computer codes. We have a vast array of symbol systems, which allow us to interpret reality, but the symbols are never the same as the reality. They're labels. They have to be applied by human minds to reality.

Note: A candidate for an elected office, conducting a campaign, will surely understand the problem Gilder outlines. There are 1) the electoral district, the polling stats, and plenty of free fundraising advice from the Party. That's the map.

And then there are 2) the thousands of individual human inhabitants of the district who are eligible to vote, who can be reached most effectively by knocking on their doors (the territory).

The map and the territory can diverge in unexpected ways, which helps to account for surprising Big Data polling failures when predicting election results.

Here are a couple of recent Mind Matters News stories that highlight Gilder’s key themes:

Why AI geniuses think they can create true thinking machines. Early on, it seemed like a string of unbroken successes … In Gaming AI, George Gilder recounts the dizzying achievements that stoked the ambition—and the hidden fatal flaw.

and

Why AI geniuses haven’t created true thinking machines. The problems have been hinting at themselves all along. Quantum computers play by the same rules as digital ones: Meaningful information still requires an interpreter (observer) to relate the map to the territory.

Originally published by Mind Matters News.
