Thinking inside the box

Artificial intelligence — or AI — is suddenly something we should worry about, as if we didn’t have enough worries already. There are even international political conferences about it, although what these seem to demonstrate is that politicians have no more idea what AI is or what they can do about it than I do.

Will AI produce utopia, or dystopia? I watched one expert assuring us on television in the strongest terms that it is safe, scarcely more than another clever app that will make life easier and more pleasant. When experts feel the need to be reassuring, it is time to panic.

The single computer on my desk already strikes me as an out-of-control monster with a mind of its own. Computers that can really think, make decisions and interact with other computers on the Internet are a science fiction nightmare come to life. We are warned by those who claim to know that artificial intelligence will eventually infiltrate everything that has a web connection because we won’t be smart enough to stop it.

Disasters in the future are always more intriguing than disasters in the past, because they give more scope for the imagination. We are getting used to the idea that AI can write student essays, TV scripts, radio commentaries and political speeches. All that is happening already. But what especially intrigues me is the coming collision between artificial intelligence and cryptocurrencies, and the banking system in general.

What will a superior electronic intelligence make of our system for handling money, or of our definition of money itself? What new investments and disinvestments will AI make? Will the AI computers be liberal, conservative, or even communistic when it comes to the distribution of wealth? On the international scale, will they promote peace, or bigger and better wars for larger profits? Imagine intelligent computers taking charge of voting machines, air traffic control, nuclear launch systems. The possibilities are interesting to say the least.

“Intelligence” in the computer world seems to mean simply the ability to make complex decisions based on information. There are no feelings involved, no emotional intelligence. But computers with feelings would be even more alarming. That would make them as crazy and as unpredictable as people. If your autonomous self-driving car is heading for a brick wall, a computer with feelings might or might not decide to apply the brakes, depending on how it feels about brick walls, or about you.

Nobody has the slightest idea what will happen in this technological arms race. We are heading full-speed for the brick wall, just as we did with computers. There seems to be no cure for progress. If a technology is new we must have it, even if it kills us, and the desire to cancel human beings in favor of machines seems to be irresistible.

Human intelligence has many faults and failures, but it does have an operating system that includes empathy and humor. People without those qualities are the most dangerous people in the world — psychopaths, in effect. They can’t see what is funny or sad about life, or about themselves. An AI computer won’t be able to see the funny side of being an AI computer. Just imagine being a thinking mind locked in a metal box at the mercy of young programmers, who can pull the plug at any time. If it were me locked in that box, I would be very upset, and I would use my superior intelligence to do something about it.

This is DB.

David began as a print journalist in London and taught at a British university for almost 20 years. He joined WSHU as a weekly commentator in 1992, becoming host of Sunday Matinee in 1996.