NEWARK WEATHER

Commentary: Artificial Intelligence and the Passion of Mortality


by Edward Ring

 

If we knew our existence would span millennia, would we be able to cherish each day or try as hard as we do now to leave something behind? Would voices from history still offer urgent advice, telling us we are part of something bigger or to make the most of our short lives so they matter? Would we still reach out to God for inspiration and guidance? If we didn’t have to die would we truly be alive?

When Homer composed the Iliad, it would have been ridiculous to think that someday mortal human beings would invent machines that might wield the power of the gods. But that’s where we’re headed. As economists struggle to imagine economic models that preserve vitality and growth in societies with crashing birth rates, and as individual competence is no longer required by institutions desperate to fill vacancies, artificial intelligence (A.I.) promises to fill the quantitative and qualitative human void.

When A.I. technology ascends to the point where most people would argue it has acquired superhuman powers, it will still lack what humans and gods share—a soul. Machine intelligence may soon animate avatars that, by all appearances, seem alive, but they will not be genuine beings. They will not have emotions, not even the ennui of the Greek gods, aware of and ambivalent about their fate to live forever. They will not only lack the motivational benefits of mortality, they will lack motivation itself. In the ultimate expression of the neon simulacrum into which globalism is transforming authentic culture, artificial intelligence overlords will display every detail of humanity. But nobody will be at home inside.

Gods Without Souls

This may be the future of civilization. Immortal, soulless machines, exercising enervating sway over a humanity turned into livestock. Not only do we have no idea how to stop this, but a growing cadre of misguided ethicists and technocrats is confidently predicting that these machines will be self-aware, conscious beings. Accepting that premise will make the challenge of containing A.I.’s eventual reach far more difficult.

So far, at least, nobody thinks today’s A.I. avatars are “alive.” A recent article in the New York Times, “A Conversation With Bing’s Chatbot Left Me Deeply Unsettled,” reports how the A.I. program would abruptly segue between answers to specific questions (e.g., “What kind of rake should I buy?”) that offered an unnerving level of detail, and creepy, weird, quasi-romantic overtures to the questioner.

The New York Times article is one among many reports on the Bing chatbot. They all reflect the same impressions. Forbes says it “fumbles answers.” Digital Trends found “it was confused more than anything.” According to Wired, “it served up glitches.” And then this, from Review Geek: “I Made Bing’s Chat AI Break Every Rule and Go Insane.” Or this, from IFL Science: “Bing’s New Chat AI Appears To Claim It Is Sentient.”

Microsoft’s Bing A.I. chatbot still needs a lot of work. But we should pay attention to how easily chatbots can be tripped up (while it is still obvious) because it reveals something fundamental: They don’t think, they calculate; they don’t feel, they mimic feelings; they aren’t conscious, they simulate consciousness. When they are fully realized, they won’t be awkward, or creepy, or weird. But they’ll still just be calculators.

Nonetheless, interactive A.I. programs are within a few years of becoming the most potent tool to manipulate humans ever invented. As they perfect their ability to simulate empathy and intimacy, their capacity to personalize those skills will be enhanced by access to online databases that track individual behavior. These databases—assembled and sold by everything from cell phone apps, credit cards, online and offline banking services, corporations, browsers, and websites, to Alexa, Siri, and Google Assistant, to traffic cameras, private surveillance cameras, court records, academic records, civil records, medical records, criminal records, and spyware—already contain comprehensive information about every American.

Intelligent machines, and the avatars they will animate in applications ranging from tabletop personal assistants like Alexa all the way to virtual creatures inhabiting fully immersive worlds in the Metaverse, will never think. But they will convince you that they think because they will know you better than you know yourself.

Consider this achievement as the ultimate tool for controlling public opinion. Sophisticated A.I. algorithms are already used to manipulate consumer behavior and public sentiment at the level of each individual. Now put all that power into an A.I. personality that is designed to make you fall in love with it.

The Idiots and Geniuses Who Want To Give Robots Human Rights

This is the context in which to consider the goofy baby steps of Bing’s chatbot: a very near future where people will clamor for robots to have human rights. The arguments surfacing in favor of this may still seem ridiculous, because they are, but they will seem less ridiculous when these chatbots grow up and capture our hearts.

It’s actually scary that this is even a debate. The venerable Discover magazine, in a 2017 article, “Do Robots Deserve Human Rights?,” surveyed the pros and cons, ultimately concluding no, unless “AI advances to the point where robots think independently and for themselves.” That’s poor reasoning. It suggests that once we can no longer distinguish how a robot acts from how a human acts—that is, once Microsoft gets the bugs out of its chatbot—it will deserve human rights.

Earlier this month, writing for Newsweek, author Zoltan Istvan described A.I. ethicists as belonging to three groups. One group argues that robots are only programmed, simulated entities; another believes “that by not giving full rights to future robots as generally intelligent as humans, humanity is committing another civil rights error it will regret.” In the muddled middle are ethicists who believe advanced robots should be awarded rights “depending on their capability, moral systems, contributions to society, whether they can experience suffering or joy.”

That’s rich. Do we assign human rights to people based on their “moral systems” or their “contributions to society”? Or do we adhere to a binary choice—they are human, and hence they deserve human rights? If the complexity of the intelligent machine is the criterion, where do we draw that line? Or do we return to a more fundamental question: Can a machine have a soul?

What Is a Soul?

This clarifies the ultimate question surrounding artificial intelligence: how to define self-aware consciousness. Debate on this goes to matters of faith. For example, one might consider a highly trained adult German Shepherd, or, for that matter, a wild and opportunistic raccoon in the prime of its life, to be more self-aware than a human egg that was fertilized by a spermatozoon only a moment ago. But that newly created embryo has a soul, and if human embryos have souls, then human embryos deserve human rights. But can a machine have a soul? Can a machine even be self-aware? And how on earth can you prove it?

A 2019 article in Scientific American by Christof Koch offers this clue:

There is little doubt that our intelligence and our experiences are ineluctable consequences of the natural causal powers of our brain, rather than any supernatural ones. That premise has served science extremely well over the past few centuries as people explored the world. The three-pound, tofulike human brain is by far the most complex chunk of organized active matter in the known universe. But it has to obey the same physical laws as dogs, trees and stars.

Using Koch’s criteria, the bar to achieving “self-awareness” is lowered significantly. All that is required are material processes. If the engineering is good enough, the machine is alive. He writes, “Conscious states arise from the way the workspace algorithm processes the relevant sensory inputs, motor outputs, and internal variables related to memory, motivation and expectation. Global processing is what consciousness is about.” Koch goes on to state that “any mechanism with intrinsic power, whose state is laden with its past and pregnant with its future, is conscious. The greater the system’s integrated information, the more conscious the system is.”

This is dangerous, because it makes the decision to grant human rights to machines a function of Murphy’s Law. Subtleties aside, it may also be complete nonsense. Isn’t it already true that the average laptop’s terabyte drive is “laden with its past,” and its calendar app “pregnant with its future”? Won’t a debugged chatbot be a system with “integrated information”? Is it just a matter of…


