AI Will Not Destroy Humanity

Despite what some “experts” are saying, artificial intelligence (AI) will not destroy humanity — at least no more than any other powerful technology capable of creating havoc when used by people with malicious intent.

Airplanes, for instance, have great benefits but changed the course of modern warfare. Rockets have great benefits but can carry warheads. Nuclear energy has great benefits but could destroy the world if misused. The laser is one of the greatest inventions in the history of mankind and is used in a multitude of beneficial applications, including surgical instruments, disk drives, computer chip manufacturing, cool light shows, and even toenail-fungus removal — and, of course, powerful weapons.

Every new technology has had its detractors who warn about the apocalypse, but the warnings about AI seem more sudden and widespread. Perhaps it’s the name “artificial intelligence” that makes people think we’ve created thinking machines. Maybe it’s all the sci-fi books and movies that warn us about machines taking over without the oversight of an off switch.

The reality is that, today, AI is an incredible tool that will help a lot of people and hurt a lot of people. These programs, however, are very far from having the ability to think — and even if they did, why would they want to destroy the planet?

Primitive Versions of AI Mimicked Humans

AI has changed drastically since its origins in the 1950s. The first programs mimicked human interactions. Mathematician Alan Turing described a simple test: A person communicates with another person and a machine through a messaging system; if the person can’t distinguish the computer from the human, then artificial intelligence has been achieved. Turing’s test was designed at a time when computers performed mathematical calculations but did little else, and it was arguably passed by computers as long ago as the 1960s. Today, very few people, and no computer scientists, believe the test is definitive.

One of the first popular AI programs, “ELIZA,” was designed in 1966 to act like a psychologist. Developed by Joseph Weizenbaum, a computer scientist at MIT, the program poked fun at both the entire field of AI and the concept of Rogerian psychology, where the therapist simply repeats the patient’s words back to the patient in the form of a question. Most didn’t recognize Weizenbaum’s humorous intent, and “ELIZA” was considered by many computer scientists to be a breakthrough. (READ MORE: Will California Outsmart AI?)

In 1972, Kenneth Colby, a computer scientist at Stanford, created “PARRY,” a computer simulation of a paranoid personality. A person with paranoia was easier to simulate — when the AI didn’t understand the conversation, it could always revert to its paranoid delusions: “Wait, did you hear something?”

My favorite AI spectacle was in 1973, when the famed computer scientist Vint Cerf connected ELIZA to PARRY and held a weird and rather funny AI therapy session.
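
To make the mechanics concrete, here is a minimal, purely illustrative sketch of both tricks (ELIZA’s pronoun-swapping echo and PARRY’s paranoid fallback), wired together the way Cerf did. The pronoun table, patterns, and stock phrases below are invented for this example; they are not the historical scripts.

```python
import random

# Pronoun swaps for the Rogerian echo: "I hate my job" becomes
# "Why do you say you hate your job?" This tiny table is invented for
# illustration; the real ELIZA script was far more elaborate.
SWAPS = {"i": "you", "me": "you", "my": "your", "am": "are",
         "you": "I", "your": "my"}

def eliza(statement: str) -> str:
    """Reflect the speaker's words back as a question, ELIZA-style."""
    words = [SWAPS.get(w, w) for w in statement.lower().rstrip(".!?").split()]
    return "Why do you say " + " ".join(words) + "?"

# PARRY's escape hatch: when nothing matches, retreat to the delusion.
FALLBACKS = ["Wait, did you hear something?",
             "I think someone is following me.",
             "Why do you want to know that?"]

def parry(statement: str) -> str:
    """Answer what it recognizes; otherwise fall back on a paranoid stock line."""
    if "horse" in statement.lower():  # a nod to PARRY's racetrack obsession
        return "The bookies cheated me at the track."
    return random.choice(FALLBACKS)

# Connect the two, as Cerf did, and let them talk past each other.
line = "I am worried about my health."
for _ in range(3):
    line = parry(eliza(line))
    print(line)
```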

After years of trying to simulate human thought with very little progress, computer scientists changed their focus to expert systems.

These were programs that encoded a human expert’s decisions as a set of “if-then-else” statements. If scientists wanted to create an AI chef, a human chef would be asked a huge number of questions, and the answers would be converted into a computer program: “If you’re going to bake a cake, would you use flour? Would you use eggs? Would you use milk? If not, would you use water? Which steps would you perform in which order? At what temperature would you bake, and for how long?”
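
To see how shallow such a system is, consider a minimal sketch in this style. The chef, the rules, and the quantities below are all hypothetical, reduced to a single decision:

```python
# A hypothetical "AI chef" captured as hand-written rules, the way early
# expert systems encoded a human expert's answers.
def bake_cake(has_milk: bool) -> list[str]:
    ingredients = ["flour", "eggs", "sugar"]      # "Would you use flour? Eggs?"
    if has_milk:                                  # "Would you use milk?"
        ingredients.append("milk")
    else:                                         # "If not, would you use water?"
        ingredients.append("water")
    return ["preheat oven to 350 F",              # "At what temperature?"
            "mix " + ", ".join(ingredients),
            "bake for 30 minutes"]                # "For how long?"

print(bake_cake(has_milk=False))
# Note what is missing: if a new artificial sweetener appears, no rule
# covers it. The program cannot adapt until a human rewrites it.
```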

The systems weren’t very threatening to anyone. Nor were they very useful, since they could only answer the specific questions they were programmed to answer, and, as new information appeared (if, for example, a new artificial sweetener was developed), the expert system couldn’t adapt without being reprogrammed.

But that changed when computer scientists developed machine learning. This new basis for AI relies on very powerful computers (previously unavailable) that can search giant databases of world knowledge (also previously unavailable) to find patterns. It’s a very powerful tool. But it cannot think. It cannot create. It will not take over the world or destroy it.
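
A toy example makes the difference visible. The sketch below “learns” which word tends to follow which by counting a made-up corpus: statistics in place of hand-written rules, with no comprehension anywhere.

```python
from collections import Counter, defaultdict

# A toy corpus standing in for the "giant databases of world knowledge".
corpus = "the cake needs flour the cake needs eggs the oven needs heat".split()

# Count which word follows which: pure pattern statistics, no rules.
follows: dict[str, Counter] = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word: str) -> str:
    """Return the successor seen most often in the data."""
    return follows[word].most_common(1)[0][0]

print(predict("cake"))   # "needs": learned from the data, not programmed
print(predict("the"))    # "cake": the majority pattern wins, right or wrong
```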

But it will amplify an existing problem.

The Problem of Misinformation and AI

Modern tech tools have put the world’s knowledge at our fingertips. Nevertheless, we still get so much wrong. Most people don’t think critically — instead, they accept the “facts” that conform to their worldview, or they seek out the “facts” from “experts” who share their worldview. It’s much easier and more comforting to accept these “facts” rather than challenge them by digging deeper, even when such digging only requires a search engine or flipping the channel on the TV.

Amplifying the problem further, powerful Big Tech companies, often at the insistence of our government, censor “misinformation” — which only amplifies rumors, myths, and pure propaganda. Minority opinions, some of which are absolutely correct, get minimized, if not erased altogether.

AI answers questions based on majority opinions, which we know can be and have often been wrong, as demonstrated throughout history: The world is not flat, witches don’t float, Leonardo da Vinci didn’t invent the helicopter, and COVID didn’t originate in a food market. (RELATED: ChatGPT, Helplessness, and the Future of the Human Race)

Furthermore, AI has no understanding; it simply recombines and reinforces “common knowledge,” even when that knowledge is wrong. I recently asked Microsoft Bing’s AI function to create an image of Charlie Brown playing poker for my blog. The program searched for images of Charlie Brown and poker and combined them. The results were hideous: Charlie Brown with three eyes, or with one eye, no nose, and two mouths. The program understood Charlie Brown only as an image, not as an image that represents a person.

Similarly, I asked Google AI for a summary of my latest novel, Animal Lab. What it returned was a long description of a fascinating story — only it wasn’t my story. And in a recent personal injury lawsuit, a lawyer had ChatGPT write a declaration to be submitted to the court. Unfortunately, ChatGPT cited cases that don’t exist, and the judge is now threatening to sanction the lawyer.

The worst effect of AI is that it sets off a misinformation avalanche. These programs recombine and reinforce misinformation from the internet, creating more copies of that misinformation, which then go out into the world and back onto the internet. Each subsequent search for knowledge, whether by an AI or a human, finds more references to the misinformation, making it appear even more correct and thus more likely to be used in a subsequent answer.
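
A back-of-the-envelope simulation shows the shape of that loop. Every number in it (the corpus size, the bias factor, the publication rate) is invented, chosen only to make the feedback visible:

```python
# Toy feedback loop: a corpus starts 5 percent wrong. Each round, 1,000 new
# machine-written documents echo claims in proportion to their prevalence,
# with a slight assumed bias toward the catchier false version.
true_docs, false_docs = 9_500, 500
for generation in range(1, 6):
    share_false = false_docs / (true_docs + false_docs)
    new_false = int(1_000 * min(share_false * 1.2, 1.0))  # assumed 1.2x bias
    false_docs += new_false
    true_docs += 1_000 - new_false
    total = true_docs + false_docs
    print(f"generation {generation}: {false_docs / total:.2%} of the corpus is wrong")
```

The falsehood’s share of the corpus rises every generation, and each generation’s output becomes the next generation’s evidence.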

Learn to Question Assumptions

The solution is not to put a pause on AI, which some prominent figures have suggested — that would be ridiculous. Stopping the development of any technology only means that those with evil intentions get a big head start. The solution is to better educate people on how to separate facts from opinions from outright falsehoods. People must learn to ask critical questions about what AI produces and, more importantly, what their expert human sources produce. (READ MORE: If AI ‘Wants’ to Destroy Us, It Can. But Why Would It?)

Furthermore, we must eliminate censorship of all kinds. All claims must be available — whether false, misunderstood, or true but labeled as false — so that they can be compared for veracity and debated vigorously. It is important for people to read articles and explore opinions that differ greatly from their own beliefs. It’s important for people to challenge their own beliefs and find the holes in their own explanations.

If you can’t find problems with your own beliefs, then you’re not looking hard enough. Every belief system rests on assumptions that could be wrong. Suppose that your assumptions are wrong and see where that leads you, even if you end up in a place you don’t want to be. You may change your mind, but, if not, you’ll at least understand those who disagree with you.

Will people take up critical thinking? Will the government stop its censorship of “misinformation”? Will Big Tech stop imposing a specific ideology on its users? These are the issues that could destroy civilization — not AI.

Bob Zeidman is the creator of the field of software forensics and the founder of several successful high-tech Silicon Valley firms, including Zeidman Consulting and Software Analysis and Forensic Engineering. His latest venture is Good Beat Poker, a new way to play and watch poker online. He is the author of textbooks on engineering and intellectual property, as well as award-winning…


