If AI ‘Wants’ to Destroy Us, It Can. But Why Would It?

Let me summarize first.

  • I love artificial intelligence.
  • I love watching it become more powerful and useful to us.
  • And I think it may essentially wipe out humanity.
  • But if so, there’s really not much we can do about it.

Bummer, eh?

Certainly nothing will come of the open letter released last week and signed by such tech leaders as Twitter CEO Elon Musk, Apple co-founder Steve Wozniak, and Pinterest co-founder Evan Sharp. It calls for a six-month moratorium on AI development, warning that AI could cause “loss of control of our civilization.” It was followed a day later by a Time magazine essay declaring that six months “isn’t enough”: With but a few exceptions, “shut it down” now and forever. The author is Eliezer Yudkowsky, “a decision theorist from the U.S. … [who] leads research at the Machine Intelligence Research Institute.” Wikipedia says he’s known for “coining the term friendly artificial intelligence.”

While I hardly pooh-pooh the concerns of the letter signers or Yudkowsky, I will say that their effort is utterly quixotic. Those windmills will keep turning, and no lance can even slow them.

What is GAI?

I fell in love with generative AI, or GAI, back in 2016, when Google Translate went from essentially a joke to a very handy tool by adopting a subset of AI called “deep learning,” specifically the Google Neural Machine Translation (GNMT) model. Before that, it was essentially a wiki, with users inputting fixes that often made it worse. Suddenly it knew all three of my foreign languages better than I did. Sure, it needed help; language is quite nuanced. Nevertheless, it certainly speeds up my reading and writing.

Most of the world was introduced to AI through photo-fixing tools — which are now routinely used on social media and on dating sites, able to turn monsters into maidens — though the actual term wasn’t used much. In recent months, the term has become so ubiquitous that it’s almost the equivalent of water being labeled “gluten-free.”

That stems from GAI, which just a few months ago exploded onto the scene courtesy of OpenAI. Now, as we all know, it can generate beautiful images, including photo-realistic ones (and, yes, nude ones, but look up the sites for yourself). It can generate essays, short stories, poems, term papers, articles, and essentially anything written. Not as well as the best of us, but better than most of us. And even matched against the best of us, it works far faster. As they say, quantity has a quality all its own. Oh, and you can have a virtual girlfriend for a few bucks a month. Now that’s “friendly artificial intelligence.” Which frankly seems pathetic.

On a brighter note, I have used it many times to write short stories for my Colombian friend, in which she is a penguin, and her children are raccoons. They are truly fun, although, yes, with my creative parameters. (She said one brought her to tears and, being Latina, sent me a photo of her in tears.) I use art generators to show the mother penguin interacting with her raccoon children. I can’t see ever buying children’s stories again unless it’s the classics, given the ability to tailor your own.

You really can’t be sure that this article isn’t GAI, because you can tell GAI to write an article in the style of Michael Fumento. It has 35 years of my articles to draw from and can do a very good impression of me. Efforts to build AI detectors have not been particularly successful.

Yes, there’s been a lot of hole poking, with people delighting in flubs. However, GAI’s skill has rapidly increased in just the last few weeks. Unless you are an expert in a field, it knows more than you do, and if you are an expert, it still probably knows stuff you don’t. Six years ago, it beat the world’s top Go champion. You probably don’t know the first move.

But underlying this joy in pointing out artificial stupidity is fear. Fear of how AI is already transforming society and, more than that, fear of the future of AI. Often wrongly portrayed as happening decades from now, “Judgment Day,” if there be one, is probably far closer than you think.

AI’s Great, Rapidly Expanding Capabilities

Consider that already AI is writing code, and already it’s designing hardware. At some point, it will be able to completely write its own code and completely design and fabricate its own hardware, which may be radically different from anything today. Say, DNA. As soon as it can do that, its abilities won’t just increase exponentially — as they are now — but rather explode. Intellectually, we will be to it as ants are to humans. Whatever the timeline, it will happen.

Writes Yudkowsky:

Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in “maybe possibly some remote chance,” but as in “that is the obvious thing that would happen.” It’s not that you can’t, in principle, survive creating something much smarter than you; it’s that it would require precision and preparation and new scientific insights, and probably not having AI systems composed of giant inscrutable arrays of fractional numbers.

Actually, I would expect pockets of humans to survive, albeit perhaps living in a primitive state. Like our little friends the cockroaches, we can be resilient.

Yudkowsky says, “There’s no proposed plan for how we could do any such thing and survive.” True.

He also says, “None of this danger depends on whether or not AIs are or can be conscious.” Right again. We can’t really even define consciousness. Or sentience. “General AI,” perhaps definable as extreme multitasking akin to what animals can do, is actually what we are striving for. It’s close enough that OpenAI co-founder Sam Altman felt it necessary, before the release of the current iteration, GPT-4, to say that it would not be general AI and that those expecting such would be “disappointed.” (It’s too soon for an Altman reaction to the moratorium letter, but his oft-repeated fears regard a possible dystopian future as opposed to essentially no future. His psychological — not so much financial — investment in GAI will not permit him to call for a shutdown. He will not be Star Trek’s Dr. Richard Daystrom, who designed a computer to save Starfleet lives by replacing crew, only to discover that it was attacking other ships and killing whole crews.)

We just need an AI that is sufficiently smarter than we are in a sufficient number of ways and “decides” to attack us.

Further, Yudkowsky writes that “[p]rogress in AI capabilities is running vastly, vastly ahead of progress in AI alignment or even progress in understanding what the hell is going on inside those systems.” True again.

“AI will soon become impossible for humans to comprehend” is the telling title of a detailed article published in the Conversation last week. In brief, the article explains that “neural networks” are so called because their form and structure are “inspired by the human brain, mimicking the way that biological neurons signal to one another.”

“[U]nlike the logic circuits employed in a traditional software program,” it continues, “there is no way of tracking [the] process to identify exactly why a computer comes up with a particular answer,” and “[t]he multiple layering” (whence the name “deep learning”) “is a good part of the reason for this.” It is likely that the more important AI becomes to us, the less we will understand it.
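To make the layering point concrete, here is a minimal sketch of such a network in Python with NumPy. Everything in it is illustrative: the layer sizes and the random, untrained weights stand in for no real model. Even this toy shows why there is no single place to look for the “why” of an answer; it emerges from repeated blending of hundreds of fractional numbers.

    import numpy as np

    rng = np.random.default_rng(0)

    # A toy "deep" network: 8 inputs -> 16 hidden -> 16 hidden -> 1 output.
    # Real networks learn these weights from data; random ones suffice to
    # show the structure the article describes.
    W1, b1 = rng.normal(size=(8, 16)), np.zeros(16)
    W2, b2 = rng.normal(size=(16, 16)), np.zeros(16)
    W3, b3 = rng.normal(size=(16, 1)), np.zeros(1)

    def relu(x):
        return np.maximum(0.0, x)  # the nonlinearity between layers

    def forward(x):
        # Each layer blends every value from the layer before it, so the
        # final answer cannot be traced back to any one weight: Yudkowsky's
        # "giant inscrutable arrays of fractional numbers."
        h1 = relu(x @ W1 + b1)
        h2 = relu(h1 @ W2 + b2)
        return h2 @ W3 + b3

    x = rng.normal(size=8)                # an arbitrary input
    print(forward(x))                     # one number; nothing says *why*
    print(W1.size + W2.size + W3.size)    # 400 weights even in this toy

Scale those 400 weights up to the hundreds of billions in a modern model, and the Conversation article’s point becomes clear: the layering itself, not any secrecy, is what defeats inspection.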

Potential Future Scenarios

This has been bleak so far. But now the sky gets a bit brighter as we ask: Given that it will have the ability, why would AI want to kill us off?

One idea often proffered is that it will see us as competition for resources. Nonsense.

AI just needs power. It will be able to convert matter to energy. It doesn’t need our power. In a nice illustration of the wokeness that infests current AI chat generators, one told me that the threat of global warming might set it off. (Incidentally, I could not get it to make an argument that there is no man-made warming — something any decent lawyer could do because we’re trained to argue for whatever side pays us.) In order to preserve the planet, goes the argument, it would have to eliminate humanity. But even the most die-hard of global warming enthusiasts don’t claim that the planet will be destroyed, and they actually use the preservation of humanity as a reason to stop alleged anthropogenic warming.

Still, sci-fi has always taught us a lot about the future.

In the Seth MacFarlane series The Orville, “Kaylon” humanoid robots revolt and kill their…


