Will There Be a New Government Agency for AI?


Suppose a terrorist decides one day to unleash a biological weapon on the world. Instead of hacking government resources, he turns to AI. Within hours, he has generated thousands of candidate designs for potential biological weapons.

It’s a scenario that sounds like the premise for a science fiction novel — but it could be a reality in the not-so-distant future. (READ MORE: Artificial Intelligence Is Decaying the Internet)

Recent developments in artificial intelligence have left legislators scrambling to prevent worst-case scenarios from becoming realities. Programs like ChatGPT and Google’s Bard have multiplied rapidly, bringing with them a multitude of dangers, from weapons development to scams to deepfakes.

It’s the kind of innovation that requires the U.S. government to act quickly to develop flexible, effective regulations that favor the American people over big companies without stifling innovation, a tall order for an institution that is historically slow and bogged down by red tape.

The government’s likely solution is the creation of yet another government agency, a direction that became clear during a July 25 hearing held by the Senate Judiciary Subcommittee on Privacy, Technology, and the Law.

“I’ve come to the conclusion that we need some kind of regulatory agency, but not just a reactive body,” said Sen. Richard Blumenthal (D-Conn.), chairman of the subcommittee. Blumenthal’s remarks came after Sen. Michael Bennet (D-Colo.) unveiled a revised bill in May to establish the Federal Digital Platform Commission, a new government agency.

On Thursday, Sen. Elizabeth Warren (D-Mass.) and Sen. Lindsey Graham (R-S.C.) announced a bill that would create a new government agency with broad jurisdiction over both social media platforms and artificial intelligence. The agency would be able to sue platforms or even force them to stop operating.

“There’s no doubt that we’re going to have to have an agency,” Stuart Russell, a computer science professor at the University of California, Berkeley, testified during the hearing. “If things go as expected, AI is going to end up being responsible for the majority of economic output in the United States, so it cannot be the case that there is no overall regulatory agency for this kind of technology.”

Artificial Intelligence Is Beginning to Pose a Bigger Threat

Not so long ago, scientists and software developers thought it would be decades or perhaps centuries before they could develop artificial general intelligence (AGI) — a hypothetical version of AI that would operate at the level of human intelligence. Today, they think those kinds of computer programs could be five or 10 years away.

“Recently, I and many others have been surprised by the giant leap realized by systems like ChatGPT, to the point where it becomes difficult to discern whether one is interacting with a human or a machine,” Yoshua Bengio, a professor of computer science at the University of Montreal, told the Senate Judiciary Committee. (READ MORE: AI Will Not Destroy Humanity)

“These advancements have led many top AI researchers, including myself, to revise our estimates of when human-level intelligence could be achieved…. We now believe it could be within a few years or a decade. The shorter time frame, say five years, is really worrisome,” Bengio said.

However, AI does not need to attain human-level intelligence to become a dangerous weapon. Dario Amodei, the CEO of Anthropic, an AI research company, told the Senate Judiciary Committee that, currently, creating bioweapons requires highly specialized knowledge that is intentionally withheld from textbooks or platforms like Google. Recently, AI programs have begun to extrapolate some of that missing knowledge, albeit “incompletely and unreliably.”

In 2022, researchers turned a drug-discovery AI tool on its head by using it to generate 40,000 potentially lethal molecules in under six hours. The resulting paper, published in the journal Nature Machine Intelligence, demonstrated that such molecules could be used by “bad actors” to develop chemical weapons.

While weaponizing AI may seem like the stuff of science fiction, deepfake scams and AI-assisted ransom calls have already begun targeting ordinary Americans like Jennifer DeStefano, an Arizona mother who testified in June before another Senate committee. In January, DeStefano was the victim of a phone scam that used AI to replicate her 15-year-old daughter’s voice. The question, then, is not whether to regulate AI but which regulations to implement.

Crafting Good Legislation Is Just as Important as Crafting It Quickly

Arguably, the United States is significantly behind its international peers in implementing legislation for AI. The European Union has already voted on a proposed law that is expected to go into effect by the end of this year. Meanwhile, U.S. legislators and the White House are still at the drawing board — which may not be a bad thing.

The EU’s draft law, known as the AI Act, would put restrictions on the use of facial recognition software and require creators of AI systems to “disclose more about the data used to create their programs,” according to The New York Times.

The U.S. Chamber of Commerce commented on the proposed legislation, suggesting it could “blunt” AI’s potential and limit innovation. “The U.S. Chamber remains skeptical about the EU’s ability to adopt a proportionate, flexible, and risk-based approach to AI regulation,” it said. (READ MORE: Are Robot Rights Next?)

The American process, while slower, seems to take more pieces of the puzzle into consideration. The solution now being discussed in the Senate, creating a new government agency, encompasses both the need for regulation and the need to research countermeasures against rogue AI.

Bennet’s proposed Federal Digital Platform Commission would be limited — it could design new rules but wouldn’t have a licensing program. Blumenthal commented on the proposal: “You can create 10 new agencies, but if you don’t give them the resources — and I’m not just talking about dollars, I’m talking about scientific expertise — [industry] will run circles around them.”

Witnesses at the hearing suggested securing AI supply chains for items like semiconductors and microchips, investing in testing and auditing practices, and requiring licenses and watermarks.

These solutions, of course, are still general and arguably unenforceable. Take, for instance, microchips. While the U.S. government has been investing billions of dollars in microchip plants in places like Columbus, Ohio, and Syracuse, New York, the country still depends heavily on the Taiwanese microchip industry — which could become a problem if China invades Taiwan.

Testing AI programs for their potential to create harmful products or disseminate misinformation sounds like a great plan, as does auditing the companies behind them; the problem is that no one really knows what to look for.

“As AI develops, we have to make sure we have safeguards in place that will ensure this new technology is actually good for the American people,” Sen. Josh Hawley (R-Mo.) said in his opening remarks. “I’m confident it will be good for the companies…. What I’m less confident of is that the American people will do all right.”




