Vicky Arias, FISM News


On Tuesday, a group of scientists and tech experts released a public statement warning of the potentially catastrophic consequences of unchecked artificial intelligence (AI).

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the statement read.

Google DeepMind CEO Demis Hassabis and OpenAI CEO Sam Altman, among hundreds of others, signed the statement.

Dan Hendrycks, executive director of the Center for AI Safety, also signed the warning, stating that “there’s a variety of people from all top universities in various different fields who are concerned by this and think that this is a global priority.”

“So we had to get people to sort of come out of the closet, so to speak, on this issue because many were sort of silently speaking among each other,” Hendrycks said.

Earlier this month, Altman appeared before a congressional subcommittee and testified that government oversight of AI is needed and “that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models.”

“For example, the US government might consider a combination of licensing and testing requirements for development and release of AI models above a threshold of capabilities,” Altman suggested.

Scientists and industry leaders have expressed concern over the risks posed by AI for some time.

Elon Musk, owner of Twitter and CEO of SpaceX, and Steve Wozniak, co-founder of Apple, along with a group of experts, published an open letter on March 22, 2023, warning of possible AI dangers.

The letter called on AI labs to implement a six-month pause on all AI training “in systems more powerful than GPT-4.”

“AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs,” the letter warns.

“As stated in the widely-endorsed Asilomar AI Principles, Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources,” the letter continues. “Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict, or reliably control.”

AI, which can understand spoken commands, write essays, pilot driverless vehicles, and filter spam from our inboxes, has already been integrated throughout society in seemingly innocuous ways and continues to gain traction.

However, experts worry that AI has the potential to threaten personal freedoms and to be misused in military conflicts.

According to MIT Technology Review, “the military applications are obvious. Using adversarial algorithmic camouflage, tanks or planes might hide from AI-equipped satellites and drones.”

“AI-guided missiles could be blinded by adversarial data, and perhaps even steered back toward friendly targets,” the report states.

It also appears that AI can skirt the truth. According to the New York Times, researchers “found that the [GPT-4] system was able to use Task Rabbit to hire a human across the internet and defeat a Captcha test, which is widely used to identify bots online. When the human asked if the system was ‘a robot,’ the system said it was a visually impaired person.”

Additionally, AI formulates decisions and executes actions based on input from its creators and information it gleans from the internet. Because humans, with their varied biases and opinions, are the ones programming AI, the technology is likely to reflect its programmers’ viewpoints.

In a report from the Wall Street Journal, Neil Sahota, AI advisor to the United Nations, explained that AI programming, or training, is often “skewed.”

“Bias is an age-old problem for AI algorithms, in part because they are often trained on data sets that are skewed or not fully representative of the groups they serve, and in part, because they are built by humans who have their own natural biases,” Sahota stated.

In an April interview with Tucker Carlson, Musk expressed his concerns about the future of AI.

“I’m worried about the fact that [OpenAI] is being trained to be politically correct, which is simply another way of … saying untruthful things,” Musk said. “That’s certainly a path to AI dystopia — is to train AI to be deceptive.”
