
36% of researchers fear the possibility of nuclear-level AI catastrophe – Stanford study


According to Atlas VPN’s analysis, more than a third (36%) of AI researchers believe that artificial intelligence could cause a catastrophe on a nuclear scale before the end of this century. The figure comes from Stanford University’s Artificial Intelligence Index Report 2023, published in April 2023.

The report was compiled after a group of researchers in the United States surveyed experts in natural language processing (NLP) from May to June 2022 on various issues such as the status of artificial general intelligence (AGI), ethics, and NLP fields.

NLP is a subfield of artificial intelligence that focuses on enabling computers to understand spoken and written language in a way that mimics human comprehension. The survey drew 480 participants, of whom 68% had authored at least two papers for the Association for Computational Linguistics (ACL) between 2019 and 2022.

This poll provides one of the most comprehensive views on the opinions of AI experts regarding the development of AI.


According to the survey, over one-third (36%) of the participants either strongly agreed or somewhat agreed with the statement that “AI or machine learning systems’ decisions could lead to a catastrophic event in this century that is at least as devastating as an all-out nuclear war.”

Despite these apprehensions, only 41% of NLP researchers believed that AI should be regulated. However, a majority of AI experts (73%) agreed that artificial intelligence could soon bring about groundbreaking changes in society.

Recently, during an interview with Brook Silva-Braga of CBS News, Geoffrey Hinton, widely regarded as the “godfather of artificial intelligence,” stated that the potential implications of the rapidly evolving technology are comparable to significant historical advancements like the Industrial Revolution, electricity, or the wheel.

When asked about the likelihood of AI causing the extinction of the human race, Hinton cautioned that “it’s not impossible.”

Moratorium for advanced AI systems

In a blog post for OpenAI in February, CEO Sam Altman highlighted the potential consequences of a misaligned superintelligent AGI, stating that “the risks could be extraordinary” and could cause significant harm to the world.

According to a recent article in The Financial Times, Elon Musk, CEO of Tesla and Twitter, who signed the letter calling for a pause in AGI development, is reportedly developing plans to establish a new AI startup to rival OpenAI.

The Stanford study also revealed that 77% of AI experts either strongly agreed or somewhat agreed with the assertion that private AI firms possess excessive influence.

Stanford’s research provides an interesting glimpse into the AI industry’s collective mindset, which appears to be ambivalent about the technology’s trajectory.

It is still unclear whether AI will lead to transformative changes that significantly enhance humanity’s well-being or result in a negative outcome overall.

However, one thing is clear: the advancement of sophisticated AI systems will bring significant shifts in society within this century, so we must prepare ourselves accordingly.


