OpenAI Launches Security Committee Amid Ongoing Criticism
2024-05-29 03:01:11 | securityboulevard.com

OpenAI has a new Safety and Security Committee in place less than two weeks after disbanding its "superalignment" team, a year-old unit tasked with focusing on the long-term effects of AI.

In a blog post Tuesday, the Microsoft-backed company said the new committee will comprise CEO Sam Altman and board members Bret Taylor – the board's chair – Adam D'Angelo, and Nicole Seligman. The group "will be responsible for making recommendations to the full Board on critical safety and security decisions for OpenAI projects and operations," OpenAI wrote.

The new committee comes in the wake of the departures of two key members of the superalignment team: OpenAI co-founder Ilya Sutskever and AI researcher Jan Leike. Leike announced Tuesday on X (formerly Twitter) that he was joining OpenAI rival Anthropic to work on another "superalignment" team.

“My new team will work on scalable oversight, weak-to-strong generalization, and automated alignment research,” wrote Leike, who recently criticized OpenAI, reportedly writing that OpenAI’s “safety culture and processes have taken a backseat to shiny products.”

Continuing Turmoil

The shutdown of the superalignment team and the departures of Sutskever and Leike – and now the creation of an executive-led safety and security group – are only the latest moments in an ongoing in-house drama. That drama burst into public view in November 2023, when the board fired Altman as CEO, saying he had not been open with its members, amid reports that some in the company – including Sutskever – were concerned that OpenAI's technologies were being developed too quickly, with innovation outpacing the controls necessary to ensure that AI can be used safely.

However, less than a week later, Altman was back as CEO, with a revamped board in place and some executives let go. Two of the former board members, speaking to The Economist, said they were concerned that OpenAI – along with other high-profile AI companies like Microsoft and Google – is innovating too rapidly to take into account the adverse effects that could come with the technology.

Helen Toner, with Georgetown University’s Center for Security and Emerging Technology, and tech entrepreneur Tasha McCauley argued that AI companies can’t self-govern and that government oversight is needed.

The rollout of AI can’t be controlled only by private companies, Toner and McCauley said.

Security and Safety Concerns

AI – particularly in this relatively new era of generative AI – has generated almost as many security and safety concerns as it has excitement about its potential. Those concerns span everything from bias and discrimination in model outputs to hallucinations – made-up answers presented as fact – as well as data leaks, data sovereignty and compliance worries, and the use of the technology by threat groups.

It’s unclear whether the new Safety and Security Committee will ease any of those concerns. Ilia Kolochenko, co-founder and CEO of IT security firm ImmuniWeb, called OpenAI’s move welcome but questioned its societal benefits.

“Making AI models safe, for instance, to prevent their misuse or dangerous hallucinations, is obviously essential,” Kolochenko wrote in an email to Security Boulevard. “However, safety is just one of many facets of risks that GenAI vendors have to address.”

One area that needs even more attention than the safety of AI, he argued, is the unauthorized collection of data from across the internet for training LLMs and the "unfair monopolization of human-created knowledge."

“Likewise, being safe does not necessarily imply being accurate, reliable, fair, transparent, explainable and non-discriminative – the absolutely crucial characteristics of GenAI solutions,” Kolochenko noted. “In view of the past turbulence at OpenAI, I am not sure that the new committee will make a radical improvement.”

OpenAI said the new committee's first task will be to evaluate and further develop OpenAI's processes and safeguards over the next 90 days and then bring recommendations back to the full board, with OpenAI publicly sharing the recommendations that are approved.

The Worry About AGI

The company noted that the committee comes just as OpenAI begins to train its next frontier model, which will succeed GPT-4 and bring the company even closer to achieving artificial general intelligence (AGI) – the point where AI systems can learn, understand, and perform as well as humans, only much faster.

Reaching that point has long been a goal for Altman and OpenAI, though it raises myriad concerns about what it could mean for societies and humanity itself. In a blog post last year, Altman wrote that AGI could "help us elevate humanity by increasing abundance, turbocharging the global economy, and aiding in the discovery of new scientific knowledge that changes the limits of possibility."

He added that AGI "would also come with serious risks of misuse, drastic accidents and societal disruption. Because the upside of AGI is so great, we do not believe it is possible or desirable for society to stop its development forever; instead, society and the developers of AGI have to figure out how to get it right."

Leike reportedly said earlier this month that creating "smarter-than-human machines is an inherently dangerous endeavor," adding that OpenAI is "shouldering an enormous responsibility on behalf of all of humanity."

Frontier models are the most cutting-edge AI models, designed to push the evolution of AI systems forward. In its blog post, OpenAI said that "while we are proud to build and release models that are industry-leading on both capabilities and safety, we welcome a robust debate at this important moment."

Source: https://securityboulevard.com/2024/05/openai-launches-security-committee-amid-ongoing-criticism/