Elon Musk’s lawsuit is putting OpenAI’s safety record under the microscope

Elon Musk’s legal effort to break up OpenAI may hinge on how its for-profit subsidiary advances or detracts from the frontier lab’s founding mission of ensuring humanity benefits from artificial general intelligence.

A federal court in Oakland on Thursday heard a former employee and a former board member testify that the company’s push to bring AI products to market threatens its commitment to AI safety.

Rosie Campbell joined OpenAI in 2021 and worked on its AGI Readiness team. She left the company in 2024 after her team was disbanded. Another safety-focused team, Superalignment, was dissolved around the same time.

“When I joined, it was a research-focused organization and it was common for people to talk about artificial general intelligence and safety issues,” she testified. “Over time it became more of a product-focused organization.”

During questioning, Campbell acknowledged that significant funding was likely necessary to achieve the lab’s goal of building artificial general intelligence, but said that creating a superintelligent computer model without the correct safety measures in place would not fit with the mission of the organization she originally joined.

Campbell pointed to an incident in which Microsoft deployed a version of the company’s GPT-4 model in India through its Bing search engine before the model had been evaluated by OpenAI’s Deployment Safety Board (DSB). She said the model itself did not pose a significant risk, but that the company needs to “set strong precedents as technology becomes more powerful. We want to have good safety processes that we know are followed reliably.”

OpenAI’s lawyers also had Campbell acknowledge that, in her opinion, OpenAI’s safety approach is superior to that of xAI, the AI company founded by Musk that was acquired by SpaceX earlier this year.

OpenAI publicly releases evaluations of its models and shares its safety framework, but the company declined to comment on its current approach to AGI alignment. Dylan Scandinaro, the current head of preparedness, was hired from Anthropic in February. Altman said the hire would allow him to “sleep better at night.”

However, the deployment of GPT-4 in India was one of the red flags that prompted the board of directors of the nonprofit OpenAI to briefly fire CEO Sam Altman in 2023. The incident occurred after employees, including then-chief scientist Ilya Sutskever and then-CTO Mira Murati, complained about Altman’s conflict-avoiding management style. Tasha McCauley, a board member at the time, testified about concerns that Altman was not cooperative enough with the board so its unusual structure could do its job.

McCauley described a widely reported pattern of Altman misleading the board of directors. Notably, Altman lied to another board member about McCauley’s intention to fire Helen Toner, a third board member who had published a white paper containing some implicit criticism of OpenAI’s safety policy. Altman also failed to inform the board of the decision to launch ChatGPT publicly, and members were concerned about his failure to disclose a potential conflict of interest.

“We were a nonprofit board, and our mission was to be able to oversee the for-profit organizations,” McCauley told the court. “Our primary lever for doing this was skepticism. We never had a high degree of confidence that the information being conveyed to us allowed us to make decisions in an informed way.”

However, the decision to fire Altman came at the same time as a tender offer to the company’s employees. As OpenAI employees began to side with Altman and Microsoft worked to restore the status quo, the board eventually reversed course, with members opposed to Altman stepping down, McCauley said.

The nonprofit board’s apparent failure to influence the for-profit organization goes to the heart of Musk’s case: that OpenAI’s transformation from a research organization into one of the world’s largest private companies violates the implicit agreement of the organization’s founders.

David Schizer, the former dean of Columbia Law School who is paid by Musk’s team to serve as an expert witness, echoed McCauley’s concerns.

“OpenAI has emphasized that safety is a core part of its mission, and that it will prioritize safety over profits,” Schizer said. “Part of that is taking safety seriously. If something has to go through a safety review, that review has to happen. It’s a question of process.”

With AI now deeply embedded in for-profit companies, the issue goes beyond one lab. McCauley said OpenAI’s internal governance failure should be a reason to embrace stronger government regulation of advanced AI: “[If] it all comes down to one CEO making those decisions, and we have the public good at stake, that’s very suboptimal.”
