
OpenAI Forms New Safety Committee Led by CEO Sam Altman, Raising Concerns About Independence

OpenAI, the research lab known for its powerful AI tools like ChatGPT, announced the creation of a new “Safety and Security Committee” on May 28, 2024. The committee, led by CEO Sam Altman alongside board members, will be responsible for making recommendations on critical safety and security decisions for OpenAI’s projects and operations over the next 90 days.

The formation of this committee comes on the heels of several high-profile departures from OpenAI’s previous safety team, including co-founder Ilya Sutskever and Jan Leike. Both expressed concerns that safety had become a secondary focus at OpenAI in favor of product development. Leike co-led the now-disbanded “Superalignment” team, which aimed to develop methods for controlling highly advanced AI.

The committee itself is a blend of internal and external expertise. Alongside Altman, it includes board directors Bret Taylor (formerly of Salesforce) and Adam D’Angelo (of Quora). Technical and policy experts from within OpenAI will also hold key roles, focusing on areas like preparedness, safety systems, and alignment science (ensuring AI goals align with human values).

Critics argue that a truly independent safety body would be more effective in addressing potential conflicts of interest. Can Altman objectively prioritize safety when there’s pressure to deliver commercially viable AI products? 

Some observers have grown uneasy about OpenAI’s handling of AI safety, particularly after the recent resignations from its safety team. Others worry that the committee’s composition, mixing internal and external members, could create conflicts of interest and compromise its independence on security matters.

But there’s another way to look at it. Perhaps Altman’s leadership signals a deeper shift: OpenAI acknowledging that safety isn’t a separate concern, but something that must be embedded within its core operations.

From bias to misuse, the stakes are high, and OpenAI’s approach is just one data point in this ongoing debate. Only time will tell whether the revamped safety structure, led by Altman, fosters genuine safety or simply creates an illusion of progress.

The announcement coincides with OpenAI’s testing of a new AI model, though the company has not confirmed whether it’s the highly anticipated GPT-5. This news follows recent controversy surrounding “Sky,” one of the voices offered for ChatGPT. The voice bears a striking resemblance to actress Scarlett Johansson, who has publicly stated she never authorized its use. While OpenAI denies any intentional likeness and says it contacted Johansson only after casting the voice actor, the incident highlights ongoing concerns about transparency and responsible development at OpenAI.



Author

Abhinandan Jain
Abhinandan, an e-commerce student by day and a tech enthusiast by night, became a part of Alltech through our Student Skill Development Initiative. With a deep fascination for emerging markets like AI and robotics, he is a passionate advocate for the transformative potential of technology to make a positive global impact, and he is committed to utilizing his skills to further this cause.