
OpenAI launches safety committee as it commences training new model


OpenAI announced on Tuesday the formation of a Safety and Security Committee, to be led by board members including CEO Sam Altman.

Other directors, Bret Taylor, Adam D’Angelo, and Nicole Seligman, will also play key roles in the committee’s leadership, as stated in a company blog post.

Concerns about safety have arisen with OpenAI’s chatbots, powered by generative AI, which can engage in human-like conversations and generate images from text prompts.

The committee’s primary responsibility will be to provide recommendations to the board regarding safety and security matters related to OpenAI’s projects and operations.

“A new safety committee signifies OpenAI completing a move to becoming a commercial entity, from a more undefined non-profit-like entity,” noted D.A. Davidson managing director Gil Luria. “That should help streamline product development while maintaining accountability.”

Former Chief Scientist Ilya Sutskever and Jan Leike, leaders of OpenAI’s Superalignment team, responsible for ensuring AI remains aligned with its intended objectives, departed the company earlier this month.

OpenAI dissolved the Superalignment team in May, less than a year after its establishment, with some team members reassigned to different groups, CNBC reported following the notable exits.

Starting its tenure with a pivotal mission, the committee aims to evaluate and fortify OpenAI’s existing safety protocols within the next 90 days. Subsequently, it will present its recommendations to the board for review.

Following the board’s deliberation, OpenAI plans to publicly disclose any implemented recommendations, as outlined by the company.

Additional members of the committee include newly appointed Chief Scientist Jakub Pachocki and Matt Knight, who leads security efforts.

In addition to its internal committee, OpenAI will seek guidance from external experts, including Rob Joyce, a former U.S. National Security Agency cybersecurity director, and John Carlin, a former Department of Justice official.

While OpenAI didn’t disclose specific details about its upcoming “frontier” model, it mentioned that this endeavour aims to elevate its systems to new heights of capability as part of its journey towards achieving Artificial General Intelligence (AGI).

Earlier in May, the company unveiled a new AI model capable of engaging in realistic voice conversations and interactions across text and images.

