In a setback to OpenAI's efforts to ensure the safe development of superintelligent AI, several key members of its Superalignment team recently resigned.
Several members of OpenAI's Superalignment team, including co-lead Jan Leike, quit this week over how resources were allocated. According to TechCrunch, a team member said OpenAI had promised the group 20% of its compute but often delivered only a fraction of that, hindering its research.
On Friday, Leike, who previously worked at DeepMind and contributed to ChatGPT, GPT-4, and InstructGPT, explained his resignation, saying he and OpenAI had finally "reached a breaking point" after he had been "disagreeing" with management for some time over "core priorities." He argued that preparation for future AI models had neglected security, monitoring, and societal impact.
Leike and OpenAI co-founder Ilya Sutskever, who resigned last week, led the Superalignment team, formed in July. The team aimed to solve the core technical challenges of controlling superintelligent AI within four years. Its scientists and engineers, drawn from several departments, conducted safety research and awarded millions of dollars in grants to external researchers.
On X, Leike warned that creating machines more intelligent than humans is "an inherently dangerous endeavor."
There have been repeated warnings. But is anyone really listening?