Departures from OpenAI

Ilya Sutskever, a scientist, and Jan Leike, an ML researcher, resigned from OpenAI in May 2024. They belonged to the team whose job was to ensure that humans remain safe from OpenAI's superintelligence. It is not known whether they will be replaced.

Ilya Sutskever is, in fact, a co-founder of OpenAI. While quitting, he said it was an honour and a privilege to have worked with Altman and the team, and he expressed confidence that OpenAI will build AGI that is both safe and beneficial.

Jan Leike’s departure was more abrupt.

Both were members of the superalignment team at OpenAI. The team was formed in July 2023 because steering and controlling AI systems smarter than human beings requires scientific breakthroughs that do not yet exist. Its goal was to ensure that AI systems much smarter than humans follow human intent. OpenAI acknowledged in July 2023 that no such controls are in place and that a superintelligent AI could go rogue. Current RLHF relies on humans' ability to supervise AI, but it is doubtful whether humans could supervise AI systems much smarter than themselves. The current alignment techniques may not scale to superintelligence, so new scientific and technical breakthroughs are needed.
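To see why RLHF hinges on human supervision, consider the core of the technique: a reward model trained purely on which of two responses a human preferred. The sketch below is a minimal, illustrative example in PyTorch, not OpenAI's actual implementation; the class and function names (RewardModel, preference_loss) and the toy embeddings are assumptions made for the illustration.

```python
# Minimal sketch (not OpenAI's implementation) of a Bradley-Terry style reward
# model trained on human preference labels, as used in RLHF pipelines.
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Toy reward model: scores a fixed-size embedding of a response."""
    def __init__(self, dim: int = 16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(-1)  # one scalar reward per response

def preference_loss(r_chosen: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
    # The only training signal is which response a *human* preferred;
    # if the task exceeds human judgment, this label (and hence the reward) degrades.
    return -torch.nn.functional.logsigmoid(r_chosen - r_rejected).mean()

if __name__ == "__main__":
    torch.manual_seed(0)
    model = RewardModel()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    # Synthetic stand-ins for embeddings of (human-preferred, human-rejected) responses.
    chosen, rejected = torch.randn(64, 16) + 0.5, torch.randn(64, 16)
    for step in range(200):
        loss = preference_loss(model(chosen), model(rejected))
        opt.zero_grad()
        loss.backward()
        opt.step()
    print(f"final preference loss: {loss.item():.4f}")
```

The point of the sketch is that every gradient step flows from a human comparison label; if humans can no longer judge which output of a superintelligent system is better, the reward model has nothing reliable to learn from, which is the scaling problem the superalignment team was created to address.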

Prior to his employment at OpenAI, Leike worked at Google's DeepMind, where he was likewise dedicated to keeping humans safe from superintelligence. Leike stated the alignment problem as follows: machines do not always act in accordance with human intentions. This problem has to be solved, and we should determine what is needed to solve it. In March 2022 Leike wrote about whether the alignment problem lies in the space of problems humans can solve or cannot. Trying to solve the whole problem at once may put the goal out of reach, whereas a less ambitious goal, a minimal viable product (MVP) for alignment, can ultimately lead us to a solution.
