Ilya Sutskever, OpenAI’s co-founder and former chief scientist, is starting a new AI company focused on safety. In a post on Wednesday, Sutskever revealed Safe Superintelligence Inc. (SSI), a startup with “one goal and one product”: creating a safe and powerful AI system. The announcement describes SSI as a startup that “approaches safety and capabilities in tandem,” allowing the company to advance its AI system rapidly while still prioritizing safety. It also highlights the external pressures faced by AI teams at companies like OpenAI, Google, and Microsoft, stating that SSI’s “singular focus” enables it to avoid “distraction by management overhead or product cycles.”
“Our business model means safety, security, and progress are all insulated from short-term commercial pressures,” the announcement reads. “This way, we can scale in peace.” In addition to Sutskever, SSI is co-founded by Daniel Gross, a former AI lead at Apple, and Daniel Levy, previously a member of the technical staff at OpenAI.

Last year, Sutskever led the push to oust OpenAI CEO Sam Altman. He left OpenAI in May, hinting at the start of a new project. Shortly after his departure, AI researcher Jan Leike announced his resignation from OpenAI, citing safety processes that had “taken a backseat to shiny products.” Gretchen Krueger, an OpenAI policy researcher, also raised safety concerns when announcing her departure.
As OpenAI moves ahead with partnerships with Apple and Microsoft, SSI is unlikely to pursue similar collaborations anytime soon. In an interview with Bloomberg, Sutskever said that SSI’s first product will be safe superintelligence and that the company “will not do anything else” until then, an approach meant to keep safety the top priority, insulated from the pressures of rapid commercialization and partnership demands.