
OpenAI Co-founder Ilya Sutskever Raises $1 Billion for His New Venture, Safe Superintelligence

By Consultants Review Team Thursday, 05 September 2024

Safe Superintelligence (SSI), a newly founded AI startup co-created by Ilya Sutskever, OpenAI's former chief scientist, has raised a staggering $1 billion to fund the development of safe artificial intelligence systems that surpass human capabilities.

The startup, which currently has a lean team of ten, intends to use the funds to acquire computing power and recruit top AI research and engineering talent. SSI will operate from two hubs: Palo Alto, California, and Tel Aviv, Israel. While SSI has declined to disclose its valuation, sources close to the deal estimate it at around $5 billion.

The significant investment demonstrates sustained confidence in elite AI talent, even as overall funding for foundational AI research declines. The trend has been compounded by tech giants luring away many AI startup founders.

Prominent investors back SSI's vision

The funding round was led by prominent venture capital firms Andreessen Horowitz, Sequoia Capital, DST Global, and SV Angel. NFDG, an investment partnership run by Nat Friedman and SSI CEO Daniel Gross, also participated.

"It's important for us to be surrounded by investors who understand, respect and support our mission, which is to make a straight shot to safe superintelligence and in particular to spend a couple of years doing R&D on our product before bringing it to market," reported Gross.

Addressing AI Safety Concerns

AI safety, a critical area of AI research, aims to prevent AI systems from causing harm or acting against human interests. The issue has gained urgency amid concerns that rogue AI could pose existential risks to humanity.

Sutskever, one of the most influential figures in AI, co-founded SSI in June alongside Gross, who formerly led Apple's AI initiatives, and Daniel Levy, a former OpenAI researcher. The team is focused on assembling a small, highly trusted core of researchers and engineers, with a strong emphasis on cultural fit and shared values.

A New Direction for Sutskever

Sutskever, who played a crucial role in developing OpenAI's most advanced AI models, explained his reason for starting SSI: "I identified a mountain that's a bit different from what I was working on."

Sutskever's departure from OpenAI followed a turbulent period that included the attempted ouster of CEO Sam Altman, a move he initially backed but later reversed. His "Superalignment" team, which was tasked with ensuring AI systems remain aligned with human values, was dismantled after he left OpenAI.

Sutskever, an early champion of the "scaling hypothesis", the idea that AI models improve as computing power grows, said that SSI will take a different approach to scaling.

"Everyone simply says scaling theory. Everyone forgets to ask what we're scaling," he remarked. "Some people can work extremely long hours and go along the same road faster. It's not necessarily our style. But if you do something unusual, you can accomplish something unique."
