News

Yet that, more or less, is what is happening with the tech world’s pursuit of artificial general intelligence (AGI), ...
The future of AI in 2025 is set to bring transformative advancements, including humanoid robots, infinite-memory systems, and ...
The new company from OpenAI co-founder Ilya Sutskever, Safe Superintelligence Inc. — SSI for short — has the sole purpose of creating a safe AI model that is more intelligent than humans.
As AI approaches human ... impact will “exceed that of the Industrial Revolution,” but it warns of a future where tech firms race to develop superintelligence, safety rails are ignored ...
Ilya Sutskever, OpenAI's former chief scientist, has launched a new company called Safe Superintelligence (SSI), aiming to develop safe artificial intelligence systems that far surpass human ...
The company is incorporated as Safe Superintelligence Inc., a nod to its goal of prioritizing AI safety in its development efforts. “We approach safety and capabilities in tandem,” reads a ...
While the focus of global AI safety discussion has been on the risk of superintelligence and high-risk foundation models, the new guidelines also cover current-generation and narrower AI.
According to Reuters, which broke the news, Sutskever’s company Safe Superintelligence Inc. (SSI) just received the massive check in cash and is now valued at $5 billion.