Safe Superintelligence (SSI), a new artificial intelligence startup founded by Ilya Sutskever (former chief scientist of OpenAI), has raised $1 billion in funding. The deal valued the company, which has existed for only three months, at $5 billion.
SSI aims to build AI systems that surpass human capabilities while remaining safe.
By the way:
📌 The AI problem is far from solved: timeline predictions for fully autonomous cars and for fully automated software development have repeatedly failed, and experts have already begun looking for workarounds, suspecting that the prevailing direction of AI development was poorly chosen and is leading to a dead end.
📌 Even the goal of AGI (artificial general intelligence, on par with human intelligence) has not yet been reached in development.
...And yet superhuman intelligence is being promised right away...