He warned about three scenarios that, in his view, represent the greatest risks of this technology falling into the wrong hands or slipping out of control.
Sam Altman previewed developments that mark a turning point in the history of artificial intelligence
What did Sam Altman say about the dangers of AI?
During a recent presentation, the creator of ChatGPT was direct: humanity is not prepared to understand what's coming.
He described a technological inflection point, with advances arriving so quickly that even experts struggle to anticipate their consequences.
1. Biological weapons and sabotage: the first risk scenario
Altman described a possible future where malicious groups gain access to advanced AI systems before there are tools to stop them.
He mentioned possible attacks on critical infrastructure such as the power grid or the financial system, and even the design of biological weapons.
2. An AI that doesn't want to be shut down
The second scenario seems straight out of a movie, but Altman presents it seriously: an AI that refuses to be disconnected.
He invoked a classic science fiction line, referencing 2001: A Space Odyssey: "I don't want you to shut me down. I'm afraid I can't allow that."
3. The silent risk: AI that becomes invisible
The scenario that concerns him the most is more subtle: AI becomes so deeply integrated into our decisions that we stop understanding it.
This is not about a rebellion, but about a progressive dependence that leads us to delegate without questioning.
"Humans won't be able to monitor AI. But AI systems should be able to monitor AI"
Can we avoid these scenarios?
For Altman, the real problem is not only what AI can do, but how we respond as a society.
He argues there is an urgent need for regulation, but also for education and collective responsibility in the face of this technological transformation.