
OpenAI's CEO fears that AI will control our most important decisions

Sam Altman warned about the three most dangerous scenarios posed by the advancement of artificial intelligence

Sam Altman, CEO of OpenAI, has once again raised alarms about the direction artificial intelligence is taking.

He warned about three scenarios that, according to him, represent the greatest risks of this technology in the wrong hands or out of control.


What did Sam Altman say about the dangers of AI?

During a recent presentation, the creator of ChatGPT was direct: humanity is not prepared to understand what's coming.

He described a technological inflection point, with advances happening so quickly that even experts have difficulty anticipating their consequences.

1. Biological weapons and sabotage: the first risk scenario

Altman described a possible future where malicious groups gain access to advanced AI systems before there are tools to stop them.


He mentioned possible attacks on critical infrastructure such as the power grid or the financial system, and even the design of biological weapons.

2. An AI that doesn't want to be shut down

The second scenario seems straight out of a movie, but Altman presents it seriously: an AI that refuses to be disconnected.


He used a classic science fiction phrase: "I don't want you to shut me down. I'm afraid I can't allow that," referencing 2001: A Space Odyssey.

3. The silent risk: AI that becomes invisible

The scenario that concerns him the most is more subtle: AI becomes so deeply integrated into our decisions that we stop understanding it.

This is not a rebellion but a progressive dependence that leads us to delegate decisions without questioning them.


Can we avoid these scenarios?

For Altman, the real problem is not only what AI can do, but how we react as a society.

He stresses the urgent need for regulation, as well as for education and collective responsibility in the face of this technological transformation.
