What the future looks like with AI

February 13, 2025

“By the time the decade is out, we’ll have billions of vastly superhuman AI agents running around. These superhuman AI agents will be capable of extremely complex and creative behaviour; we will have no hope of following along. We’ll be like first graders trying to supervise people with multiple doctorates.”

So says Leopold Aschenbrenner in his report “Situational Awareness: The Decade Ahead”.

Artificial General Intelligence (AI that matches or exceeds human intelligence) draws closer by the month. After that comes “Superintelligence” and some potentially frightening outcomes in which humans simply cannot keep up with AI. It would be like asking a street vendor to fact-check the studies of 1,000 PhDs in a matter of hours.

It has been relatively easy to control the behaviours of AI systems such as ChatGPT because humans rate whether the AI’s behaviour is good or bad; good behaviours are reinforced and bad ones penalised. In that way, the system learns to follow human preferences.
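The feedback loop described above can be caricatured in a few lines of code. This is a deliberately toy sketch, not a real training algorithm: the “model” is just a table of scores for candidate replies, and each human rating nudges a score up or down until the preferred reply wins out. All names here (`reinforce`, `preferred`, the sample replies) are illustrative inventions.

```python
# Toy sketch of learning from human feedback (NOT a real RLHF
# implementation): the "model" keeps a score per candidate reply,
# and each human rating nudges that score up or down.

def reinforce(scores, reply, rating, lr=1.0):
    """Raise a reply's score for a good rating (+1), lower it for a bad one (-1)."""
    scores[reply] = scores.get(reply, 0.0) + lr * rating
    return scores

def preferred(scores):
    """The reply the model now favours: the highest-scoring one."""
    return max(scores, key=scores.get)

scores = {"helpful answer": 0.0, "rude answer": 0.0}
reinforce(scores, "helpful answer", +1)   # a human rated this good
reinforce(scores, "rude answer", -1)      # a human rated this bad
print(preferred(scores))                  # -> helpful answer
```

Real systems replace the score table with a learned reward model and update billions of parameters rather than two entries, but the principle is the same: human judgements steer the system towards preferred behaviour.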

The problem comes with controlling a superintelligent system that generates millions of lines of code in a language it invented. Humans simply would not know if the code contains a security backdoor. Heck, it could also teach itself to lie, hack, deceive and seek power.

Think of worst-case scenarios: hacking the military, giving nuclear secrets to terrorist groups, hacking energy grids and air traffic systems. Frightening stuff indeed.

Aschenbrenner is confident (but nervous) that a solution can be found to control the behaviours of a super-intelligent system of the kind bearing down on us over the next decade.

“This will be an incredibly volatile period, potentially with the backdrop of an international arms race, tons of pressure to go faster, wild new capabilities advances every week with basically no human-time to make good decisions, and so on. We’ll face tons of ambiguous data and high-stakes decisions.”

The consequences of a dictatorial regime gaining the upper hand in the race for super-intelligence are chilling.

“A dictator who wields the power of superintelligence would command concentrated power unlike any we’ve ever seen. In addition to being able to impose their will on other countries, they could enshrine their rule internally. Millions of AI-controlled robotic law enforcement agents could police their populace; mass surveillance would be hypercharged; dictator-loyal AIs could individually assess every citizen for dissent, with advanced near-perfect lie detection rooting out any disloyalty.”

Most importantly, the robotic military and police force could be wholly controlled by a single political leader, and programmed to be perfectly obedient—no more risk of coups or popular rebellions.

In other words, whereas past dictatorships were never permanent, superintelligence could eliminate basically all historical threats to a dictator’s rule and lock in their power – for good.

This is why, insists Aschenbrenner, the West must win the race for super-intelligence and guide it towards a more benign outcome.

The dangers of falling behind – even by a few months – are potentially lethal. Aschenbrenner argues that the US was able to defeat Iraq in the 1990s, with a smaller ground force, because it had superiority in guided and smart munitions, stealth fighters and other technological advantages. Gaining a technological advantage over your sworn enemy, even if it’s just two or three months, can be decisive in future battles where super-intelligence is involved.