Leaders in the AI technology industry have made a bold declaration: humanity has entered the era of artificial superintelligence (ASI), and there is no turning back. "We are past the event horizon; the takeoff has started," states a leading figure in the field. This revolution is no longer a future prospect; it is happening right now.
The absence of obvious signs - robots walking the streets, diseases eradicated overnight - masks the fact that a profound transformation is already underway. Inside technology labs, systems with the potential to surpass general human intellect are taking shape.
Current AI systems, such as ChatGPT, already wield enormous influence: hundreds of millions of people rely on them daily for increasingly consequential tasks. This points to a troubling reality: even the smallest flaws can cause widespread harm when multiplied across such a massive user base.
A proposed timeline reveals a staggering pace of development:
By 2025: The arrival of "AI agents" capable of performing real cognitive work.
By 2026: Systems that can discover novel insights on their own, meaning they generate original knowledge rather than merely processing existing information.
By 2027: Robots capable of performing complex tasks in the real world.
Each leap in capability draws a clear path toward superintelligence - systems whose intellectual capacity vastly outstrips human ability in almost every domain. "We do not know how far beyond human-level intelligence we can go, but we are about to find out," an expert notes.
What makes current AI development particularly noteworthy is the "larval version of recursive self-improvement". Simply put, today's AI is helping researchers build tomorrow's more powerful AI systems.
"Advanced AI can help us accelerate research on itself," a researcher explains. "If we can do a decade's worth of research in a year, or even a month, then the rate of progress will be completely different."
This acceleration is amplified by other feedback loops: economic value drives infrastructure development, which enables more powerful systems, which in turn generate more economic value.
The rise of superintelligence will profoundly reshape society. "Whole classes of jobs" will disappear, potentially faster than new roles can be created or the workforce retrained. The silver lining, however, is that the world will become so much wealthier that societies can consider policies they never could afford before.
Amid these prospects, the greatest challenge emerges: the alignment problem. This is the issue of ensuring that superintelligent systems act in accordance with humanity's core interests and values. How can we guarantee that AI will understand and act on "what we truly want" consistently and safely?
Defining "what we truly want" is an incredibly complex task in a diverse global society. The conversation on this issue needs to start as soon as possible.
When experts claim that "intelligence too cheap to meter" is within reach, it is no longer science fiction. The race to superintelligence isn't coming; it's already here. And humanity must grapple with what that means.