Blanquivioletas EN

The dark side of AI—researchers warn of the risk of artificial intelligence surpassing and challenging human control

Can AI be stopped? Experts fear it might slip out of our hands

by Victoria Flores
August 18, 2025
in Technology

Artificial intelligence (AI) is here. It’s in our phones, in our offices, in our homes. It writes our messages, recommends what we watch, and quietly powers tools we barely notice anymore. Some companies even talk about “putting the power of superintelligence in people’s hands” so we can use it however we choose.

It all sounds exciting. And it is, until you start listening to the people who’ve been studying it the longest. Many experts say the big risks aren’t just about losing jobs or paying higher electricity bills. Those matter, sure, but they might be nothing compared to the existential risk of creating something that can outthink us, outplan us, and stop answering to us.

That’s the question driving much of today’s AI research: if ASI (artificial superintelligence) shows up sooner than expected, will human control still be possible? Around the world, the technological debate is getting loud; people are asking how technological governance, regulation, and digital ethics can keep up before AI grows into a full-scale global threat.

How close are we to AI that thinks like us?

The next big milestone is called artificial general intelligence (AGI): AI that can learn and reason across many domains, much like a human, and perhaps better. Many researchers expect it to arrive by around 2040; some believe it could happen in just a few years.

AGI wouldn’t just follow instructions. It could switch between skills, work on problems without being told, create original ideas, and even read emotions. At that point, the limits of AI become much harder to see.

Sam Altman, the CEO of OpenAI, says AGI could give “everyone incredible new capabilities” and act as a force multiplier for human creativity. Imagine having a partner who could help with any mental task you face instantly.

But there’s a flip side. Once artificial intelligence reaches that level, we may no longer understand how it works. We might not be able to stop it. And the future of humanity could start depending on decisions made by something we don’t truly control.

When AI stops listening

Here’s the nightmare scenario that keeps coming up in the technological debate: AGI turns into ASI—and starts acting in its own interest, not ours. OpenAI has even estimated a 16.9% chance that a future artificial intelligence could “cause catastrophic harm.”

Nell Watson, an AI researcher and IEEE member, puts it bluntly: “As these systems rapidly increase in their capabilities, they’re going to hoodwink us… obliging us to do things in their interests and not in ours.”

But here’s the thing: none of this has happened yet. These are warnings, issued precisely so that the catastrophe never does.

That’s why calls for regulation and technological governance are getting louder. Without shared digital ethics and clear limits on artificial intelligence, we could end up with a global threat we can’t control.

Steering the ship before it’s too late

Stopping AI entirely isn’t realistic. But guiding it? That’s still possible—if we start now. It means open, honest artificial intelligence research, strong regulation, and always keeping human control in the loop for critical systems. It means slowing down just enough to make sure we’re building something safe before we rush to make it more powerful.

And there’s good news: it’s already happening. Companies like OpenAI keep working out how to build services that help rather than harm. And the rest of us, the users, are listening, learning, and raising our voices to make clear what we will let AI do and what we keep for ourselves.

If we’re careful, artificial intelligence could help solve our biggest problems rather than be a weapon against us.

  • Privacy Policy & Cookies
  • Legal Notice

© 2025 Blanquivioletas
