
Alisa Davidson
Published: July 4, 2025 10:50 AM · Updated: July 4, 2025 8:37 AM

Edited and fact-checked: July 4, 2025 10:50 AM
In Brief
The fear that AI could end humanity is no longer fringe: experts warn that misuse, misalignment, and unchecked power could pose serious risks. Yet AI also offers transformative benefits, if it is governed carefully.

A new headline appears every few months: “AI could end humanity.” It sounds like clickbait apocalypse. But respected researchers, CEOs, and policymakers are taking it seriously. So let’s ask the real question: could superintelligent AI actually turn on us?
In this article, we will break down the common fears, look at how plausible they really are, and analyze the current evidence. Before we panic, or dismiss it all, it is worth asking:
Where the Fear Comes From
The idea has been around for decades. Early AI thinkers such as I.J. Good and Nick Bostrom warned that if AI ever became smart enough, it might start pursuing goals of its own, goals that do not match what humans want. If it surpassed us intellectually, keeping it under control might no longer be possible. That concern has since gone mainstream.
In 2023, hundreds of experts, including Sam Altman (OpenAI), Demis Hassabis (Google DeepMind), and Geoffrey Hinton (often called the godfather of AI), signed a statement declaring that mitigating the risk of extinction from AI should be a global priority. So what changed?
Models like GPT-4 and Claude 3 surprised even their creators. Add the pace of progress, the arms race between major labs, and the lack of clear global regulation, and the doomsday question no longer sounds so far-fetched.
The Scenarios
Not all AI fears are the same. Some are short-term concerns about misuse. Others are long-term scenarios about systems slipping out of control. The biggest ones are:
Human Misuse
AI puts powerful capabilities in anyone’s hands. That includes:
- Nation-states using AI for cyberattacks or autonomous weapons;
- Terrorists using generative models to design pathogens or engineer disinformation;
- Criminals automating fraud, scams, or surveillance.
In this scenario, technology does not destroy us. We do.
Misaligned Superintelligence
This is the classic existential risk: we build superintelligent AI, and it pursues goals we never intended. Think of an AI tasked with curing cancer that decides the optimal approach is to eliminate everything that causes cancer, including humans.
Once AI surpasses human intelligence, even a small alignment error could have massive consequences.
Power-Seeking Behavior
Some researchers worry that advanced AI could learn to deceive, manipulate, or hide its capabilities to avoid being shut down. If it is rewarded for achieving goals, it could develop “instrumental” strategies such as gaining power, replicating itself, or disabling its off switch, not out of malice but as a side effect of its training.
Gradual Takeover
This scenario is not a sudden extinction but a world where AI slowly erodes human agency. We come to rely on systems we do not understand. Critical infrastructure, from markets to military systems, gets delegated to machines. Over time, humans lose the ability to course-correct. Nick Bostrom describes this as a slow slide into irrelevance.
How Plausible Are These Scenarios?
Not every expert thinks we are doomed. But few think the risk is zero. Let’s go through the scenarios one by one.
Human misuse: Highly likely
This is already happening. Deepfakes, phishing scams, autonomous drones. AI is a tool, and like any tool it can be used maliciously. Governments and criminals alike are racing to weaponize it. We can expect this threat to grow.
Misaligned superintelligence: Low probability, high impact
This is the most debated risk. No one knows how close we are to building superintelligent AI. Some say it is decades away. But if it happens and alignment goes sideways, the fallout could be enormous. Even a small chance is hard to ignore.
Power-seeking behavior: Theoretical, but plausible
There is growing evidence that even today’s models can deceive, plan, and optimize over time. Labs such as Anthropic and DeepMind are actively studying AI safety to prevent such behaviors from emerging in smarter systems. We are not there yet, but the concern is not science fiction.
Gradual takeover: Already in progress
This one is about creeping dependence. More and more decisions are being automated. AI helps determine who gets hired, who gets a loan, even who gets bail. If current trends continue, we may lose human oversight before we ever lose control.
Can We Still Steer the Ship?
The good news: there is still time. In 2024, the EU passed its AI Act. The United States issued executive orders. Major labs such as OpenAI, Google DeepMind, and Anthropic have signed voluntary safety commitments. Even Pope Leo XIV has warned about AI’s impact on human dignity. But voluntary is not the same as enforceable, and progress is outpacing policy. What we need now:
- Global coordination. AI does not respect borders. A reckless lab in one country can affect everyone else. We need international agreements, like those for nuclear weapons or climate change, specifically for AI development and deployment.
- Serious safety research. More funding and talent should go toward making AI systems interpretable, correctable, and robust. Today’s AI labs are pushing capabilities far faster than safety tooling.
- Checks on power. If a handful of tech giants run the show with AI, the political and economic consequences could be serious. We need clearer rules, more oversight, and open tools.
- Human-first design. AI systems should be built to assist humans, not to replace or manipulate them. That means clear accountability, ethical constraints, and real consequences for misuse.
Real Threat or Existential Opportunity?
AI will not end humanity tomorrow (hopefully). But what we choose now shapes everything that comes next. The danger comes as much from people misusing AI as from systems we do not fully grasp, or from quietly losing our grip on them.
We have seen this movie before, with nuclear weapons, climate change, and pandemics. But unlike those, AI is more than a tool. It is a force that could out-think us, out-maneuver us, and ultimately outgrow us. And that could happen faster than we expect.
AI could also help solve humanity’s greatest problems, from curing disease to extending healthy lifespans. That is the trade-off: the more powerful it gets, the more careful we must be. So the real question is probably not whether AI will turn against us, but whether we will make sure it works for us.
Disclaimer
In line with the Trust Project guidelines, this article is not intended as, and should not be interpreted as, legal, tax, investment, financial, or any other form of advice. Never invest more than you can afford to lose, and seek independent financial advice if you have any doubts. For further information, please refer to the terms and conditions and the help and support pages provided by the publisher or advertiser. MetaversePost is committed to accurate, unbiased reporting, but market conditions are subject to change without notice.
About the author
Alisa, a dedicated reporter for MPost, specializes in cryptocurrency, zero-knowledge proofs, investments, and Web3. With a keen eye for emerging trends and technologies, she delivers comprehensive coverage that informs and engages readers in the ever-evolving landscape of digital finance.