
AGI is coming, but how fast?

Talk of AGI is everywhere. Some say it is five years away; others call it fantasy. Most people can't even agree on what they mean by the term. Still, the question remains: how close are we, actually?

The answer depends on how you define it. For some, AGI is a system that can do anything a human can do. For others, it is simply a model that can solve a wide range of problems without needing to be retrained for each one.

Either way, something has changed. Not long ago, AI wrote emails and generated images. Now models reason, plan, and use tools on their own. That shift is why many people take AGI more seriously.

Where we are now

The AI models we have today are not AGI, but they are starting to look like it, at least in some ways.

Models like GPT-4, Claude 3, and Gemini 1.5 can hold long conversations, follow complex instructions, and use external tools such as browsers or Python sandboxes. Some can reflect on their own output, showing early, primitive forms of planning and self-correction.

On benchmarks, these systems now outperform most humans on bar exams, math olympiads, and the SAT. They still struggle with consistency, abstract reasoning, and physical interaction. But their capabilities in reasoning, memory, and tool use are growing rapidly.

OpenAI's Sam Altman has called GPT-4 a "slightly embarrassing" stage compared to what comes next. Anthropic says Claude 3 approaches "early graduate student" level in some areas. DeepMind, Meta, and xAI are all working on new models they believe could be game changers.

So no, we do not have AGI today. But we are not where we were 18 months ago, either.

Possible paths to AGI

Since people can barely agree on what AGI is, there is no single roadmap to it. But most discussions fall into three broad scenarios.

More of the same

Some experts believe we will reach AGI simply by scaling up current models: bigger networks, faster training, and better data. The idea is that we are already on the right path, and that it is just a matter of time (and compute). This is often called the scaling hypothesis. Ilya Sutskever and others at OpenAI have expressed cautious faith in this approach.

Smarter architectures

Others think we need entirely new model designs, perhaps something built to reason, plan, or learn over time the way humans do. That could mean hybrid systems that combine deep learning with symbolic reasoning, memory modules, or decision trees. Think of it as a model that "thinks" rather than just predicts.

Multi-agent systems and tool use

Some argue that AGI will not be a single model at all, but a network of AIs that collaborate, reason, and act together, perhaps each with its own specialization. Others think that giving models access to tools such as search engines, calculators, or robots can extend their abilities well beyond text prediction.
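
To make the tool-use idea concrete, here is a minimal sketch in Python of the kind of dispatch loop such systems build on: a model decides which tool a question needs, and a controller runs that tool and returns the result. The model here is a hard-coded stand-in (placeholder_model), and the calculator and search tools are toy placeholders; this is not any particular product's API, just an illustration of the control flow.

# A minimal, illustrative sketch of the "tool use" path described above:
# a dispatcher routes a model's requests to external tools. The model is
# a hard-coded stand-in, and both tools are toy placeholders.

from dataclasses import dataclass


@dataclass
class ToolCall:
    tool: str       # which tool the model wants to invoke
    argument: str   # the input to pass to that tool


def placeholder_model(question: str) -> ToolCall:
    """Stand-in for a language model choosing a tool (purely illustrative)."""
    if any(ch.isdigit() for ch in question):
        return ToolCall(tool="calculator", argument=question)
    return ToolCall(tool="search", argument=question)


def calculator(expression: str) -> str:
    # A bare eval is acceptable for a toy demo; a real system would use a safe parser.
    return str(eval(expression, {"__builtins__": {}}))


def search(query: str) -> str:
    # Placeholder for a real search engine or knowledge-base lookup.
    return f"(pretend search results for: {query})"


TOOLS = {"calculator": calculator, "search": search}


def answer(question: str) -> str:
    """One pass of the loop: the model picks a tool, the dispatcher runs it."""
    call = placeholder_model(question)
    return TOOLS[call.tool](call.argument)


if __name__ == "__main__":
    print(answer("17 * 23"))            # routed to the calculator -> 391
    print(answer("who runs DeepMind"))  # routed to the search stub

Real systems replace the stand-in with a language model that emits structured tool calls, and the loop usually repeats until the model decides it can answer directly.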

Each path has trade-offs. Scaling is straightforward, but it runs into hardware and data limits. New architectures might work better, but they are unproven. And multi-agent systems raise new questions about alignment and control.

How close are we now?

We are closer than ever, but still not there. Today's top models, such as GPT-4o, Claude 3, Gemini 1.5, and Llama 3, are more capable, more multimodal, and more generally useful than before. They can write code, pass difficult exams, solve reasoning puzzles, and hold long conversations. But they still lack key traits we would expect from anything truly "general":

  • They don't truly understand the world. They may sound smart, but they often hallucinate or fail simple logic tests, because they don't build a real model of the world; they predict patterns in data.
  • They struggle with long-term memory and planning. Most current models work moment to moment. They can't set goals, reflect deeply, or work steadily over days or weeks.
  • They aren't consistent. Ask the same model the same question twice and you can get two very different answers. That is not how a reliable intelligence should behave.
  • They lack agency. Humans can spot problems, make plans, and act. AI still waits for a prompt; if we don't ask, it doesn't act.

Even so, the gap is shrinking. These models are getting better at reasoning, memory, and tool use. Some can now run simulations, learn from feedback, and self-correct, abilities that seemed years away not long ago.

So we are at a strange moment. AI is clearly powerful and is becoming more useful by the month. But hardly anyone believes we have actually reached AGI.

Where do we go from here?

If we keep moving at this pace, the question is less whether we reach AGI than how we get there. As Sam Altman has suggested, we may not even recognize AGI until we are already using it.

That uncertainty makes this moment both exciting and dangerous. We could be one breakthrough paper away, or still heading down the wrong path entirely. As Yann LeCun, Meta's chief AI scientist, has pointed out, current models still lack a "basic understanding of how the world works." Meanwhile, DeepMind CEO Demis Hassabis says we are "getting closer to very powerful things," but that getting there will take responsibility, collaboration, and time.

So how close are we? Nobody can say for certain. But if progress keeps up, AGI may not remain a science-fiction concept for much longer.

Is it closer than we think?

AGI is not here yet, but something is clearly shifting. Today's models can do things we wouldn't have imagined a year ago, from coding entire apps to drafting movie scripts and helping guide scientific research.

Turing Award winner Yoshua Bengio has warned that AI models are already showing unexpected "emergent properties." Anthropic's co-founders have written about "sharp left turns," the idea that a future model could suddenly gain unanticipated capabilities during training. And Helen Toner, a former member of OpenAI's board of directors, has said: "We don't know how fast things are moving."

Some experts say we are still decades away. Others think we are one surprise away from a tipping point. Nobody knows. But one thing is clear: if AGI is possible, the question is no longer if it arrives, but how prepared we will be when it does.

Whether AGI slips in quietly as just another tool we use or arrives to change everything, the choices we make now will shape how it affects the world.

The post "AGI is coming, but how fast?" appeared first on Metaverse Post.