Sam Altman began 2025 with a bold announcement: OpenAI had figured out how to create artificial general intelligence (AGI), a term usually understood as the point at which AI systems can understand, learn, and perform any intellectual task a human can.
In a reflective blog post published over the weekend, he also said the first wave of AI agents could join the workforce this year, marking what he described as a pivotal moment in technological history.
Altman paints a picture of OpenAI's journey from a quiet research lab to a company that claims to be on the verge of creating artificial general intelligence.
The timeline seems ambitious — perhaps overly ambitious — and while ChatGPT celebrated its second birthday just over a month ago, Altman points out that the next paradigm of AI models capable of complex reasoning is already here.
From here, he says, it is all about integrating near-human AI into society until AI outperforms us at everything.
One AGI, one ASI?
Altman's explanation of what artificial general intelligence (AGI) means has remained vague, and his timeline predictions have raised eyebrows among AI researchers and industry experts.
“We are now confident that we know how to build artificial general intelligence as we have traditionally understood it,” Altman wrote. “We believe that, in 2025, we may see the first AI agents ‘join the workforce’ and materially change the output of companies.”
Altman's claim is ambiguous because there is no standard definition of artificial general intelligence; the bar seems to rise each time AI models become more powerful but not necessarily more capable.
“When considering what Altman said about AGI-level AI agents, it is important to focus on how the definition of AGI is evolving,” Humayun Sheikh, CEO of Fetch.ai and chairman of the ASI Alliance, told Decrypt.
“While these systems can indeed pass many of the traditional standards associated with general artificial intelligence, such as the Turing test, that does not mean they are conscious,” Sheikh said. “AI has not yet reached the level of true consciousness, and I don’t think it will for some time.”
The disconnect between Altman's optimism and the expert consensus raises questions about what he means by "artificial general intelligence." His description of “AI agents joining the workforce” in 2025 sounds more like advanced automation than true artificial general intelligence.
“Highly intelligent tools could dramatically accelerate scientific discovery and innovation beyond what we can do on our own, thus dramatically increasing abundance and prosperity,” he wrote.
But is Altman right when he says that artificial general intelligence (AGI) or agent integration will be a reality in 2025? Not everyone is so sure.
“There are simply too many bugs and inconsistencies in current AI models that need to be resolved first,” Charles Wayne, co-founder of decentralized super app Galxe, told Decrypt. “However, it will likely take years rather than decades before we see AGI-level AI agents.”
Some experts suggest Altman's bold predictions may serve another purpose.
After all, OpenAI is burning through cash at an astronomical rate, requiring huge investments to keep its AI development on track.
Promising upcoming breakthroughs could help maintain investor interest despite the company's significant operating costs, according to some.
“We are now confident that we can spin bullshit to unprecedented levels, and get away with it, so we are now aiming even further, to hype in the purest sense of the word. We love our products, but we are here for the next great funding rounds. With unlimited funding, we… https://t.co/cH9xN5oJxK
- Gary Marcus (@GaryMarcus) January 6, 2025
That's quite an asterisk for someone who claims to be on the verge of achieving one of humanity's most important technological breakthroughs.
However, there are others who support Altman's claims.
“If Sam Altman is saying that AGI is coming soon, he probably has some data or business acumen to back up that claim,” Harrison Selitsky, director of business development at digital identity platform SPACE ID, told Decrypt.
Selitsky said that “large-scale intelligent AI agents” may be a year or two away if Altman’s statements are true and the technology continues to develop at the same pace.
The OpenAI CEO hinted that AGI is not enough for him, and that his company is aiming for ASI: a superior state of AI development in which models exceed human capabilities at all tasks.
“We are starting to shift our goal even further to superintelligence in the true sense of the word. We love our current products, but we are here for the glorious future. With superintelligence, we can do anything else,” Altman wrote in his blog.
While Altman did not specify a time frame for ASI, some predict that robots could replace all humans by 2116.
Altman previously said that ASI is just a matter of “a few thousand days.” However, experts from the Forecasting Institute put the 50% probability of achieving ASI at 2060 at the earliest.
Knowing how to build artificial general intelligence (AGI) does not mean being able to build it.
Humanity is still far from such a feat due to limitations in the training techniques and hardware required to process such massive amounts of information, said Yann LeCun, Meta's chief AI scientist.
I said that reaching human-level AI "will take several years if not a decade."
Sam Altman says "several thousand days" which is at least 2000 days (6 years) or perhaps 3000 days (9 years).
So we are not at odds. But I think the distribution has a long tail: it may take... https://t.co/EZmuuWyeWz
– Yann LeCun (@ylecun) October 16, 2024
Eliezer Yudkowsky, an influential AI researcher and philosopher, has also argued that the announcement may fundamentally be a short-term play to benefit OpenAI.
OpenAI benefits from short-term hype, and also from people who later say, "Haha, look at this hype-based field that has not achieved results, and it is not dangerous at all, no need to shut down OpenAI." https://t.co/ybkh9DGUm5
– Eliezer Yudkowsky ⏹️ (@ESYudkowsky) January 5, 2025
Human workers versus artificial intelligence agents
Agentic behavior, unlike AGI or ASI, is already here, and the quality and diversity of AI agents are increasing faster than many expected.
Frameworks such as CrewAI, AutoGen, or LangChain have made it possible to create systems of AI agents with different capabilities, including the ability to work alongside users.
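At their core, these frameworks all implement a variation of the same tool-using "agent loop": a model decides on an action, a tool executes it, and the result feeds back into the next decision. The sketch below illustrates that pattern in plain Python; the `calculator` tool and the `toy_planner` stand-in for the language model are hypothetical, not part of any of the libraries named above.

```python
# Minimal sketch of the agent loop that frameworks like CrewAI,
# AutoGen, and LangChain implement at much greater scale.
# The tool and planner below are toy illustrations only.

def calculator(expression: str) -> str:
    """A trivial 'tool' the agent can invoke."""
    return str(eval(expression, {"__builtins__": {}}))

TOOLS = {"calculator": calculator}

def toy_planner(task: str, history: list) -> dict:
    """Stand-in for the LLM: picks the next action.
    A real framework would prompt a language model here."""
    if not history:
        return {"action": "calculator", "input": task}
    return {"action": "finish", "input": history[-1]}

def run_agent(task: str, max_steps: int = 5) -> str:
    """Loop: plan -> act -> observe, until the planner finishes."""
    history = []
    for _ in range(max_steps):
        step = toy_planner(task, history)
        if step["action"] == "finish":
            return step["input"]
        result = TOOLS[step["action"]](step["input"])
        history.append(result)
    return "gave up"

print(run_agent("6 * 7"))  # → 42
```

Real frameworks differ mainly in how the planner is prompted and how multiple agents hand work to one another, but the loop itself is the common denominator.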
What does this mean for the average citizen, and will this be a risk or a boon for everyday workers?
Experts aren't too worried.
“I don't think we'll see radical changes overnight,” Fetch.ai's Sheikh said. “While there may be some reduction in human capital, especially for repetitive tasks, these advances may also address more complex repetitive tasks that current robotic process automation (RPA) systems cannot handle.”
Selitsky also believes that agents will more likely perform repetitive tasks rather than those that require a certain level of decision-making.
In other words, humans are safe if they can use their creativity and experience to their advantage, and bear the consequences of their actions.
“I don't think decision-making will necessarily be done by AI agents in the near future, because they can think and analyse, but they don't have that human ingenuity yet,” he told Decrypt.
There appears to be some degree of consensus, at least in the short term.
“The main difference is the lack of ‘humanity’ in the AI's approach. It is an objective, data-driven approach to financial research and investing. This can help rather than hinder financial decision-making because it removes some of the emotional biases that often lead to rash decisions.”
Experts are already aware of the potential social implications of adopting AI agents.
Research from the City University of Hong Kong argues that generative AI and agents in general must collaborate with humans rather than replace them so that society can achieve healthy and sustained growth.
“Artificial intelligence has created challenges and opportunities in various fields, including technology, business, education, healthcare, as well as the arts and humanities,” the paper said. “Collaboration between AI and humans is key to meeting the challenges and seizing the opportunities provided by generative AI.”
Despite this push for human-AI collaboration, companies have already begun replacing human workers with AI agents, with mixed results.
In general, they almost always need a human to handle tasks that agents cannot, whether due to hallucinations, training limitations, or a simple lack of contextual understanding.
As of 2024, nearly 25% of CEOs were excited about the idea of having a farm of digitally enslaved agents doing the same work as humans without the labor costs.
Then again, other experts argue that an AI agent could arguably do a better job at roughly 80% of what a CEO does, so no one is truly safe.
Edited by Sebastian Sinclair