Українська правда

Researchers warn: AGI is still a long way off, and people's perceptions of AI do not match reality


Leading artificial intelligence researchers have concluded that the current trajectory of AI development is unlikely to lead to the creation of artificial general intelligence (AGI) — an ambitious goal that the industry is striving for, writes Gizmodo.

This is stated in a large-scale report by the Association for the Advancement of Artificial Intelligence (AAAI) on the future of AI research in 2025. The report, which brings together the conclusions of 24 experts and more than 450 surveyed researchers, addresses issues from technical infrastructure to social aspects of AI implementation.

One of the report’s key messages is that the industry is being driven by hype. In a section on the gap between AI perception and reality, 79% of respondents agreed that public perceptions of AI’s capabilities are unrealistic. A further 90% believe this mismatch is hindering research, and 74% say that research directions themselves are being driven by hype.

"Many people are too eager to believe in exaggerated expectations about AI," said MIT researcher Rodney Brooks, who led the panel, adding that the Gartner hype cycle is at play here: over-enthusiasm is often followed by disappointment.

Artificial general intelligence (AGI) is a hypothetical level of machine intelligence at which a system can learn and reason like a human. This goal is considered the "holy grail" of AI, with the potential to radically change how we work across a wide range of fields, from travel planning to healthcare and education.

However, 76% of researchers surveyed agreed that simply scaling up current AI approaches will not lead to AGI.

"Overall, the responses indicate a cautious yet forward-moving approach: AI researchers prioritize safety, ethical governance, benefit-sharing, and gradual innovation, advocating for collaborative and responsible development rather than a race toward AGI," the report says.

Henry Kautz, a computer scientist at the University of Virginia who led the report's section on facts and credibility, pointed out how far the field has come, but also how far it still has to go.

"Five years ago, we could hardly have been having this conversation – AI was limited to applications where a high percentage of errors could be tolerated, such as product recommendation, or where the domain of knowledge was strictly circumscribed, such as classifying scientific images," Kautz notes. "Then, quite suddenly in historic terms, general AI started to work and come to public attention through chatbots such as ChatGPT."

He added that one promising way forward could be to create teams of AI agents that collaborate and check each other's trustworthiness. "That way we can increase trust and reliability," Kautz said.

The report makes clear that the AI community is struggling to address fundamental questions—not just about how to build better systems, but also about how to govern and use them responsibly. While the public debate is often dominated by flashy headlines and corporate announcements, researchers are quietly charting a more cautious course.

The AI boom shows no signs of slowing down, but AAAI’s analysis suggests that for the industry to deliver on its promise, it may need to rethink both its methods and its messaging.
