A classic example of this is the “hallucination” problem, where an LLM confidently fabricates information. Even the latest model that grabbed headlines recently makes mistakes: an economist told me that when he asked about the economic situation in Sri Lanka on 21 August 2025, the response was based on the country’s condition in 2022, when it was declared bankrupt.
When made aware of the flaw in its response, it quickly scraped the internet for the latest data and rectified the mistake. Of course, the model in question would not make the same mistake again, having ‘learned’ from it.
The issue of data currency also remains. While some models now have real-time access to the web, many still rely on training data that is months or even years old, which leads to factual errors.
Beyond technical limitations, the financial and environmental costs of AI are becoming a significant point of discussion. The sheer scale required to train and run these models is staggering. They require:
Massive computational power: Training a single state-of-the-art model can cost tens of millions of dollars and require thousands of specialized processors.
Enormous energy consumption: The data centres that house these models use huge amounts of electricity. As models get bigger, their energy footprint grows, raising concerns about sustainability.
These immense costs are forcing many companies to re-evaluate their AI strategies. It’s not a matter of a single company “downsizing its AI sector” but rather a broader strategic pivot. Tech giants like Meta and Google, while still heavily invested in AI, are focusing on more practical, cost-effective applications rather than simply chasing bigger and bigger models.
The AI “boom” isn’t over, but it is maturing. The initial excitement is giving way to a more pragmatic approach. The focus is shifting from “What amazing thing can this model do?” to “How can we make this model useful, safe, and cost-effective?”
The current challenges are forcing the industry to address fundamental issues: making AI more reliable, transparent, and sustainable. As the hype subsides, the real, long-term work of building useful and responsible AI is just beginning.