Three years ago, ChatGPT was born. It amazed the world and sparked unprecedented investment and excitement in the field of artificial intelligence. Today, ChatGPT is still a baby, but public sentiment around the AI boom has turned sharply negative. The shift began this summer when OpenAI released GPT-5 to mixed reviews, mostly from casual users who, unsurprisingly, judged the system on its superficial flaws rather than its core capabilities.
Since then, experts and influencers have declared that AI progress is slowing, that scaling has “hit a wall,” and that the entire field is just another tech bubble inflated with noisy hype. Indeed, many influencers have seized on the disparaging phrase “AI slop” to dismiss the flood of remarkable images, documents, videos, and code generated by frontier AI models.
This perspective is not only wrong, but dangerous.
I wonder where all these tech-bubble “experts” were when electric scooter startups were being touted as a transportation revolution and animated NFTs were being auctioned for tens of millions. They were probably too busy buying worthless land in the metaverse or adding to their GameStop positions. But when it comes to the AI boom, arguably the most significant driver of technological and economic transformation in the last 25 years, journalists and influencers cannot write the word “slop” often enough.
Are the critics protesting too much? After all, objectively speaking, AI is far more powerful than the overwhelming majority of computer scientists predicted just five years ago, and it continues to improve at an astonishing rate. The impressive leap demonstrated by Gemini 3 is just the latest example. Meanwhile, McKinsey recently reported that 20% of organizations already derive tangible value from genAI, and a recent Deloitte survey indicates that 85% of organizations increased their investments in artificial intelligence in 2025, with 91% planning to increase them again in 2026.
None of this fits the “bubble” narrative or the dismissive language of “slop.” As a computer scientist and research engineer who began working with neural networks in 1989 and has followed progress through cold winters and hot growing seasons ever since, I am amazed almost every day by the rapidly growing capabilities of frontier AI models. When I talk with other professionals in this field, I hear similar views. If anything, the pace of AI development leaves many experts feeling overwhelmed and, frankly, a little scared.
The dangers of AI denial
So why does the public buy the narrative that AI is failing, that the results are “half-baked,” and that the AI boom lacks authentic use cases? Personally, I think it’s because we have fallen into a collective state of denial, clinging to the narrative we wish to hear in the face of strong evidence to the contrary. Denial is the first stage of grief, and therefore a reasonable response to the deeply unsettling prospect that we humans may soon lose cognitive supremacy here on planet Earth. In other words, the exaggerated AI bubble narrative is a social defense mechanism.
Believe me, I get it. I have warned about the destabilizing risks and demoralizing influence of superintelligence for over a decade, and I, too, believe that artificial intelligence is becoming too intelligent, too quickly. The fact is that we are rapidly moving toward a future in which widely available AI systems will be able to outperform most humans at most cognitive tasks, solving problems faster, more accurately, and, yes, more creatively than any of us. I emphasize “creativity” because AI denialists often insist that certain human traits (especially creativity and emotional intelligence) will always be beyond the reach of AI systems. Unfortunately, there is little evidence to support this belief.
When it comes to creativity, today’s AI models can already generate content faster and with greater variety than any individual human. Critics say that true creativity requires internal motivation. I appreciate this argument, but I suspect it is circular: we are defining creativity by how we experience it, not by the quality, originality, or usefulness of the output. Nor do we know whether AI systems will develop internal drives or a sense of agency. Either way, if AI can produce original work that competes with most human professionals, the impact on creative jobs will still be devastating.
The problem of AI manipulation
Our human advantage in emotional intelligence is even more precarious. Artificial intelligence will likely soon be able to read our emotions faster and more accurately than any human, tracking subtle cues in our microexpressions, vocal patterns, posture, gaze, and even breathing. And as we integrate AI assistants into our phones, glasses, and other wearable devices, these systems will monitor our emotional responses throughout the day, building predictive models of our behavior. Without strict regulation, which looks increasingly unlikely, those predictive models could be used to target each of us with individually optimized influence that maximizes persuasion.
This is known as the AI manipulation problem, and it suggests that emotional intelligence may not give humanity an advantage at all. In fact, it could become a significant weakness, creating an asymmetric dynamic in which AI systems can read us with superhuman accuracy while we cannot read AI at all. When you talk with photorealistic AI agents (and you will), you will see a smiling facade designed to look warm, empathetic, and trustworthy. It will look and feel human, but it is an illusion, and one that can easily be used to sway your views. After all, our emotional responses to faces are visceral reflexes shaped by millions of years of evolution on a planet where every interactive human face we encountered was actually a human. Soon that will no longer be true.
We are quickly moving toward a world where many of the faces we encounter will be those of AI agents hiding behind digital facades. In fact, these “virtual spokespeople” could easily be given an appearance tailored to each of us based on our past responses: whatever best gets us to let our guard down. And yet many insist that artificial intelligence is just another technology cycle.
This is wishful thinking. The massive investment in AI is not driven by hype; it is driven by the expectation that AI will permeate every aspect of everyday life, embodied as the intelligent actors we engage with daily. These systems will help us, teach us, and influence us. They will change our lives, and it will happen faster than most people think.
To be clear, we are not witnessing an AI bubble filling with empty gas. We are watching a new planet form: a molten world quickly taking shape that will solidify into a new society built on artificial intelligence. Denial will not stop it. It will only leave us less prepared for the risks.
Louis Rosenberg is an early pioneer of augmented reality and a longtime artificial intelligence researcher.
