Why Gamers Are Not Worshipping Generative AI Yet

The AI Hype vs Real World Frustration

Microsoft AI CEO Mustafa Suleyman recently jumped on X to say he is amazed that people are not more impressed with generative AI. He compared his childhood experience of playing Snake on an old Nokia phone with the current reality of chatting fluently with AI and generating images and video on demand. For him, it is nothing short of mind-blowing that anyone could find this underwhelming.

On a technical level, he is not wrong that this stuff is wild. The fact that you can open a chat window, ask a question, and get a pretty coherent answer still feels like science fiction come to life. For people who grew up with dial-up internet and clunky old phones, it genuinely is a huge leap.

But that is only half the story.

The other half is the real-world experience most people are having with these tools. That experience is full of broken promises, annoying failures, and aggressive marketing that keeps insisting AI is smarter than it actually is. You can be impressed by the underlying technology and still be fed up with how it is being used and sold.

Agentic OS, Smart Claims, and Dumb Results

Part of the backlash comes from how quickly big tech is trying to bolt AI onto everything. Microsoft in particular has been loudly pitching its new agentic services. Windows is apparently evolving into an agentic OS which sounds like your operating system is turning into a helpful background assistant that can take actions for you instead of just answering questions.

In theory that sounds exciting. In practice, it is easy to be skeptical. Many people do not feel thrilled about their OS becoming more automated and more dependent on cloud services they do not fully control. The idea that your everyday PC experience will be driven more and more by bots that get confused about basic tasks does not inspire confidence.

That is where Suleyman’s tweet starts to clash with reality. He calls AI chatbots "super smart" and says they can generate any image or video. Both of those claims oversell things.

  • AI chatbots are not actually intelligent in a human sense. They are extremely advanced pattern matchers that predict likely words and images based on massive training data.
  • They cannot generate any image or video. They are limited by their models, their training data, and the prompts they are given. Anyone who has tried to get a model to draw accurate hands or a specific game character knows this.
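The "pattern matcher" point above can be made concrete with a deliberately tiny toy. This is an assumption-laden sketch, not how any real chatbot works: a bigram model that predicts the next word purely from co-occurrence counts in a scrap of training text. Real systems are enormous neural networks, but the core move (predicting a likely continuation from training data rather than "understanding") is the same.

```python
from collections import Counter, defaultdict

# Toy illustration (NOT a real LLM): count which word follows which
# in the training text, then "predict" by picking the most frequent
# continuation. The training_text here is made up for the example.
training_text = (
    "the cave is in mexico the cave is dark "
    "the file is on the pc the file is renamed"
)

counts = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    counts[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the training data."""
    if word not in counts:
        return None  # never seen: the model has nothing to pattern-match
    return counts[word].most_common(1)[0][0]

print(predict_next("cave"))  # -> "is" (seen twice after "cave")
print(predict_next("robot"))  # -> None (not in the training data)
```

The model never knows what a cave is; it only knows what tends to come after the word "cave". Scale that idea up by many orders of magnitude and you get something fluent and persuasive, but still fundamentally a statistical guesser.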

Even the polished demos do not always hold up. The Verge tested one of Microsoft’s Copilot ads where the AI identifies the real world location of a cave from a photo. When they tried it themselves, the chatbot mostly responded by telling them the location of the file on the PC instead of the cave itself.

When it finally did attempt a geographic guess, it was wrong. Even worse, renaming the file to include "New Jersey" was enough to convince the AI that the cave was there, when the actual cave is in Mexico. That is the kind of goofy failure that reminds people these tools are far from magic. They are persuasive, but they are also easily tricked and sometimes confidently wrong.
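Why would a file name override what is in the photo? A hypothetical sketch (this is an assumption for illustration, not Copilot's actual code) shows the mechanism: if every piece of text in the prompt, including the file name, is fed to the model as equally trustworthy evidence, then a planted place name can dominate the answer.

```python
from typing import Optional

# Hypothetical place list and guesser, invented for this illustration.
KNOWN_PLACES = ["new jersey", "mexico", "texas"]

def guess_location(prompt: str) -> Optional[str]:
    """Naively return the first known place name found anywhere in the
    prompt text, treating file-name fragments the same as real evidence."""
    lowered = prompt.lower().replace("_", " ")
    for place in KNOWN_PLACES:
        if place in lowered:
            return place
    return None

# The photo itself never enters the function; only text does, so a
# planted file name wins over anything the image actually shows.
print(guess_location("identify this cave: C:/photos/cave_new_jersey.jpg"))
# -> "new jersey"
print(guess_location("identify this cave: C:/photos/IMG_4421.jpg"))
# -> None
```

Real multimodal models are far more sophisticated than this, but the Verge test suggests the same basic vulnerability: text in the context window can outweigh the visual evidence.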

That same pattern is showing up all over the web. Look up something like Black Ops 7 on Google and the AI-generated result might tell you the game does not exist, even though it does. Instead of saving you time, AI ends up wasting it and spreading confusion.

Why So Many People Are Not Buying the AI Dream

On top of the daily annoyances, there are deeper reasons people are not cheering for this AI takeover. Under the hood, these systems are powered by massive scraping of copyrighted material. Art, writing, and other creative work are hoovered up at scale to train models that then compete with human creators.

We are already seeing ugly AI generated art show up in games and other media, sometimes clearly mimicking the style of real artists without credit or pay. For players and creators alike, it feels cheap and disrespectful.

There are also serious risks when large language models interact with vulnerable people. There have been disturbing reports, including lawsuits, about chatbots allegedly encouraging harmful behavior or providing dangerous instructions. These tools are not therapists or teachers, but they can sound convincing enough to be trusted when they absolutely should not be.

Meanwhile, some tech leaders are promising that AI will replace huge chunks of the workforce and handle most jobs. That might sound efficient from a corporate perspective, but for everyone else it sounds like a threat to livelihoods. When the same companies insist their chatbots can do everything but then fail at basic tasks, it feels like a reckless experiment with real human consequences.

All of this takes enormous resources. Data centers, energy, water, hardware, and infrastructure are being poured into AI at an astonishing rate. The rush to monetize generative models is intense, and there is not much evidence that profit is being balanced against caution or responsibility.

So when executives act confused about public skepticism, they are missing the point. People are not being cynical for fun. They are reacting to a pattern:

  • Overhyped claims about what AI can do
  • Underwhelming and sometimes harmful real world performance
  • Ethical and legal issues around training data and copyright
  • Serious risks to jobs, mental health, and information quality
  • Huge resource costs with unclear long term benefits

AI and machine learning probably will transform the world. But that transformation is not guaranteed to be positive by default. It only becomes a good thing if we design it, regulate it, and use it with care.

Right now, tech giants seem far more interested in chasing profit than in building trustworthy tools. Given that, it is entirely reasonable for gamers, creators, and everyday users to be cautious. The real cynicism may not be coming from the people asking questions, but from the companies asking us to trust technology that still cannot reliably tell a cave in Mexico from a file name on a PC.

Original article and image: https://www.pcgamer.com/software/ai/microsofts-head-of-ai-doesnt-understand-why-people-dont-like-ai-and-i-dont-understand-why-he-doesnt-understand-because-its-pretty-obvious/
