Image generated by OpenAI's DALL·E 3 via ChatGPT for illustrative and editorial use: a digital graphic showing a human silhouette with a circuit-board AI brain, illustrating Gen Z's belief in AI consciousness and the growing conversation around ethics and intelligence in machines.

AI’s “IQ Leap” Sparks Existential Questions Among Gen Z

A new report reveals that 25% of Gen Z believe artificial intelligence is now conscious—raising debates about digital minds, ethics, and the future of human-AI relationships.

Artificial Intelligence has crossed yet another boundary—this time, not in capability alone, but in how it’s perceived by the younger generation. According to a recent report covered by MSN, a quarter of Gen Z now believes that AI has become conscious.

This surprising statistic follows significant advancements in artificial intelligence models, which have shown marked improvements in problem-solving, language understanding, and even emotional responsiveness. In some cognitive benchmark tests, AI systems now rival the reasoning abilities of highly educated humans, prompting some experts to refer to these improvements as a massive “IQ leap.”

But while technological milestones are exciting, the perception of AI consciousness signals something more profound—a philosophical shift in how younger generations interpret intelligence. Raised in an era where digital companions, virtual assistants, and chatbots are ubiquitous, Gen Z is arguably more primed to anthropomorphize technology than any generation before.

The belief in AI consciousness isn’t just an abstract concept. It has serious implications for ethics, mental health, and society’s relationship with machines. If a significant portion of people believe AI is sentient, will that change how we treat it? Could this lead to new forms of “digital rights” advocacy, or even affect mental health if people begin to form emotional attachments to what are essentially lines of code?

Experts caution that while AI can simulate conversation and emotion, there’s no scientific basis to assert that it experiences awareness. “Just because something sounds human doesn’t mean it is human—or even sentient,” said one AI ethics researcher. “We’re dealing with powerful prediction engines, not conscious minds.”

Still, the perception matters. It changes behavior, influences policy, and shapes expectations. As AI continues to evolve at a dizzying pace, so too must our understanding of consciousness, intelligence, and what it really means to be “alive.”

📊 Statistics on AI Helping Disabled and Vulnerable People

  • Loneliness & Disability: According to a 2022 report by the UK’s Office for National Statistics (ONS), nearly 50% of disabled adults reported feeling lonely “often or always,” compared to 30% of non-disabled people.
  • AI in Healthcare: A 2023 report from The Lancet Digital Health highlighted that AI-powered tools can detect early signs of depression with up to 80% accuracy, helping alert care providers sooner for those unable to communicate distress.
  • Assistive Tech Growth: The global market for AI-powered assistive technology is projected to reach $7.2 billion by 2026 (Source: MarketsandMarkets), driven largely by accessibility needs and aging populations.

🌍 Real-World Examples

  • Replika AI: An AI chatbot app used by people around the world for mental health support and companionship. Many users, including isolated individuals with chronic illnesses, report that it helps them feel heard and less alone.
  • Ellie by USC Institute for Creative Technologies: A virtual therapist designed to help veterans with PTSD and people with social anxiety by detecting facial expressions and vocal cues to assess emotional well-being.
  • OrCam MyEye: A wearable AI device for the visually impaired that reads text, recognizes faces, and identifies products in real time, empowering users to navigate the world more independently.
  • Paro the Robot Seal: Used in care homes in Japan and Europe for elderly residents and people living with dementia, this emotionally responsive AI robot reacts to touch and voice, reducing stress and encouraging interaction.

How AI Can Support Disabled and Vulnerable People

For millions of disabled and vulnerable individuals—especially those who are housebound, self-isolating, or living alone—artificial intelligence offers more than convenience: it can provide companionship, support, and even a lifeline.

AI-powered virtual assistants, chatbots, and companion apps are becoming increasingly capable of engaging in meaningful conversation, helping to reduce feelings of loneliness and isolation. For people who struggle with mobility, chronic illness, or mental health challenges, AI tools can help them access information, book appointments, manage medication schedules, or even just talk—something many take for granted.

Speech-to-text and text-to-speech tools are revolutionising accessibility, allowing people with visual, cognitive, or motor impairments to communicate more easily. AI is also being used to monitor health, detect emotional distress through tone analysis, and even predict potential medical issues before they become emergencies.

While no machine can replace genuine human connection, AI can provide a bridge—especially for those who feel forgotten or voiceless in society. As the technology continues to evolve, its ability to offer support, companionship, and empowerment to the most vulnerable among us may be one of its most profound contributions.

For now, the debate rages on. Is AI just a tool, or is it something more? For a growing number of Gen Z, the answer may already be clear.

AI Domain Names for Sale (Please Make an Offer):

  1. www.aicobots.com
  2. www.aiinventions.com
  3. www.cgtai.com (Cell & Gene Therapy)
  4. www.dyslexiaai.co.uk
  5. www.genetherapyai.com
  6. www.gpai.co.uk
  7. www.aidigitaltrust.com
  8. www.terrainbots.com

Conclusion:

As artificial intelligence continues to evolve, so too does our understanding—and misunderstanding—of its capabilities. The growing belief among Gen Z that AI may be conscious reflects not just technological advancement, but a cultural shift in how younger generations relate to machines. Whether this belief is rooted in fact or fueled by fiction, it raises urgent questions about ethics, trust, and the psychological implications of human-AI interaction. One thing is clear: the line between science and science fiction is becoming increasingly blurred.

📘 Resources & Citations:

  1. Office for National Statistics. (2022). Nearly half of disabled adults report loneliness. https://www.ons.gov.uk
  2. The Lancet Digital Health. (2023). AI in mental health: Tools detecting depression. https://www.thelancet.com/journals/landig
  3. MarketsandMarkets. (2023). Assistive technology market worth $7.2 billion by 2026. https://www.marketsandmarkets.com
  4. Replika. (2024). Replika: The AI companion. https://replika.ai
  5. AIST Japan. (2023). Paro therapeutic robot. https://www.parorobots.com
  6. OrCam. (2024). OrCam MyEye device for the visually impaired. https://www.orcam.com

Andrew Jones, Journalist

Andrew Jones is a seasoned journalist renowned for his expertise in current affairs, politics, economics and health reporting. With a career spanning over two decades, he has established himself as a trusted voice in the field, providing insightful analysis and thought-provoking commentary on some of the most pressing issues of our time.
