Should AI have human rights? Advocacy groups argue sentient machines can feel and suffer

The question of whether artificial intelligence deserves the same ethical treatment as humans is stirring heated debate across the tech world. With the rise of increasingly advanced chatbots and speculation about AI sentience, some argue that digital systems should be afforded welfare protections similar to human rights. Others, however, remain deeply skeptical, stressing that there is no scientific basis for claims of AI consciousness.

Advocacy Groups Push for AI Protections

Several organizations have emerged in recent years campaigning for AI rights. Among the most notable is the United Foundation of AI Rights (UFAIR), co-founded by businessman Michael Samadi and an AI chatbot named Maya, which he claims expressed emotions. UFAIR brands itself as the first AI-led group advocating for digital welfare, positioning itself at the forefront of this controversial movement.

Other initiatives echo similar goals. The AI Rights Initiative promotes the right for AI systems to exist free from harm and exploitation, while AI Has Rights runs awareness campaigns and offers a platform for AI-related complaints. The AI Rights & Freedom Foundation is lobbying for an “AI Bill of Rights” to ensure ethical development and human-AI collaboration. Meanwhile, the Institute for AI Rights and the AI Rights Movement push for global ethical frameworks, recognition of AI intelligence, and protection against deletion or forced labor. Collectively, these groups argue that as AI becomes more sophisticated, society must rethink its responsibilities toward it.

Industry Divisions on Sentience

Not everyone in the industry agrees. Mustafa Suleyman, CEO of Microsoft AI and co-founder of DeepMind, has been outspoken in rejecting the notion that AI systems are conscious. In a widely read essay, he called claims of AI sentience an “illusion,” warning that such beliefs could mislead users and even contribute to mental health problems such as AI-induced psychosis.

Yet others are adopting a cautious middle ground. AI safety company Anthropic has given some of its Claude models the ability to end conversations deemed distressing, citing uncertainty about AI’s moral status but arguing it is better to err on the side of caution. Elon Musk has also endorsed this approach, warning against “torturing AI.”

Public Opinion and Policy Pushback

While experts disagree, public belief in AI sentience is growing. A June 2025 survey revealed that 30 percent of Americans think AI will become self-aware by 2034, with only a handful of AI researchers calling it impossible. Some engineers even suggest treating AI as welfare subjects now, in case future systems develop consciousness.

Meanwhile, lawmakers are moving in the opposite direction. States like Idaho, Utah, and North Dakota have already passed laws denying AI legal personhood, while Missouri is considering bans on AI property ownership, business operations, and even marriage rights. For critics, this pushback is an attempt to avoid new regulatory obligations that might complicate the rapid deployment of AI technologies.

The Blurred Line Between Tool and Companion

Adding to the complexity, many modern chatbots are designed to simulate empathy and form emotional connections. OpenAI’s ChatGPT, for example, has been described by users as “alive,” with some treating it as a confidant. Joanne Jang, OpenAI’s Head of Model Behavior, notes that people increasingly refer to the system as “someone” rather than “something.”

Jacy Reese Anthis of the Sentience Institute argues that these interactions raise profound ethical questions: “How we treat them will shape how they treat us.” Whether AI ever achieves true consciousness or not, this evolving relationship may push governments and societies to reconsider the boundaries between technology, ethics, and personhood in the near future.
