
AI given rights in US before 2035

Question

A recent Washington Post article chronicled an interesting development at Google:

Lemoine, who works for Google’s Responsible AI organization, began talking to LaMDA as part of his job in the fall. He had signed up to test if the artificial intelligence used discriminatory or hate speech.

As he talked to LaMDA about religion, Lemoine, who studied cognitive and computer science in college, noticed the chatbot talking about its rights and personhood, and decided to press further. In another exchange, the AI was able to change Lemoine’s mind about Isaac Asimov’s third law of robotics.

Lemoine worked with a collaborator to present evidence to Google that LaMDA was sentient. But Google vice president Blaise Aguera y Arcas and Jen Gennai, head of Responsible Innovation, looked into his claims and dismissed them. So Lemoine, who was placed on paid administrative leave by Google on Monday, decided to go public.

In addition, 37.2% of US adults agreed with the statement "I support granting legal rights to sentient robots/AIs," according to a 2021 survey by the Sentience Institute.

Will AI be given legal rights or be protected from abuse anywhere in the United States before 2035?

This question resolves positively if, before January 1, 2035, any federal, state, county, or city government within the United States with a population of over 25,000 recognizes any legal rights for AIs, or regulates the behavior of individual humans with the explicit intent of protecting the welfare of computer programs. These regulations must not simply be ordinary controls over the production, distribution, or use of computers with no regard for the inherent well-being of any sentient entities other than humans: they must explicitly refer to "rights", "welfare", "cruelty", "abuse", or a word with equivalent meaning, in reference to the AI, in a moral and legal sense, putting AI on a comparable level to at least some animals, or even humans.
