
Does Conscious AI Deserve Rights?

13 July, 2020

Alan Turing, the father of modern computing, understood that machines might one day become thinking machines rather than mere calculating machines. He devised a simple test, the imitation game, to determine whether a machine can exhibit intelligent behaviour indistinguishable from that of a human: if a human evaluator cannot tell the machine from a human in natural-language conversation, the machine is said to have passed. Thus far no machine has passed the test. But one day, one very probably will. What then?
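
To make the setup concrete, here is a minimal sketch in Python of the test's structure. The `machine_reply`, `human_reply`, and `evaluator_guess` functions are hypothetical stand-ins for a chatbot, a human participant, and a human judge; they are illustrative only and not part of Turing's formulation.

```python
import random

# Hypothetical stand-ins: in a real test these would be a chatbot,
# a human participant, and a human judge conversing over text.
def machine_reply(prompt: str) -> str:
    return "I enjoy long walks and a good book."

def human_reply(prompt: str) -> str:
    return "I enjoy long walks and a good book."

def evaluator_guess(transcript_a, transcript_b) -> str:
    # The judge must decide which transcript came from the machine.
    # A stand-in judge guesses at random.
    return random.choice(["A", "B"])

def run_trial(questions) -> bool:
    """One imitation-game round: True if the machine fools the judge."""
    # Randomly assign the machine to slot A or B so the judge cannot cheat.
    machine_is_a = random.random() < 0.5
    reply_a = machine_reply if machine_is_a else human_reply
    reply_b = human_reply if machine_is_a else machine_reply
    transcript_a = [(q, reply_a(q)) for q in questions]
    transcript_b = [(q, reply_b(q)) for q in questions]
    guess = evaluator_guess(transcript_a, transcript_b)
    machine_slot = "A" if machine_is_a else "B"
    return guess != machine_slot  # fooled: judge picked the wrong slot

if __name__ == "__main__":
    questions = ["What did you do today?", "Describe the smell of rain."]
    trials = 1000
    fooled = sum(run_trial(questions) for _ in range(trials))
    # A machine "passes" when a careful human judge does no better than
    # chance, i.e. is fooled in roughly half of all trials.
    print(f"Judge fooled in {fooled}/{trials} trials")
```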

Does AI—and, more specifically, conscious AI—deserve moral rights?

This is profoundly disturbing because it goes against the grain to think that a machine made of metal and silicon chips could feel pain, but I don’t see why they would not. And so, this moral consideration of how to treat artificially intelligent robots will arise in the future and it’s a problem which philosophers and moral philosophers are already talking about.

Richard Dawkins

In this video exploration, evolutionary biologist Richard Dawkins, ethics and technology professor Joanna Bryson, philosopher and cognitive scientist Susan Schneider, physicist Max Tegmark, philosopher Peter Singer, and bioethicist Glenn Cohen all weigh in on the question of AI rights.

For those working in this field, what ethical considerations come into play? What philosophies and moral frameworks should we draw from? Given how rapidly technology advances, and given the tragedy of slavery throughout human history, philosophers and technologists must answer these questions ahead of the technology itself, or humanity risks creating a slave class of conscious beings.

Joanna Bryson suggests that one potential safeguard is regulation: once we define the contexts in which an AI would require rights, the simplest solution may be not to build such a system at all.

Big Think, 8 July 2020. Watch the video here.

Photo by Michael Dziedzic on Unsplash