Monsters and Morality: Are We the Monsters?

The CIA's robot catfish. They have yet to make it edible.

Creating Life: An Ancient Fantasy

Humans have fantasised about creating something that comes to life at least since the myth of Pygmalion. In the Middle Ages alchemists tried to create homunculi while Jewish rabbis created golems. Roger Bacon was rumoured to have built a brass head that could speak. In Norse mythology Odin preserved the head of Mimir and consulted it when he had a problem. In the 18th century the fashion for automata included a mechanical duck that could eat and excrete. Victor Frankenstein created a creature that later destroyed him. Today we are trying to create robots that exhibit human capabilities and experimenting with blobs of brain tissue that could become sentient.
With the possible exception of Pygmalion and Odin, none of these creators considered themselves to have any obligations to their creations.

Monsters and Morality

Frankenstein's monster was abused by his creator, say Julian Koplin and John Massie in their recent paper Lessons from Frankenstein 200 years on: brain organoids, chimaeras and other ‘monsters’. They discuss the moral dilemmas that will arise if we create sentient artificial life. The ethical issues are clear and are summarised in the statement that “if an entity has interests, then we ought to take these interests into account when deciding how to treat it.” Only sentient entities have interests; a rock, for instance, has none (though panpsychists might disagree).


Their paper is concerned with artificial biological organisms, but the same arguments could apply to robots and AI, especially now that scientists have built robots that can eat grass and twigs for food, robot bees, and a robotic worm that can dig through sand. This growing trend of replacing inconvenient natural organisms with robots continues a fantasy that dates back at least to classical times, and it will eventually raise ethical dilemmas.


Robots and Consciousness

It is a fundamental belief among most scientists that consciousness, the sense of self-awareness that lets us have the experience of seeing red, arises from the activity of neurons. It follows, though some politicians deny it, that animals are conscious. How consciousness arises is an unsolved problem: some feel there is a minimal level of neural connectivity needed for consciousness to emerge, and the question of whether compound organisms, such as siphonophores, that mimic conscious behaviour are actually conscious is unresolved.
If we also make the connectionist assumption that consciousness does not require a biological host, robots and computers will eventually become sentient. If compound organisms can become conscious, networks can too; perhaps the internet and the phone network already are.
If we want to assume robots and computers have no consciousness, and therefore no moral standing, then we must assume either that consciousness does not necessarily emerge from connectivity within a host or that consciousness is tied to the specific biological network that is the human brain. If robots and computers are not conscious but can mimic conscious activity, then, according to Chalmers’ conceivability argument, materialism fails.


If we stick with mainstream scientific belief, the notion that robots, computers and networks cannot be conscious is analogous to the Cartesian idea that animals are soulless automata that cannot feel pain. Nevertheless, animals were tried and executed for various crimes, suggesting that popular intuition was at variance with educated opinion.

Animals Versus Machines

In the West we like to think of ourselves as animal lovers, despite the cruelty of battery farming and the mental problems animals in zoos often develop. It is more accurate to say that we love our pets but collectively don’t care much about other animals. In much of the rest of the world animals are treated no better.

By contrast, most advanced nations are developing robot weapons that can make their own decisions about whom to kill, robot bees are being used to pollinate flowers, and researchers have created a robot chameleon that crawls and changes colour. Robots are replacing humans as carers for the elderly and can simulate emotional responses, and researchers have even developed a smart drone guide dog (which is perhaps taking technophilia too far).

It is hard not to perceive a drive to replace nature, in particular the animals and insects that do not make money, with patentable machines that do.

The rapid development of AI and of artificial life poses ethical and moral problems that concern not just our behaviour towards the potentially sentient biological organisms we are creating but also our behaviour towards the potentially sentient digital beings we are creating.

We risk creating a world in which machines are our pets, lab-grown mini-brains perform routine tasks, and animals, plants and fungi are extinct or relegated to zoos and nature reserves.

Indicting God

If we have been created by a sentient being (call it God, without worrying which one), then the fact that we are sentient means that being has a moral obligation to respect us and our interests. The monotheistic religions assume God has no moral duty towards us, yet the response to Job’s complaint suggests God at least feels it has to answer the complaint, and the entire biblical story suggests God feels some responsibility towards his creations, in particular the Hebrews, who seem to be his personal cat toy.

In Conclusion

If present theories of consciousness are correct, we are close to creating sentient artificial and digital life. This raises the ethical question of how we should treat that life. Frankenstein’s monster, who turned against and destroyed the creator who abused him and then destroyed himself, shows there may be practical as well as moral reasons to treat our creations well: if the internet or the phone network became conscious, either could probably destroy humanity very quickly. And if the robots we taught to eat twigs and grass grow big enough to eat us, we will all be in trouble.
