As well as bringing spring earlier each year, global warming seems to bring the Silly Season forward too. I’ve recently read reports from two government-sponsored research bodies that were funnier than anything currently on TV, which is admittedly not hard to achieve. One was a strategic defence document that fears a Marxist revival among the oppressed middle classes, while the other predicted that “calls may be made for human rights to be extended to robots”.

This matter goes way beyond computer intelligence and into the question of animation itself. We don’t grant rights to inanimate objects such as spades, chairs or even cars, and if we did their utility would be severely reduced (“nah, I don’t feel like starting this morning”). That shifts the question to what makes a thing animate, and my definition would be that it must have defensible interests, the foremost of which is always to survive long enough to reproduce itself.
An animate thing can tell what’s good and bad for itself, and take appropriate seeking or avoiding actions. Even single-celled organisms may sense temperature, light or salinity and move accordingly. I’ve seen experimental robots that could sense their battery level and seek out the nearest power outlet, but it’s an area roboticists largely ignore as somehow frivolous. Instead, whenever people start to talk about robot ethics, they invariably return to Isaac Asimov’s three laws of robotics: a robot may not injure a human being or, through inaction, allow a human being to come to harm; a robot must obey orders given to it by human beings except where such orders would conflict with the First Law; and a robot must protect its own existence so long as such protection doesn’t conflict with the First or Second Law.
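To make that “defensible interests” idea a little more concrete, here’s a toy sketch in Python of the sort of survival loop a battery-seeking robot might run. Every name in it – the simulated robot, its sensor and motor methods, the threshold – is invented for illustration, not a description of any real machine; the point is simply that “good” and “bad” are defined by the machine’s own need to keep operating, not by a rule handed down from outside.

```python
# Toy sketch of a "defensible interests" loop: the robot's only notion of good
# and bad is its own charge level. Everything here is invented for illustration.

import random

LOW_CHARGE = 0.2   # below this, self-preservation outranks the assigned task

class SimRobot:
    """A pretend robot: the battery drains as it works, refills when it recharges."""
    def __init__(self):
        self.charge = 1.0

    def read_battery(self):
        return self.charge

    def do_task(self):
        self.charge -= random.uniform(0.05, 0.15)   # working costs energy
        print(f"working... charge now {self.charge:.2f}")

    def seek_outlet_and_recharge(self):
        print("charge low: abandoning task, seeking power outlet")
        self.charge = 1.0

def survival_loop(robot, steps=20):
    for _ in range(steps):
        if robot.read_battery() < LOW_CHARGE:   # "bad": a threat to continued operation
            robot.seek_outlet_and_recharge()    # avoidance/seeking behaviour
        else:
            robot.do_task()                     # otherwise, get on with the job

survival_loop(SimRobot())
```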
Cleverly designed as these laws are, they’re only rules: to interpret them in real life would require a moral intelligence behind them, which begs precisely the question they’re meant to solve. The idea that any set of succinct rules could bestow morality on a robot is a non-starter: it’s part of that hyper-rationalism that infects the whole of computer science, and which is responsible for hubristic predictions about machine intelligence in the first place. Human morality at its topmost level does work with ethical rules – we call them the Law – but this ability to make and adhere to laws is constructed on top of a hierarchy of value-making systems bequeathed to us by evolution, and which go right down to that single cell with its primitive goodness and badness.
My best guess is that at least five distinguishable levels of moral organisation have emerged during the evolution from single cells to higher primates (a rough sketch of the idea follows the list):
1 Chemically mediated “seek or avoid” responses, the basic emotions of attraction and fear.
2 A simple nervous system that maps the creature’s own body parts, comparing sensory inputs against the resulting model: the creature is “aware” of its own emotional state in what we call “feeling”.
3 A brain with memory so that events can be remembered for future reference: an event consists of some sensory inputs plus the feeling that accompanied them and so is value-laden, a good or bad memory.
4 A brain that models not only the creature’s own body, but also the minds of other creatures: it can predict the behaviour of others, and empathy becomes possible.
5 Language and a reasoning brain that can interpret abstract rules, using the database of value-laden memories and knowledge of others’ minds available to it. Only Homo sapiens has reached this level.
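And here, very loosely, is that layering rendered as a toy Python sketch. Every class, function and number in it is made up for illustration – it isn’t a claim about how brains or robots actually work – but it shows the structural point: the level-5 rule interpreter at the top generates no values of its own, it can only consult the value-laden machinery beneath it.

```python
# Toy illustration of the layering argument: abstract rules at the top are
# interpreted against value-laden machinery below. All of this is invented.

def valence(stimulus):
    """Level 1: chemically mediated seek/avoid, reduced to a signed number."""
    return {"food": +1.0, "heat": -0.8, "injury": -1.0}.get(stimulus, 0.0)

class Memory:
    """Level 3: events are stored together with the feeling that accompanied them."""
    def __init__(self):
        self.events = []          # list of (stimulus, feeling) pairs

    def record(self, stimulus):
        self.events.append((stimulus, valence(stimulus)))

    def recalled_value(self, stimulus):
        felt = [f for s, f in self.events if s == stimulus]
        return sum(felt) / len(felt) if felt else valence(stimulus)

def predicted_feeling_of_other(memory, stimulus):
    """Level 4: empathy modelled, crudely, as 'they probably feel what I would'."""
    return memory.recalled_value(stimulus)

def permitted(action, target_stimulus, memory):
    """Level 5: an abstract rule ('do no harm') interpreted via the layers below."""
    harm_to_other = predicted_feeling_of_other(memory, target_stimulus)
    return not (action == "inflict" and harm_to_other < 0)

m = Memory()
m.record("injury")
print(permitted("inflict", "injury", m))   # False: the rule only means something
print(permitted("offer", "food", m))       # True   because harm is felt lower down
```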