Abandon All Fear

What nobody else seems to be saying…

Terminators 3: Ethics of the Machines

Posted by Lex Fear on August 25, 2007

Read: Terminators, Terminators 2: War of the Machines

Recently a new robot called the iCAT has been developed, programmed with a “set of logical rules for emotions”. The idea behind this is to aid interaction and help reduce computational workload when faced with decision making.

I studied both Artificial Intelligence and Computer Ethics whilst I was at university. That doesn’t make me an expert on either subject, and AI has advanced in leaps and bounds since I studied it, but it does mean I know how AI works. AI has accomplished, and will continue to accomplish, many great things and make our lives easier (though the global economic and ethical impact has not been discussed enough).

During our Computers and Ethics 4001 class debates, most of my class considered that the emergence of the digital and information age, particularly the internet and virtual reality, required a whole new set of ethics (that’s certainly what our lecturer believed). I tended to lean towards what Moor terms “routine ethics”: I believe that moral and ethical values from the real world can be applied to computing (though not always in the same construct).

One of the other things occasionally discussed was Asimov’s Three Laws of Robotics. I found this laughable, not because Asimov was a fiction writer, but because, firstly, the rules are too ambiguous to be applied and, secondly, either we would have to design robots that are independently capable of killing humans (and of recognising their actions as killing) or we would have to design robots capable of making mistakes. Even modifications of the three laws dissolve into multiple logic arguments.
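To see how quickly this happens, here is a rough sketch of the First Law as executable logic. Everything in it is hypothetical and invented for illustration (the Action type, the predicates, the trolley-style scenario); it is not anything Asimov specified, just a demonstration of where the ambiguity bites:

```python
# A minimal sketch of the First Law as executable logic. Every name here
# (Action, harms_human, inaction_allows_harm) is hypothetical -- the point
# is that each predicate hides a judgement the Law never defines.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool           # Does this action injure a human? Who decides?
    inaction_allows_harm: bool  # Would *not* acting allow harm? Equally fuzzy.

def first_law_permits(action: Action) -> bool:
    """'A robot may not injure a human being or, through inaction,
    allow a human being to come to harm.'"""
    if action.harms_human:
        return False  # Acting is forbidden...
    if action.inaction_allows_harm and action.name == "do_nothing":
        return False  # ...but so is standing still.
    return True

# A trolley-style case: every available action fails the Law.
options = [
    Action("push_bystander_clear", harms_human=True,  inaction_allows_harm=False),
    Action("do_nothing",           harms_human=False, inaction_allows_harm=True),
]

permitted = [a.name for a in options if first_law_permits(a)]
print(permitted)  # [] -- the 'law' yields no permitted action at all
```

Even before weighing the Second and Third Laws against it, the predicates themselves (“injure”, “allow… to come to harm”) have to be decided by exactly the kind of fallible inference the Laws were supposed to constrain.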

Simply put, AI works on the basis of learning: specifically pattern recognition, fuzzy logic and/or evolutionary computation. The computer makes its decisions based on previous experience and weight of probability, though there is a lot of terminology around the many methods and ways this is applied.
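To make “previous experience and weight of probability” concrete, here is a toy version of the idea. It is a deliberately simplified sketch with made-up action names and history, not any particular algorithm from the literature: the machine tallies past outcomes and always picks whichever action has the best estimated success rate.

```python
# A toy 'learning from experience' loop: hypothetical data, deliberately
# simplified. The machine keeps success/failure tallies per action and
# always chooses the action with the highest estimated success rate.

from collections import defaultdict

experience = defaultdict(lambda: {"success": 0, "failure": 0})

def record(action: str, succeeded: bool) -> None:
    experience[action]["success" if succeeded else "failure"] += 1

def estimated_success(action: str) -> float:
    stats = experience[action]
    trials = stats["success"] + stats["failure"]
    return stats["success"] / trials if trials else 0.5  # unknown -> 50/50

def choose(actions: list[str]) -> str:
    # Pure weight-of-probability: no whim, no second thoughts.
    return max(actions, key=estimated_success)

# Feed it some (made-up) history and it becomes perfectly predictable.
for outcome in [True, True, False]:
    record("lift_with_clamp", outcome)
for outcome in [True, False, False]:
    record("lift_with_suction", outcome)

print(choose(["lift_with_clamp", "lift_with_suction"]))  # lift_with_clamp, every time
```

Given the same history, it makes the same choice, every time. That predictability is the point of the next paragraph.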

If robots were to emulate human beings, we would have to build something like a virus that adds a degree of randomness to whichever method was applied. Fruit machines, for example, seem to produce random results, but they are not really random: they are simply a huge, long list of numbers generated by a complex calculation, and if you sat there long enough (maybe a few days) you could work out the sequence. In the same way, a random sequence could be programmed into the AI’s inference engine to override the logical path every now and then and choose the wrong one. For example, a computer working on a repetitive task carrying equipment would occasionally drop something; it could even ‘decide’ to slow down, have a break or arse around, just like humans do.
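Bolting that onto the earlier sketch takes a few lines (again hypothetical code, with invented action names). Note that a seeded pseudo-random generator is exactly the “not really random” fruit-machine sequence described above: replay the seed and every lapse repeats on schedule.

```python
# Extending the decision sketch with deliberate fallibility: with small
# probability the 'inference engine' overrides the logical choice and
# takes a worse one. random here is pseudo-random -- a deterministic
# sequence, just like the fruit machine.

import random

rng = random.Random(42)  # seeded: replay the seed and the 'whims' repeat exactly

def fallible_choose(ranked_actions: list[str], error_rate: float = 0.05) -> str:
    """ranked_actions is best-first. Usually take the best; occasionally don't."""
    if len(ranked_actions) > 1 and rng.random() < error_rate:
        return rng.choice(ranked_actions[1:])  # wilfully pick a wrong path
    return ranked_actions[0]

tally = {"carry_normally": 0, "slow_down": 0, "drop_the_load": 0}
for _ in range(1000):
    tally[fallible_choose(["carry_normally", "slow_down", "drop_the_load"])] += 1
print(tally)  # mostly 'carry_normally', with the odd lapse
```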

For me, the single thing that separates us from robots is not human emotion, as many Hollywood films depict. It is the ability to fail, and sometimes fail catastrophically. Furthermore, it is the ability of human beings to be so wrong and yet convinced they are right, to make simple mistakes because they missed one element in their plan.

In fact, humans spend a large amount of their time trying to keep emotion out of their decision making, because emotions tend to lead us to make bad decisions. Has anyone really thought through how giving a robot emotions will make it process decisions better? Imagine a search and rescue robot that avoids areas it judges too dangerous out of ‘fear’ of sustaining damage, or a robot so plagued by ‘guilt’ over a past error that it won’t attempt the task again.
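One way to see the problem is a hypothetical scoring rule, with entirely made-up numbers: score each route by the survivors it might reach, then subtract a ‘fear’ penalty for expected damage. Past a certain fear weight, the robot stops going where it is most needed.

```python
# A made-up search-and-rescue scoring rule showing how a 'fear' term
# distorts the decision. All routes and numbers are illustrative.

routes = {
    # route: (survivors likely reachable, expected damage to the robot, 0..1)
    "collapsed_stairwell": (5, 0.9),
    "clear_corridor":      (1, 0.1),
}

def best_route(fear_weight: float) -> str:
    def score(route: str) -> float:
        survivors, damage_risk = routes[route]
        return survivors - fear_weight * damage_risk  # fear taxes the dangerous path
    return max(routes, key=score)

print(best_route(fear_weight=0.0))  # collapsed_stairwell: go where the people are
print(best_route(fear_weight=6.0))  # clear_corridor: 'fear' wins, survivors lose
```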

Eventually, I can envision a world where robots are programmed to avoid bringing harm to humans; they may even find a practical use for emulating our emotions. But humans are human because we are not explicitly programmed with a set of laws. An emotional robot may be built as a play-mate for a child, but will any parent tolerate a robot that becomes selfish (develops a bug) and steals the child’s toys, or even bullies its human play-mate? What if the emotional robot decides it doesn’t want to play today, what of its usefulness and destiny then?

It is our mistakes, our capacity for error (or sin, as a Christian might say), our freedom to wilfully commit evil, our ability to go against our nature and do good, not our emotions… that define us as human.
