Abandon All Fear

What nobody else seems to be saying…

Archive for the ‘Terminators’ Category

Terminators 3: Ethics of the Machines

Posted by Lex Fear on August 25, 2007

Read: Terminators, Terminators 2: War of the Machines

Recently a new robot called the iCAT has been developed, programmed with a “set of logical rules for emotions”. The idea behind this is to aid interactions and help reduce computational workload when faced with decision making.
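The actual iCAT rule set hasn’t been published alongside that quote, but the basic idea — simple condition-to-emotion rules that pre-filter which actions are even considered — can be sketched roughly like this (all rule and action names below are invented for illustration):

```python
# Hypothetical sketch of "logical rules for emotions" cutting decision workload.
# The real iCAT rules are not given here; conditions and actions are made up.

RULES = [
    # (condition on percepts, resulting emotional state)
    (lambda p: p["battery"] < 0.2, "anxious"),
    (lambda p: p["user_smiling"], "happy"),
]

# Each emotional state pre-filters the actions worth evaluating,
# so the planner only has to score a smaller candidate set.
ACTIONS_BY_EMOTION = {
    "anxious": ["seek_charger", "alert_user"],
    "happy":   ["greet", "offer_help", "play_game"],
    "neutral": ["greet", "offer_help", "play_game", "seek_charger", "alert_user"],
}

def emotional_state(percepts):
    for condition, emotion in RULES:
        if condition(percepts):
            return emotion  # first matching rule wins
    return "neutral"

def candidate_actions(percepts):
    return ACTIONS_BY_EMOTION[emotional_state(percepts)]

print(candidate_actions({"battery": 0.1, "user_smiling": False}))
# A low battery triggers "anxious", narrowing five possible actions to two.
```

The point of the sketch is only that a handful of cheap rules can collapse the decision space before any expensive planning runs — which is the computational saving the researchers seem to be claiming.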

I studied both Artificial Intelligence and Computer Ethics whilst I was at university. That doesn’t make me an expert on either subject, and AI has advanced in leaps and bounds since I studied it, but it does mean I know how AI works. AI has accomplished, and will continue to accomplish, many great things and make our lives easier (though its global economic and ethical impact has not been discussed enough).

Read the rest of this entry »


Posted in Morals & Ethics, Religion & Science, Technology, Terminators | Leave a Comment »

[Terminators 2] War of the Machines

Posted by Lex Fear on June 21, 2007

This Post Is Rated: N for Nothing to hide, nothing to fear. Contains references to giant robots and how wrong they’ve all got it.

In March I blogged about real Terminators in the not too distant future, inspired by Prof. Prabhu Guptara’s blog post on the subject. I found an interesting post in Paleo-Future‘s archives recently: Gigantic Robots to Fight our Battles. Of course this was originally written in 1934, but doesn’t it sound like a feasible future solution to war and terrorism? Imagine: everyone could live in peace whilst nations no longer kill each other but decide who wins with robots.

Of course… this will never happen, and the reason why is simple: death. Death has the final say, wins all arguments and wins all wars.

Suppose a future nation, for trade benefits, decides to conquer another smaller nation. Both nations have powerful robots/weapons and agree to go to war in some distant desert. Eventually one nation’s robots gain the upper hand and win the war. Neither nation has experienced bloodshed, sacrificed their armies or lost any territory. So what incentive is there for the ‘losing’ nation to give up power and be occupied? No leader or nation is going to willingly allow themselves to be taken over simply because they lost a few robots in some remote location that did not affect them. The only logical step for the ‘winner’ is then to use their remaining robots and weapons against the nation over which they have won the victory. Even if they succeed in conquering without further violence, it has been through the threat of bloodshed, to induce compliance, not the absence of it.

But there is a greater threat to global freedom and democracy afoot. Without knowing it, the world is in a new subterranean arms race. It’s the race for robots, in which the Asian continent seems to be leading. It’s the robot arms race, specifically for robots capable of successfully identifying and exterminating (as well as rescuing and serving) human targets. Whoever wins this arms race will probably end up deciding the fate of the planet. The ability to deploy robots into any hostile civilian or combat situation will give the nation with the greatest AI the keys to the world kingdom. They will either end up dictating the course of other nations for their own benefit, or we will enter another cold war where fear of our enemies’ robots both keeps our fingers on the trigger and prevents us from deploying our own AI at the same time.

The only viable way that an AI cold war could be prevented is in much the same way as the last one was ended: by diplomacy. It would require one or more nations to pledge against using anti-personnel robots on the battlefield (much the same way chemical weapons are forbidden now).

Finally, this post is not meant to represent a pro-war stance. I am and will remain anti-war. However, as a realist, it seems that as long as there are presidents, dictators, borders and non-renewable resources, there will be wars.

Read: Terminators

Posted in Absolute Power, Morals & Ethics, Technology, Terminators | Leave a Comment »

[Skynet Is Online] Terminators

Posted by Lex Fear on March 12, 2007

Prof. Prabhu Guptara blogs on the lack of discussion about the dangers of creating murderous machines.

“In any case, if my dear pet Jupiter is hacked into by my worst enemy and kills my best friend visiting me in my house, will it be I who am up for murder for not having my robot in my control…?” Renaissance: Insights for Action in Today’s World

I agree with Prof. Guptara that there desperately needs to be a debate on ethics, particularly with regard to anti-personnel robots being created to destroy and maim human life. Prof. Guptara quite rightly questions whether a killing machine would understand and “respond to pleas for mercy?”

Some other questions that could be asked:

  • How would a battle droid distinguish between innocents and combatants?
  • How would a battle droid recognise surrender?
  • What is the potential devastation that could be caused if an evil genius managed to hack a small army of battle droids and deploy them to a major civilian area, or hold a government hostage?
  • How would a battle droid handle a child soldier? How would it distinguish a toy gun in a civilian situation?
  • What is the risk to human life if a bug got in the system?

Twenty years from now, if RoboJeeves™ develops a fault, your food is undercooked and your shirts are all crinkled, but nobody dies. If RoboMercenary develops a fault, suddenly Saturday afternoon at Bluewater turns into sniper alley!


Posted in Morals & Ethics, Technology, Terminators | Leave a Comment »