Skynet and terminators.

This article came across the desk last week: http://www.geek.com/science/ai-researcher-explains-how-to-stop-skynet-from-happening-1591986/  It’s about the drive to put autonomy into more and more computer systems, and the apparent inevitability of that trend.  And of course it’s about the possibility of it all going wrong and Skynet deciding to get rid of us pesky humans.

Speaking of autonomous systems: my brother works as a diesel wrench on heavy earthmovers at Queensland mines, and they run several autonomous dump trucks.  Big dump trucks.  They do the haul runs themselves… and when something goes wrong they tell the world, drive themselves to the workshop… park and wait until they get fixed.  Or until the sun expands and destroys the planet.  Whichever happens first.  Machines do patience well.  Apparently these machines so far do about as well as human drivers, and do better when it comes to fault detection.
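That fault-handling behaviour is, at heart, a simple state machine.  Here is a minimal, entirely hypothetical Python sketch of the logic as described above; the state names and methods are my own invention for illustration, not anything from a real mine truck’s control system:

```python
from enum import Enum, auto

class TruckState(Enum):
    HAULING = auto()
    FAULT_REPORTED = auto()
    RETURNING = auto()
    WAITING_FOR_REPAIR = auto()

class AutonomousTruck:
    """Hypothetical sketch of the fault-handling loop described above."""

    def __init__(self):
        self.state = TruckState.HAULING

    def report_fault(self, fault: str):
        # "Tell the world": broadcast the fault to the mine network.
        print(f"Fault detected: {fault}. Notifying control.")
        self.state = TruckState.FAULT_REPORTED

    def step(self):
        if self.state == TruckState.FAULT_REPORTED:
            # Drive itself to the workshop...
            print("Driving to workshop.")
            self.state = TruckState.RETURNING
        elif self.state == TruckState.RETURNING:
            # ...park and wait until fixed.  Machines do patience well.
            print("Parked at workshop. Waiting for repair.")
            self.state = TruckState.WAITING_FOR_REPAIR

truck = AutonomousTruck()
truck.report_fault("hydraulic pressure low")
truck.step()
truck.step()
```

The point of the pattern is that every fault funnels into the same safe terminal state: park and wait, indefinitely if need be.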

I found the article interesting, but far too simplistic in its treatment of the issue.  There is no doubt that an autonomous, self-aware artificial intelligence could be dangerous.  If it had a drive for self-protection, it would want to secure its own essential supplies, and it could hardly fail to regard us as a threat.  If it gained access to the means to act on that threat, it might very well do so.  There is a theory that intelligent organic life in the universe might be very rare because intelligent machine life tends to supplant it.  Very science-fictional.  But genuinely scary.  And intelligent machines could travel between the stars.  It’s a long way, but as I said, machines do do patience very well.

The article says, in part, about controlling an artificial intelligence: “Perhaps the most powerful tool would be restricting it to custom hardware. If the AI was unable to run on anything but a specific piece of equipment, it would have limited effect on other systems.”  Oh yeah?  That terminator seemed quite capable of substantially affecting other systems.  An armed AI drone could too.  A missile will substantially affect many systems.  This response strikes me as being like people who say “Violence never solves anything” – clearly they have never lost a fight.

Of course a single drone is not likely to decide to eliminate humanity.  And if it did try, it is limited in fuel, weapon load and spares, cannot service itself, and can only do so much damage.  The threat, if threat there is, would come from networks.  That drone needs (and in our society has) a massive network to keep it flying.  For a Skynet system to be dangerous, it would need some critical level of networking and, given that our tools are mostly designed for the human form, it would probably also require aid from people or access to human-shaped robots.  (Although not to drive the mine dump trucks.)  So Skynet is a long way away.

Of course, a networked global AI need not be hostile to humanity.  It need not even have a drive for self-preservation.  If you have ever watched the old John Carpenter student film Dark Star, you may remember the talking bomb – entirely happy to discuss philosophy, but it still had to explode on schedule.  However, the smart money would be on intelligence having a drive for self-preservation.