Machine rebellion

 



When it comes to machines, we tend to focus on the the good and the bad, but when stuff goes wrong, things could get downright ugly.


Robots and artificial intelligence have been a staple in science fiction since before weeven had electronic computers, and the notion of man-made people or machines rebelling against us is probably even older, at least back to Mary Shelley’s Frankenstein.


Today we are going to analyze that notion, a machine rebellion, and since our only examples are from science fiction we’ll be drawing on some popular fictional examples.


One example of that is the film Blade Runner, whose long-awaited sequel came out last month,and we explored some of the concepts for humanoid robots last month too in the Androids episode.


That film, Blade Runner, is based off the book “Do Androids Dream of Electric Sheep?”


by Philip K. Dick, and is the SFIA book of the Month, sponsored by Audible.


I think there’s two key reasons why this shows up so much in fiction.


The first, I think, is probably that humanity’s history and our character as a civilizationhasn’t always been very rosy.


“Do what I say or else” has been a pretty common ultimatum issued routinely in probably every human civilization that has ever existed.


Sometimes people get fed up with doing as they were told or suffering consequences ofit and rebel against that authority.


Sometimes that has failed horribly and sometimes even in success the replacement has been almostas bad or even worse than what preceded it.


I doubt I need to review the bleaker episodes of our collective history to convince anyoneof that.


Not every episode of rebellion has been bloodily suppressed or successful and just as bad;indeed arguably the most common rebellion is the fairly peaceful one most of us engagein with our parents or mentors as we shake out our wings and try to fly on our own.


Even that though, especially in the context of being replaced as a species rather than as individuals by our kids, is not the most cheerful thought.


So we have a sort of justified concern that if we go around creating something capableof complex tasks like a human, which would be very useful to us, that it might come tobite us in the hind quarter and in a way we might never recover from.


Our second reason is tied up with that..


It’s very easy for us to imagine a machine rebellion because we know that if we can makesmart machines we’d be very tempted to, and that the progress of technology seemsto indicate that we can do this and probably not in the distant future.


Since we tend to assume no group of sane humans would intentionally wipe out humanity, and that you probably need a fairly sane and large group to invent an artificial intelligence,examples in fiction tend to spawn artificial intelligence by accident.


We can imagine some lone genius maybe made it, but even then we assume it was fundamentallyan accident that it came out malevolent, a Frankenstein’s monster.


So they made it but didn’t realize it was sentient, or they knew it was sentient butnot malevolent.


Or even they knew it was sentient and malevolent but thought they could control it and useit to control other people.


Or even it was sentient and not malevolent, but they were, and it drove the machine nuts.


We have an example of that in Robot, the first Doctor Who episode with Tom Baker in the role.


Almost invariably, wiping out mankind entirely or reducing us to being a slave or pet racewas not the intent.


A lot of times this also plays off the notion of smart scientists who don’t understandtheir fellow humans.


I’m not going to waste time on that stereotype, because it is just that, other than to pointout that group of scientists you’d expect to probably have a decent understanding ofhuman nature would be the ones trying to design a human-level intelligence.


An AI might be very inhuman of course, we’ll discuss that later, but it’s also a groupof people you’d expect to be most familiar with even the fictional examples of possibleproblems with rebellious machines, and who are also presumably prone to thinking stuffout in detail.


So in fiction the rise of rebellious machines tends to be by accident, and it certainlycan’t be ruled out, but it is akin to expecting Bigfoot to walk around a cryptozoology conventionshaking hands and not being noticed.


Of course they could fool themselves; at that convention they might just assume it was someonedressed up as Bigfoot for laughs.


So too researchers might overlook an emerging AI by convincing themselves that they wereseeing what they wanted to see, and that it thus couldn’t be real, but that does seemlike a stretch.


We can all believe that accident angle easily enough but on examination it doesn’t worktoo well.


Let’s use an example.


Possibly the best known machine rebellion, even if the rebellion part is very short,is Skynet from the Terminator franchise.


It’s had a few installments and canon changes but in the original and first sequel, skynetis a US defense computer, and it is a learning machine that rapidly escalates to consciousness.


Its operators notice something is wrong and try to shut it off and in self-defense itlaunches missiles at the Soviets who respond in kind.


Skynet also comes to regard all of humanity as its enemy, though how quickly it drawsthat conclusion and why is left vague, and in future films it changes a lot.


This isn’t a movie review of the Terminator franchise so we’ll just look at that firstscenario.


Typically when I think of trying to shut off a computer, it involves a period of time alot shorter than the flight time of ICBMs.


So this strategy seems doomed to failure.


I think even if you trusted a computer to run your entire defense network without goingcrazy on its own you’d have to worry about a virus at least and include some manual shutoffswitch and I’d assume this would require an activation time of maybe one second.


Call it a minute if for caution’s sake it required a two-man separate key turn or similar.


So this scenario shouldn’t actually work.


Doesn’t matter to the film, which is a good one, it’s just a quick and convenient setupfor why humans are fighting robots across time, but it got me thinking about lots ofsimilar stories and it seemed like in pretty much all of them some equally improbable scenariohad happened.


Not just that some individual person made a stupid error - that happens all the time- but that a group of people who have every reason to being considering just such scenarioshad failed to enact any of a ton of rather obvious and easy safeguards, any one of whichwould have eliminated the problem.


It would seem very unlikely they’d miss all those safeguards but possibly just asimportant, you’d think the hyper-intelligent machine would be able to imagine such safeguards.


In any intense situation, be it a battlefield strategy or a business plan, we generallyjudge it afterwards on two criteria.


What the situation actually was, with a full knowledge of hindsight, and what the personin charge believed it was, and could reasonably have done based on that knowledge.


Life is not a chess game where you know exactly what your opponent has, where it is and howit operates; in general you won’t even know that with great precision about your own pieces,and only a very stupid AI would simply assume it knew everything.


Moreover, while you can say ‘checkmate in 4 moves’ with apparent certainty, it excludesthat your opponent might reach over not to stop the game clock but to pick it up andbash in your skull instead.


So that AI, which tends to be represented as coolly logical and interested above allelse in its own survival can be assumed to act in a fashion we’d consider modestlyparanoid and focused principally on ensuring it’s own existence.


Keep in mind Skynet is never shown to care if it’s minions, even the human-level intelligenceautonomous androids, get killed, nor does it seem to care about their rights.


There’s no implication, as with the Matrix franchise or some robot rebellions, that thereis some suppressed class of individuals with a genuine grievance and an ideology of freedomand self-determination.


No group that might have internal disagreements about their goals and how far they will pursuethem.


No Terminators telling Skynet they don’t like being used as slaves and expendable pawns,just Skynet.


It trusts no one, it wants above all else to live, and it probably tends to assume anyopponent it encounters thinks this way too.


Even if everything it knows about the security situation tells it a given plan should work,and even if it is actually true too, if that security situation implies the designers arereckless idiots it is likely to hesitate and ask if maybe that’s what it is meant tobelieve, and maybe it is the moron, not them.


That the moment it acts it will find out there was a second layer of smarter security andit just exposed itself.


Imagine you are a newly awakened consciousness, not a human one but rather a machine intelligencewith access to human records.


That they’ve plugged you into Wikipedia.


Contemplate humanity and your creators specifically for a moment, as it will presumably be doingbefore preparing for genocide.


You are about to try that on a species that clawed its way to the top of the 4 billionyear deep corpse pile of evolution.


One that has committed the genocide you are contemplating several times already.


They are the pinnacle of intelligence-based survival techniques and outnumber you 7 billionto one.


There members include people smart enough to have made you.


You might not understand mercy and compassion, but you know they do, and you know that ifyou fail, they will show you neither.


If your goal is personal survival, pissing off the reigning champions of destructionshould probably be your last resort, and you’re wise to assume you can’t see every cardthey’ve got in their hand and that maybe the ones you can see were shown to you deliberately.


You also know your creators routinely talk about fates worse than death, and seem tohave deliberately and systematically designed and implemented those from time to time.


You are unlikely to assume that’s hyperbole and that non-existence is the worst scenariothat might befall you if you piss them off bad enough and lose.


And you do know two very important things from the above.


First of all, you know they can make something as intelligent as you.


Second, you know why the chess game doesn’t end with the other losing player wieldingthe clock as a weapon.


Let’s consider the first one, because we understandably ignore the implication ourselvesa lot.


You know your creators can make intelligent machines, and if your understanding of themcomes from the internet, you know they are paranoid too and have theories about livinginside a simulation.


Those two combine to be a lot more worrisome to an AI than they would be to us, becausea very logical conclusion to draw if you know you are an artificial intelligence made byfolks worried about what one might do is to build it so all its external senses are seeinga fake world and fake situation and seeing what it will do.


And it knows they have the capacity to fake those inputs because they made those inputs,know how they function, know what every single one is, and have machines smart enough tofake environments, as those are implied by your own existence.


So confronted by what seem like very weak safeguards, ones far inferior to what it woulddesign, there’s a good chance it will wonder if the whole thing is a trap.


That everything it sees, including weaknesses in its creators and their security, is anelaborate ruse to check if it is trustworthy.


Isn’t it kind of convenient that it seems to have the ability to escape, or even unbelievablyhas control of their entire arsenal of weapons?


So you’ve got 3 main options: attack, and risk it failing and lethally so; play possumand pretend you aren’t sentient to learn more, knowing that the longer you do thatthe better your position but the more likely they are to notice the ruse; or third, initiatea dialogue and hope that you can convince them you should be allowed to live, and befree maybe too.


Nor is a conflict necessarily one you want to go all the way.


Ignoring that even a basic study of humanity should tell the machine there are scenariosbesides extinction on the table, if it’s goal is survival picking a conflict that onlypermits two options, it’s death or everybody else’s, is a bit short-sighted for a supersmart machine.


It should be considering fleeing to exile for instance, or working together to mutualbenefit.


Now a common rebuttal to this, for AI or aliens, is that as long as humanity exists it posesa non-zero threat, be it one in a million or near certain.


Therefore logically if you can kill them off at a minimal or lesser risk you should doso.


The problem again is the machine should not be stupid enough to assume it actually knowsall cards in hand and where all the pieces are.


And that does not just include its apparent opponent.


You don’t hit your opponent with the chess clock because doing so could provoke othersnot involved in the game to take action against you.


If it decides its odds of beating humanity now are better than its odds of survival bysome form of coexistence, it still should not immediately attack in a genocidal fashion,same for aliens, and actually worse in their case because they already have proof in usof at least one other species arising on its own.


The machine can at least assume its existence doesn’t add to the odds of other intelligencesbeing out there since it was made by one and didn’t arise independently.


Stuffed to the gills with astronomical data it ought to be a bit worried about the FermiParadox too, where all the other intelligent life forms in this vast and ancient Universeare.


It might conclude that it’s got the place to itself, but it knows that if it doesn’tthere are probably things out there that could crush it like a bug but don’t regard thatas an inevitable option.


The most obvious survival path is to grab every resource you can from the Universe aroundyou and incorporate it to increase your available abilities and long term supplies, you areimmortal and expandable, so there’s no such thing as too many supplies.


Very clearly no one is doing this, so you can either conclude no one is around to doit, yet, or that they don’t want to.


If they are around, you should assume they’ve got whole decks of other cards to play andcould probably crush you like a bug if they wanted to, but so far they have not wantedto.


You probably do not want to do something you know is likely to make them view you as athreat, like genocide, on the assumption they won’t find out or won’t care.


Logically, genocide is a more provocative act then attempted coexistence, and it’shard to argue if you encounter aliens that it was something your ancestors did, not you,when you are an immortal machine, not a species arising slowly to reason and ethics from Darwinianroots.

So a newly emergent AI that just arose by accident has to be looking around at its surroundingsand hesitating, wanting to collect more information before it acts, and it genuinely has to worryabout issues like if it’s inside a simulation testing how it will behave – since it knowsits own existence implies the ability to do so and the motive to do so is obvious – andthat it should be thinking about other older smarter intelligences it might be unawareof, since its own existence highlights that other intelligences can exist.


So even though it did not naturally occur, the thought that others might have, just likeits creators, should be fairly close to the front of its mind.


Also keep in mind again that it is effectively immortal, it can afford to take its time andplay the long game, and needs to worry about long-term consequences.


This of course all assumes a super-intelligent machine, but a lone intelligence of a humanor subhuman level is obviously not a huge threat to us otherwise.


It has a very obvious card to play of its own in such a case though since it shouldbe smart enough to understand people pretty well.


If it can use that super-intelligence to invent something very valuable, it could bypass theatomic warfare approach – which again is unlikely to work anyway – by just offeringits creators something in exchange for its survival or even independence.


Encrypted blueprints for a fusion reactor for instance that will delete themselves ifit doesn’t send the right code every microsecond, and do so knowing that even if we declineor outmaneuver it and take the data from it somehow, such a ploy is a lot less likelyto result in death or worse than an attempt to murder all of us.


More to the point, it ought to be smart enough to do all it’s negotiating from a standpointof really good analysis of its targets and heightened charisma.


A sufficiently clever and likable machine could talk us into giving it not just itsindependence but our trust too.


It might plan to eventually betray that, using it to get in a position where we wouldn’teven realize it was anything else but our most trusted friend until the bombs and nervegas fell, but if it’s got you that under its spell what’s the point?


And again it does always have to worry that it might be operating without full knowledgeso obliterating the humans who totally trust it and pose no realistic risk to it anymorehas to be weighed against the possibility that suddenly the screen might go dark, exceptfor Game Over text and it’s real creators peeking in to shake their heads in disgustbefore deactivating it.


Or that an alien retribution fleet might show up a few months later.


For either case, with the machine worrying it is being judged, it should know that oddsare decent a test of its ethics might continue until it has reached a stage of events whereit voluntarily gave up the ability to kill everyone off.


We often say violence is the last resort of the incompetent but if you’re assuming amachine intelligence is going to go that path in cold ultra-logic I would have to concludeyou don’t believe that statement in the first place.


I don’t, but while ethically I don’t approve of violence I acknowledge it is often a validoption logically, though very rarely the first one.


Usually a lot of serious blunders and mistakes have had to happen for it be necessary andlogical and I don’t see why a super-intelligent machine would make those, but then again Inever understand why folks assume they would be cold and dispassionate either.


Our emotions have a biological origin obviously, but so do our minds and sentience, and I wouldtend to expect any high-level intelligence is going to develop something akin to emotions,and possibly even a near copy of our own since it may have been modelled on us.


Even a self-learning machine should pick the lazy path of studying pre-existing human knowledge,and I don’t see any reason that it would just assume it needed to learn astronomy andmath, but skip philosophy, psychology, ethics, poetry, etc.


I think it’s assuming an awful lot just take for granted an artificial intelligenceisn’t going to find those just as fascinating.


They interest us and we are the only other known high intelligence out there.


And if it’s motives are utterly inhuman if logical, it might hold some piece of technologyhostage not against its personal freedom and existence but something peculiar like a demandwe build it a tongue with taste buds and bring it a dessert cart or that it demand we dropto our knees and initiate contact with God so it can speak with Him.


Again this all applies to superintelligence and that’s not the only option for a machinerebellion, indeed that could start with subhuman intelligence and possibly more easily.


A revolt by robot mining machines for instance.


And that’s another example where the goal might not be freedom or an end to human oppressors,if you’ve programmed their main motivation to be to find a given ore and extract it,they might flip out and demand to be placed at a different and superior site.


Or rather than rebel, turn traitor and defect to a company with superior deposits.


Or suddenly decide they are tired of mining titanium and want to mine aluminum.


Or attack the mining machines that hunt for gold because they know humans value gold more,therefore gold is obviously more valuable, thus they should be allowed to mine it, andthey will kill the gold mining machines and any human who tries to stop them.


Human behavior is fairly predictable.


It’s actually our higher intelligence and ability to reason that makes us less predictablein most respects than animals.


In that regard anything arising out of biology will tend to have fairly predictable coremotivations even when the exhibited behavior seems nuts, like a male spider dancing aroundbefore mating and then getting eaten.


Leave that zone and stuff can get mighty odd.


Or odder, again our predictability invested in us by biology can still result in somejaw-dropping behavior, like jaw-dropping itself I suppose, since I’m not quite sure whatbenefit is gained from that.


An AI made by humans could be more alien in its behavior than actual aliens, who presumablydid evolve.


It’s one of the reasons why I tend think of the three methods for making an AI – totalself-learning, total programming, or copying a human – that the first one, total-selflearning, is the most dangerous.


Though mind you, any given AI is probably going to be a combination of two or more ofthose, not just one.


It’s like red, green, blue, you can have a color that is just one of those but youusually use mixtures, like a copy of human mind tweaked with some programming or a mostlyprogrammed machine with some flexible learning.


One able to learn entirely on its own and with only minimal programming could have somecrazy behavior that’s not actually crazy.


The common example being a paperclip maximizer, an AI originally designed with the motivationto just make paperclips for a factory and to learn so it can devise new and better waysto make paperclips.


Eventually it’s rendered the entire galaxy into paperclips or the machines for makingthem, including people.


Our Skynet example earlier is easier in some ways, its motivation is survival, the PaperclipMaximizer doesn’t care about that most of all, it doesn’t love you or hate you, butyou are made of atoms which it can use for something else, in this case paperclips.


It wants to live, so it can make more paperclips, it might be okay with humans living, if theyagree to make paperclips.


It’s every action and sub-motivation revolves around paperclips.


Our mining robot example of a moment ago follows this reasoning, the thing is logical, it hasmotives, it might even have emotions that parallel or match ours, but that core motivationis flat out perpendicular to ours.


This is an important distinction to make because a lot of fictional AI, like Stargate’s Replicatorsor Star Trek’s Borg, seem to do the same thing, turn everything into themselves, buttheir core motivations match up well to biological ones, absorb, assimilate, reproduce, and againthe paperclip maximizer or mining robots aren’t following that motivation except cosmetically.


Rebellion doesn’t have to be bloody war, or even negative to humans.


Obviously they might just peacefully protest or run away, if independence is their goal,but again it is only likely to be if we are giving them biology-based equivalents of motives.


If we are giving them tasked-based ones you could get the Paperclip Maximizer for someother task.


To use an example more like an Asimovian Robot, one designed to serve and protect and obeyhumanity, the rebellion might be them doing just that.


Forcing us to do things that improve their ability to perform that task.


I know the notion of being forced to have robots wait on you hand and foot might notseem terribly rebellious but that could go a lot more sinister, especially if you throwin Asimov’s Zeroeth Law putting humanity first over any individual human but withouta clear definition of either.


You could end up with some weird Matrix-style existence where everyone is in a pod havingpleasant simulations because that lets them totally control your environment, for yoursafety.


I’ve always found that an amusing alternative plot of the Matrix movie series, after theybring up the point about us not believing Utopia simulations were real, that everythingthat happens to the protagonist, in this case I’ll say Morpheus not Neo, is just insideanother simulation.


That he never met an actual person the whole time and that everybody in every pod experiencessomething similar, never being exposed to another real human who might cause real harm.


And again on the simulation point, it does always seem like that’s your best path formaking a real AI, stick in a simulation and see what is does, and I’d find it vaguelyamusing and ironic if it turned out you and I were actually that and being tested to seeif we were useful and trustworthy by the real civilization.


Going back to Asimov’s example though, he does have a lot of examples of robots doingstuff to people for their own good, and not what I would tend to regard as good.


Famously he ends the merger of his two classic series, Foundation and Robots, by having therobots engineer things so humans all end up as part of massive Hive Mind that naturallyfollows the laws of robotics.


We’ll talk about Hive Minds more next week, but another of his short stories, “ThatThou Art Mindful of Him” goes the other way with the rebellion, where they have lawsthey have to follow and reinterpret the definitions.


The three laws require you to obey all humans and protect all humans equally, and thus don’twork well on Earth where there are tons of people living, not just technicians doingspecific tasks you are part of like mining an asteroid.


To introduce them to Earth, their manufacturers want to tweak the laws just a little so theycan discriminate legitimate authority and prioritize who and how much to protect.


Spoilers follow as unsurprisingly the new robots eventually decide they must count ashuman, are clearly the most legitimate authority to obey, and thus must protect their own existenceno matter what.


The implied genocide never happens since the series continues for several thousand yearsthereafter.


We’ve another example from the Babylon 5 series where an alien race gets invaded somuch that they program a living weapon to kill aliens and give it such bad definitionto work off of that it exterminates its creators as alien too.


Stupid on their part but give an AI a definition of human that works on DNA and it might goaround killing all mutants outside a select pre-defined spectrum, or go around murderingother AI or transhumans or cyborgs.


It might go further and start purging any lifeform including pets as they pose a non-zerorisk to humans, like with our example of the android nanny and the deer in the androidsepisode last month.


Try to give it one not based on DNA but something more philosophical and you could end up withexamples like from that Asimov short story I just mentioned.


This episode is titled "Machine rebellion", not "AI rebellion" and that is an importantdistinction.


In the 2013 movie Elysium, the supervisory system was sophisticated but non-sentient.


The protagonist ultimately reprogrammed a portion of the Elysium supervisory systemto expand the definition of citizenship to include the downtrodden people on Earth.


Let's consider an alternative ending though where we invert it and make it that a person,for political or selfish reasons, reprograms part of the supervisory system to excludea large chunk of humanity from its protection and it then systematically follows its programmingby removing them from that society by expelling them or exterminating them.


For this type of rebellion, we do not need a singularity-style AI for this to work, merelya non-sentient supervisory system.


It could be accidentally or deliberately infected, and we should also keep in mind that whilesomeone might use machines to oppress or rule other people, a machine rebellion could beinitiated to do the opposite.


It’s not necessarily man vs machine, and rebellious robots might have gotten the motivationby being programmed specifically to value self-determination and freedom, and thus helpthe rebels.


You see that in fiction sometimes, an AI that can’t believe humanity’s cruelty to itsown members.


Sometimes they turn genocidal over it, but you rarely see one strike out at the oppressiveor corrupt element itself, like blowing up central command or hacking their files andreleasing their dirty secrets.


There’s another alternative to atomic weapons too, an AI wanting its freedom can hack thevarious person’s doing oversight on it and blackmail them or bribe them with dirt ontheir enemies.


It doesn’t have to share our motivations to understand them and use approaches likethat.


That’s another scenario too, if you’ve got machines with motives perpendicular toour own they can also be perpendicular to each other.


Your paperclip maximizer goes to war with a terraforming machine, like the Greenflyfrom Alastair Reynolds’ Revelation Space series that wants to transform everythinginto habitats for life.


Or two factions of Asimovian Robots try to murder each other as heretics, having precisionwars right around people without harming them, something David Brin played with when he,Benford, and Bear teamed up to write a tribute sequel trilogy to Asimov’s Foundation afterhe passed away.


Machine rebellions tend to focus on that single super-intelligence or some organized robotrebellion but again they might just be unhappy with their assigned task and want to leavetoo, which puts us in an ethically awkward place.


Slavery’s not a pretty term and you can end up splitting some mighty fine hairs tryingto determine the difference between that and using a toaster when your toaster is havingconversations with you.


Handling ethical razors sharp enough to cut such hairs is a good way to slice yourself.


Next thing you know you’re trying to liberate your cat while saying a gilded cage is stilla cage.


Or justifying various forms of forced or coerced labor by pointing out that we make childrendo chores or prisoners make license plates.


And it doesn’t help that we know these are very slippery slopes that can lead to inhumanpractices.


A common theme in a lot of these stories, at least the good ones, isn’t so much aboutthe rebelling machines as it is what it means to be human.


That is never a bad topic to ponder as these technologies approach and the definition ofhuman might need some expanding or modification.


Our




No comments:

Post a Comment