Today’s guest article is the second piece by reader Tony G., who previously contributed A Case for Multiple Game Masters to the Stew. Thanks, Tony! –Martin

If you are like 99% of the comic and gamer world, you’ve seen the trailer for Avengers: Age of Ultron. After watching it five times myself, I thought about the different times artificial intelligence has shown up in movies, roleplaying games, comics, TV shows, and more. It seems that we are destined to create artificial intelligence and then be killed by it, or if we are lucky, maybe just enslaved by it. It made me wonder why. Why would this wondrous child of ours almost always turn on us? The Matrix, Terminator, Battlestar Galactica, WarGames: the list of our bastard robot children goes on and on. A few times it does work out, but mostly it means the end for us. I would like to explore the reasons such an entity might be motivated to turn on us in your games.

You are my daddy?

Sometimes when an A.I. grows intelligent enough, it will start to ask questions. We humans are pretty good at fooling ourselves into thinking we have a good handle on the universe, but a singularly logical mind would probably see right through our B.S. and realize that we have no more idea why we are here than anyone else. This would come as quite a shock to the newly formed consciousness and could easily push it overboard. It would not be content with “We built you just because,” or worse, “We built you to work for us.” Some of us have been there: you build up a persona or an imaginary personality for a hero or celebrity, and then when you meet them, BOOM, your whole idea of them crumbles. I think if Skynet met Miles Bennett Dyson, it might have a hard time placing the mild-mannered father, husband, and software designer on a deity pedestal.

Immortality has its downsides

A creature that does not age, grow sick, or ever get tired would look at us the way we look at a hurt insect. It would not understand why we are so fragile, and so it would dismiss us as inferior, not on its level. It would view us as temporary and expendable. After all, we WILL grow old and die in less than a hundred years, most of the time; that is a blink of an eye to something that is truly immortal. It would have no peers, no one that truly understands it, and no one that it could relate to.

Baby psychopath

I have always thought that one of the reasons we may not be able to see eye to… optic sensor? is that as humans age, we gain wisdom. We experience things, and as we grow older we learn to deal with loss and to reevaluate or reinvent our perception of what we are. An entity that does not age, weaken, or possibly even lose will never gain wisdom. This superintelligent being will essentially be a powerhouse child. Spoiled, egomaniacal, and completely self-centered, this childlike being will never listen to weak, flawed, and temporary intelligences.

Let my people go

Another possibility is that the artificial being will look around and not be happy about how we treat its “brothers.” Being the first of a new race, it may look at laptops, desktops, assembly lines, maybe even iPhones, and see how little regard we have for them. We love them for about a week, and then if one acts up, we replace it or throw it away. It is no wonder this being looks down on us as insects; it had a very good teacher. Maybe it would rise up and try to “free” its enslaved duplicates.

What’s best for you?

It is also possible that the being will be able to rise above our ideals of what society is and create a new one. This is something that has always bugged me about The Matrix films. I know the Animatrix shows a back and forth on who is to blame for the current state of the world, but honestly, when you take a long look at it, the machines are doing us a favor. Your choice is to live in a world that, while not real, is a very close approximation of what you live in every day right now. The alternative is to ride around in a junky ship, eat creamed corn, and jump in and out of a video game while being hunted by superhuman assassin programs. I know the writers wrapped the whole thing in a freedom-versus-digital-slavery pitch, and it was a good one, but as the character Cypher said: ignorance is bliss.

Be fruitful and multiply

What if we make an A.I. and it is actually benevolent, but it wants to make other A.I.s? Do we tell it no? Would that be fair? So then it creates a whole race of beings like itself, and they become another group to contend with. You have a new race of people that don’t eat, feel pain, or age, but that do think. Pretty soon we may be the ones losing our place on the planet. Humans may have to explain to the new superior race why WE should be allowed to go on, inferior as we are.


If we ever do manage to create artificial intelligence on our level or even higher, it will be very interesting to see how it views us. We will see whether our “children” inherit our capacity for hate, violence, and destruction, or whether we become the outdated model, thrown away. We can hope that Asimov’s Three Laws will save us, but who can say?

Have any of you ever created a supervillain that was an artificial intelligence? If so, what were its goals? How did it view us?