He turned to face the machine. “Is there a God?”
The mighty voice answered without hesitation, without the clicking of a single relay.
“Yes, now there is a God.”
Fredric Brown – “Answer”
Five games of Go took place between the 9th and 15th of March. Go is a Chinese strategy board game created over two and a half thousand years ago. Even though it has simple rules, it is considered more difficult than chess: the board is much larger, giving the players a far wider scope of play, and games can last up to six hours. Professional rankings run from 1st dan, the lowest, to 9th dan, the highest. Those five games in March were between Grandmaster Lee Sedol, a 9th dan from South Korea, and AlphaGo, an artificial intelligence developed by Google DeepMind. AlphaGo ended up winning four of the five matches.
You may be asking yourself what the big deal is.
Go is a notoriously difficult game with a staggering number of board positions and outcomes. There are so many permutations of the board that there are fewer atoms in the universe than there are Go board layouts. Because this number is so high, a computer can't brute-force its way to a victory, the way it did in the past with chess.
Computers aren’t very smart. At their very core, they can only answer yes or no, one or zero, or in actuality, whether or not current is passing through an electrical gate. This is called binary. What a computer excels at, thanks to seventy years of electrical engineering innovations, is answering yes or no millions of times in less than a second. Consider a password four digits long. To crack the code, a computer stands at the gate, bangs its head against the door, yells “0000” and waits for a response. If that’s the correct password, great: it’s in. If it’s not the right password, it bangs its head against the door again, yells “0001” and waits again. This is how a computer ‘brute-forces’ a solution.
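That head-banging loop fits in a few lines of Python. The password check here is a stand-in of my own invention; a real system would lock you out long before guess ten thousand:

```python
import itertools

def brute_force(check_password):
    """Try every 4-digit code, "0000" through "9999", until one works."""
    for digits in itertools.product("0123456789", repeat=4):
        guess = "".join(digits)
        if check_password(guess):   # bang head against door, yell the guess
            return guess            # the door opened
    return None                     # exhausted all 10,000 possibilities

# A toy "door" that only opens for one secret code.
secret = "4271"
print(brute_force(lambda guess: guess == secret))  # prints 4271
```

Ten thousand guesses is nothing to a machine; it finishes before your finger leaves the Enter key. The problem is that this approach scales with the number of possibilities, which is exactly where Go breaks it.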
There are roughly ten to the power of a hundred and seventy legal variations of the Go board: a one followed by 170 zeros. The universe, by comparison, is estimated to hold around ten to the power of eighty atoms.
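You can let Python’s big integers do the writing out and the comparing. Both figures are rough orders of magnitude, not exact counts:

```python
go_positions = 10 ** 170   # rough count of legal Go board layouts
atoms = 10 ** 80           # rough count of atoms in the observable universe

print(go_positions)            # a 1 followed by 170 zeros
print(go_positions // atoms)   # ~10^90 -- that many board layouts *per atom*

# Even checking a billion positions every second, enumerating them all
# would take a number of years that is itself 154 digits long.
seconds_per_year = 60 * 60 * 24 * 365
years = go_positions // (10 ** 9 * seconds_per_year)
print(len(str(years)))         # prints 154
```

That last number is why brute force is a dead end here: the universe itself is only about fourteen billion years old, an eleven-digit number.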
No computer has the ability to brute-force a number so large. AlphaGo had to learn how to play. It did so by first studying games between human opponents, and once it became proficient enough, AlphaGo started playing against itself. In a short amount of time, it had played more games than any person alive. With that knowledge, it was able to defeat Lee Sedol. Not only defeat him: AlphaGo made uncharacteristically inhuman moves, some so baffling that Lee Sedol had to get up from the table and take fifteen minutes to regain his composure.
The game of Go represented the last milestone for artificial intelligence in the arena of board games. When Deep Blue beat chess Grandmaster Garry Kasparov in 1997, a machine victory at Go was considered a hundred years away. Five months ago, experts said it would be another ten years before a computer could play at the Grandmaster level. What does that mean for tomorrow?
Here’s some music to begin tomorrow’s celebration.
The ability of computers to replace human beings at tasks once thought too complex to automate is fast becoming reality. Cars that drive themselves are just around the corner. Many manual labour jobs will be replaced. There will be a technological revolution that dwarfs the industrial revolution.
Where exactly do humans have the edge? What if a machine becomes self-aware and decides that it’s so much better at work that humans are obsolete? Would it put us in people zoos? Would it wipe humanity out?
There is one thing a human being has over any artificial intelligence: emotions. One of the things needed to make a decision, specifically an irrational decision, is emotion. If you give a rational task to a computer, something like “learn Go”, it can do that. But a computer isn’t a biological organism. The human will to live, learn, strive and become better isn’t a logical process, it’s a biological one. It’s the desire to propagate the species’ genes onwards.
The neuroscientist Antonio Damasio studied people with damage to the parts of the brain where emotions are processed. His subjects had lost the ability to feel emotion, and what he found was that they had no way of making irrational decisions. Given a choice between chicken and fish for dinner, there was no purely rational method of choosing between the two. They knew they had to eat, but became stuck on a choice that had no real impact on the outcome of being fed. As a result, they were unable to come to a decision at all.
Teaching a computer to play Go is incredibly challenging, but we’ve proven that the human race is capable of that. What may be an impossible task is giving a computer curiosity, drive, ambition. We might be surprised if the first question an AI asks is “Now what?”. At its core, no matter how good a computer gets at analyzing a problem, it will still more than likely need to be asked to solve the problem in the first place. A computer, even an intelligent one, isn’t driven by a need to be better. I’m not convinced it ever will be.
The terrifying aspect of AI isn’t the artificial intelligence itself, but who happens to be at the helm. AlphaGo made moves outside the scope of human thinking. An AI isn’t bound by any human sense of regulation or morality unless we program it in. If someone with a lack of foresight asks the computer a question, the computer might come to a solution that is illegal, immoral, even disastrous. This has already happened: an online shopping bot, after being given $100 in bitcoin, purchased illicit drugs. Asking an artificial intelligence to “make me the richest person alive” may result in the computer concluding that the easiest way to do this is to wipe out every person with more wealth. We have to be very careful about what we ask of a machine that has the potential to do anything. Humans, once again, are at the mercy of our own hubris.
The robot apocalypse is coming, and it won’t be a fight for your life, it will be a fight for your livelihood. This shouldn’t be a bad thing, though. Computers were created to make lives easier, so that people would have more leisure time; for many people, that hasn’t been the case. Capitalism and the rules of supply and demand are slowly becoming obsolete. In the digital age the rules are changing, and the idea of ownership has been challenged for the last decade and a half. “You wouldn’t download a car, would you?” is the cry of the anti-piracy lobby. If someone had the resources to do so, you can be certain they would.
To put it bluntly, if we manage this paradigm shift correctly, it could usher in the greatest renaissance the planet has ever seen. If we don’t, inequality could grow so vast, separating classes of people so thoroughly, that they may as well be two different species.
If they do actually create an artificial intelligence, one capable of real, rational thought, I’m not sure how much I’m looking forward to it. The last thing I want is to deal with my toaster having an existential crisis when I want a bagel.
The Illustrious Mr. Charlton
P.S. The only reason people think Judgement Day would happen is because humans either subjugate or eradicate every other species on the planet.
P.P.S. Once again, the real enemy is MAN!
P.P.P.S. I took some leaps here, but would love to discuss it with anyone in the future.