A couple of people here at the school and I have started a robotics club. Why robots?
I promise it’s not a robot army. I swear.
Before I started school, over a year ago in the summer of 2018, I went to the campus to chat with one of the teachers here, who took time out of his schedule to talk to me about his class. I did some digging beforehand and found out he had previously worked on a satellite that was going to be sent into orbit. When I asked him about it, he said he joined the group when he transferred to the University of Victoria from Camosun. He was so successful that when he completed his degree, the university gave him $20,000 to pursue his master's. They even won an award for the project. You can read about it here:
Anyways, they don’t have a satellite club here at the college, and the truth is most of the students here are green and inexperienced, so building a satellite might be a little out of scope for the student body at a school this small.
Turns out this is actually pretty hard.
Now, there is a 3D printing club that operates out of the school, and it’s pretty sweet. The only issue was that the 3D printer was pretty much built by the time I started last year. I was terrified of stepping into a club hungry for 3D models, only to have them find out I’d done that kind of thing for over a decade. After a decade of drafting, the last thing I wanted was someone leaning over my shoulder telling me what they wanted built.
So I talked to some of my friends here at the school, and within about five minutes we had enough interest from enough people to start up a club. This also means we’ll be privy to a $400 budget from the school to build a robot. Can we design a robot that will recognize garbage and pick it up? Will we be able to create a swarm of bots that could build a structure? What will $400 buy us?
Well, not a whole lot. Nothing sophisticated, at least not yet. What we can get is some rudimentary components: some motors, some wheels, and enough parts to make a bot that avoids running into walls. Then we can teach it to make decisions about which way to turn and how to avoid obstacles. Then we mass-produce them, arm them with lazers, and march across the planet instilling my plan for a new world order. Under my iron wing, we’ll build a utopia together! My detractors shall be buried under the wheels of my doom machines!
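For the curious, that wall-avoiding behaviour can be sketched in a few lines. This is a hypothetical sketch, not club code: `read_distance_cm` stands in for whatever sensor the $400 actually buys, and the motor commands are just strings.

```python
import random

# A toy sketch of a wall-avoiding control loop: read a (pretend) distance
# sensor, drive forward while the path is clear, turn away when something
# gets too close. All hardware here is imaginary.
SAFE_DISTANCE_CM = 20

def read_distance_cm() -> float:
    # Stand-in for an ultrasonic sensor reading.
    return random.uniform(5, 100)

def decide(distance_cm: float) -> str:
    """The whole 'decision making' part: forward if clear, otherwise turn."""
    if distance_cm > SAFE_DISTANCE_CM:
        return "forward"
    # Pick a turn direction at random so the bot doesn't get stuck in corners.
    return random.choice(["turn_left", "turn_right"])

for _ in range(5):  # five ticks of the control loop
    print(decide(read_distance_cm()))
```

Lazers, regrettably, are left as an exercise for the reader.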
This is the most relevant picture I could find.
Truthfully, we will be building something more like this…
But a guy can dream of mechanized world domination, right?
The Illustrious Mr. Charlton
p.s. Apparently my spelling of lazer is incorrect, but I absolutely refuse to spell it with anything other than a “z”.
p.s.s. To be frank, ruling the world seems like a big pain in the ass. It’d be like the old game ‘King of the Castle’, except the little hill you’re playing on is made of garbage and it’s on fire.
p.s.s.s. See, that’s a GREAT idea for a robot. It could put out trash fires. Why am I not in charge of things?
p.s.s.s.s. That’s right, the whole ‘pain in the ass’ thing.
The mighty voice answered without hesitation, without the clicking of a single relay.
“Yes, now there is a God.”
Fredric Brown – “Answer”
Five games of Go took place between the 9th and 15th of March. Go is a Chinese strategy board game created over two and a half thousand years ago. Even though it has simple rules, it is considered more difficult than chess, as the board is much larger, giving players a wider scope in which to play. Games can last up to six hours. Professional rankings run from 1st dan, the lowest, to 9th dan. Those five games in March were between Grandmaster Lee Sedol, a 9th dan from South Korea, and AlphaGo, an artificial intelligence developed by Google DeepMind. AlphaGo ended up winning four of the five matches.
You may be asking yourself what the big deal is.
Go is a notoriously difficult game with a staggering number of board positions and outcomes. There are more legal layouts of a Go board than there are atoms in the observable universe. Because this number is so high, a computer can’t brute-force its way to a victory the way it has in the past with chess.
Computers aren’t very smart. At their core, they can only answer yes or no, one or zero, or in actuality, whether there is current passing through an electrical gate or not. This is binary. What a computer excels at, thanks to seventy years of electrical engineering innovations, is answering yes or no millions of times in less than a second. Consider a password four digits long. To crack the code, a computer would stand at the gate, bang its head against the door, yell “0000”, and wait for a response. If that’s the correct password, great: it’s in. If not, it bangs its head against the door again, yells “0001”, and waits for a response. This is how a computer ‘brute-forces’ a solution.
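Here’s that head-banging loop as a toy Python sketch. The four-digit secret is made up for illustration; the point is just that the machine tries every code in order until one works.

```python
def brute_force_pin(secret: str) -> tuple[str, int]:
    """Try "0000", "0001", ... until the secret is found.
    Returns the cracked code and how many guesses it took."""
    for attempt in range(10_000):      # 10^4 possible four-digit codes
        guess = f"{attempt:04d}"       # pad with zeros: 0 becomes "0000"
        if guess == secret:            # bang head against door, wait for answer
            return guess, attempt + 1
    raise ValueError("secret was not a four-digit code")

code, tries = brute_force_pin("4821")
print(code, tries)  # -> 4821 4822
```

Ten thousand guesses is nothing to a modern machine; it’s done before you finish blinking. The trouble, as we’re about to see, is what happens when the number of possibilities stops being ten thousand.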
There are roughly ten to the power of one hundred and seventy legal positions on a Go board; for comparison, the estimated number of atoms in the observable universe is only around ten to the power of eighty. Written out, that’s a 1 followed by 170 zeros.
No computer has the ability to brute-force a number so large. AlphaGo had to learn how to play. It did so by first studying games between human opponents, and when it became proficient enough, AlphaGo started to play against itself. Within a short amount of time, it had played more games than any person alive. With that knowledge, it was able to defeat Lee Sedol. Not just defeat him: AlphaGo made moves no human would make, some so baffling that Lee Sedol had to get up from the table and take fifteen minutes to regain his composure.
The game of Go represented the last milestone for artificial intelligence in the arena of board games. When Deep Blue beat chess Grandmaster Garry Kasparov in 1997, a machine victory at Go was considered a hundred years away. Five months ago, experts said it would be another ten years before a computer could play at the Grandmaster level. What does that mean for tomorrow?
Here’s some music to begin tomorrow’s celebration.
Computers’ ability to replace human beings at tasks once thought too complex to automate is becoming increasingly real. Cars that drive themselves are just around the corner. Many manual-labour jobs will be replaced. There will be a technological revolution that will dwarf the industrial one.
Where exactly do humans have the edge? What if a machine becomes self-aware, and decides that it’s so much better at work that humans are obsolete? Would it put humans in people zoos? Would it wipe humanity out?
There is one thing a human being has over any artificial intelligence: emotions. One of the things needed to make a decision, specifically an irrational decision, is emotion. If you give a rational task to a computer, something like “learn Go”, it can do that. But a computer isn’t a biological organism. The human will to live, learn, strive and become better isn’t a logical process, it’s a biological one: the desire to propagate the species’ genes onwards.
The neuroscientist Antonio Damasio studied people with damage to the parts of the brain where emotions are processed. His subjects had lost the ability to feel any emotion, and what he found is that they had also lost the ability to make irrational decisions. If they had a choice between chicken and fish for dinner, there was no rational method of choosing between the two. They knew they had to eat, but became stuck when facing a choice that had no real impact on the outcome of being fed. As a result, they were unable to come to a decision.
Teaching a computer to play Go is incredibly challenging, but we’ve proven the human race is capable of it. What may be an impossible task is giving a computer curiosity, drive, ambition. We might be surprised if the first question an AI asks is “Now what?”. At its core, no matter how good a computer gets at analyzing a problem, it will still more than likely need to be asked to solve the problem in the first place. A computer, even an intelligent one, isn’t driven by a need to be better. I’m not convinced it ever will be.
The terrifying aspect of AI isn’t the artificial intelligence itself, but who happens to be at the helm. AlphaGo made moves outside the scope of human thinking. An AI isn’t bound by any human sense of regulation or morality unless we program it to be. If someone with a lack of foresight asks the computer a question, the computer might come to a solution that is illegal, immoral, or even disastrous. This has already happened: an online shopping bot, after being given $100 in bitcoin, purchased illicit drugs. Asking an artificial intelligence to “make me the richest person alive” may result in the computer concluding that the easiest way to do this is to wipe out every person with more wealth. We have to be very careful about what we ask of a machine that has the potential to do anything. Humans, once again, are at the mercy of our own hubris.
The robot apocalypse is coming, and it won’t be a fight for your life, it will be a fight for your livelihood. This shouldn’t be a bad thing, though. Computers were created to make lives easier, so that people would have more leisure time. For many people, this hasn’t been the case. Capitalism and the rules of supply and demand are slowly becoming obsolete. In the digital age the rules are changing, and the idea of ownership has been challenged for the last decade and a half. “You wouldn’t download a car, would you?” is the cry of anti-piracy legislators. If people had the resources to do so, you can be certain they would.
To put things bluntly, if we manage this paradigm shift correctly, it could usher in the greatest renaissance the planet has ever seen. If we don’t, inequality could grow so vast that it separates classes of people so thoroughly they may as well be two different species.
If they do actually create an artificial intelligence capable of real, rational thought, I’m not sure how much I’m looking forward to it. The last thing I want is to deal with my toaster having an existential crisis when I want a bagel.
The Illustrious Mr. Charlton
p.s. The only reason people think Judgement Day would happen is because humans either subjugate or eradicate every other species on the planet.
p.s.s. Once again, the real enemy is MAN!
p.s.s.s. I took some leaps here, but would love to discuss it with anyone in the future.