What is artificial intelligence?
What is Artificial Intelligence? This question has been hotly debated ever since the advent of programmable digital computing in the 1940s.
Alan Turing, a British mathematician and logician, was one of the first to attempt an answer. In his landmark 1950 paper, he postulated that such machines could learn on their own and show other signs of intelligence. His “Turing Test” proposed that if a human was unable to distinguish between responses from a machine and from a human, then the machine could be considered “intelligent.” This conclusion is still being debated today. The term artificial intelligence (AI) was not officially coined until 1956, when John McCarthy introduced it at a conference at Dartmouth College. Over the decades the debate has continued, becoming increasingly crowded with an array of voices, some of which are clearly misinformed. To illustrate the complexity of the debate, here are some historical examples to consider:
Autopilot functionality has been around in airplanes in one form or another since 1914, although it’s certainly more advanced today. Back in 2015, The New York Times noted that Boeing 777 pilots spent an average of only 7 minutes manually piloting their planes! Is this artificial intelligence? Frankly, it depends on who you ask.
When pocket calculators first appeared in the early 1970s, people were understandably impressed with how a tiny machine could outmatch humans in quickly performing complex calculations. For this reason, many considered calculators to be a form of AI. But as computational power has increased, so have our expectations of AI. Today, we want computer systems to be capable of reasoning in uncertain environments, not just executing complex instructions. In order to make machines “intelligent,” AI draws on specific advanced technologies such as natural language processing (NLP), robotics, vision technology, machine learning (ML), mathematics, probability, and statistics.
Viewed through this lens, AI has played a vital role in all sorts of valuable technological developments. It has contributed to voice and image recognition, digital assistants, chatbots, robo-readers, plagiarism detection, search engines, spam filtering, automated grocery stores, weather forecasting, recommendation engines, ride-sharing, driverless cars, house-cleaning robots, Internet of Things (IoT) platforms, drones and other government surveillance tools, as well as space exploration and complex games such as chess, backgammon, and Go.
Part of what stands in the way of a better understanding of AI is its inconsistent portrayal in popular culture, such as the many movies that portray AI in one extreme or another. On the doomsday end of the spectrum are movies that depict AI running amok and trying to destroy humanity (think Terminator, The Matrix, 2001: A Space Odyssey, and so on). On the “wishful thinking” end, you have downright lovable characters such as R2-D2 and C-3PO in Star Wars and the robots in WALL-E. In the middle of the spectrum are films that take a more nuanced, complex approach to exploring where AI could go, such as Ex Machina and Bicentennial Man. But these are all just fanciful speculations presented to us on the big screen. These movies tend to focus on what houses the AI algorithms, typically some form of human-like robot. But for real-world applications, it is more informative to view AI by how information is processed, irrespective of how it is presented.
In general, AI can be divided into three levels of complexity:
- Narrow AI
- General AI
- Super AI
Narrow AI (or Weak AI) is the kind of AI widely available today. It does not have a “consciousness” and is not sentient, nor is it emotional like humans. It is called narrow because it can handle only one specific kind of task, like playing chess. In some cases, more than one Narrow AI algorithm is combined to handle a set of single tasks. For example, autonomous cars seem more complex because they use multiple narrow AI systems to drive a vehicle. It gets even trickier when you’re interacting with Siri, Alexa, or your Google Assistant. Surely they’re more than Narrow AI? Not so. While these devices are getting better and better at natural language processing, they still perform only one specific kind of task at a time. While impressive, Narrow AI falls short of how we process and interpret information. In this sense, Narrow AI does not match human intelligence.
Let’s explore how electronic assistants work a little further. You ask Alexa a question. Alexa processes your spoken words and then submits them to a search engine to find what you’re looking for, or executes a simple task like playing a favorite music genre or a specific song you want to hear. Ask Alexa an abstract question such as “What is the meaning of life?” and you’ll either get a vague answer that makes no sense, or you’ll be directed to webpages with content that does address the question.
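The narrow pipeline described above can be sketched in a few lines of code. The following is a minimal, hypothetical illustration of keyword-based intent routing, not Alexa’s actual implementation; the intents and action strings are invented for this example. The point is that each recognized intent maps to exactly one predefined action, and anything outside the programmed intents falls back to a generic search.

```python
def route_intent(utterance: str) -> str:
    """Map a transcribed utterance to one narrow, predefined action.

    A toy sketch of a Narrow AI assistant: each branch handles exactly
    one kind of task. These intents and actions are hypothetical.
    """
    text = utterance.lower()
    if "play" in text:
        return "ACTION: start music playback"
    if "weather" in text:
        return "ACTION: fetch weather forecast"
    # Abstract or unrecognized questions fall back to web search,
    # much as Alexa directs them to webpages.
    return "ACTION: submit query to search engine"


print(route_intent("Play some jazz"))                 # start music playback
print(route_intent("What is the meaning of life?"))   # falls back to search
```

Adding a new capability to such a system means writing another branch by hand; the assistant never generalizes beyond the intents its programmers anticipated, which is exactly what makes it “narrow.”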
As an interesting aside, the answer Alexa gives to the question about the meaning of life varies but always includes the number 42, such as, “The meaning of life depends on the life in question: 42 is a good approximation” or “The answer is 42, but the question is more complicated than that.” Why? Because there are programmers at work behind Alexa, pulling the strings. Douglas Adams’s science fiction comedy classic, The Hitchhiker’s Guide to the Galaxy, has a running joke about 42 ultimately being the answer to the question of life, the universe, and everything. While perhaps not an entirely satisfactory answer, ask your friends on Facebook what the meaning of life is and the responses will likely make Alexa’s answer seem brilliant by comparison. This goes to show that comparing intelligences is itself a tricky task. The catch is that Narrow AI is often so good at its one task that we can fool ourselves into thinking it is doing something more complex than it is.
In summary, the single-task limitation isn’t meant to take anything away from the usefulness of Narrow AI. It remains an amazing feat of human ingenuity and technological innovation, and it’s doing all sorts of things to make our lives more convenient. Just be careful to avoid making it out to be more than it is.
General AI (Strong AI) is where you would see machines exhibiting human-style intelligence. The operative word here is “would” because we’re not quite there yet, although the way the latest autonomous vehicles operate feels like they’re beginning to cross into General AI territory. General AI would be able to generalize tasks and perform any intellectual function a human can. It would not be constrained to specific tasks because it would be able to think and reason just like any human. It would be able to tap into prior knowledge when relevant and apply it to situations that require problem-solving. It would proactively assess the environment and come up with creative new ideas. It would interact with social and expressive relevance. It would be sentient, conscious, self-aware and, perhaps, even emotional. In even the most challenging Turing Test imaginable, a General AI-enabled machine would be indistinguishable from a human. In short, for good or bad, General AI would be on par with human intelligence.
Super AI, as you might imagine from the trend, is AI that goes well beyond the capabilities of any known human intelligence, even those of the most brilliant minds we’ve seen to date. It would be more creative and wiser than humans in every respect. Perhaps this is why the idea of Super AI is the form of AI that worries people the most: it might decide the world would be better off without humans! Super AI would have the ability to set up spontaneous networks and connections for its needs and then dissipate them as quickly as they appeared. In fact, this capability is on the near horizon. As scary as it may seem, experimentation to upload internet-level information to the brain is already happening: Elon Musk’s Neuralink is “developing ultra-high bandwidth brain-machine interfaces to connect humans and computers.” According to Ray Kurzweil, by 2045, the brain’s neocortex region will be connected to a more efficient storage system where our species can augment the retrieval, processing, and synthesis of information.
Eventually, our individual and collective consciousness will be a fusion of biology and machine. Perhaps our species will benefit from massive improvements in cognitive capability as we balance what is good for the individual with what benefits society. AI is already capable of making decisions without human intervention, which many find unsettling. There is nothing artificial about AI’s impact on jobs: automation is replacing human labor, and the pace of replacement will accelerate in every business sector. At some point, decision making will doubtless favor AI and the machine over human labor. This loss of control does not automatically equate to good or bad, but one thing is clear: AI must be constructively debated and regulated across the economic, geopolitical, biological, environmental, social, medical, educational, legal, political, cultural, philosophical, and ethical spectrums. If we fail to reach a unified, global stance on AI, the future may indeed be worrisome. At the time of this writing, San Francisco, a tech hub city, had banned the use of facial recognition software. This contrasts starkly with China, where the government tracks and identifies everyone using facial recognition software. Critics of the San Francisco decision argue that an outright ban ignores some of the technology’s benefits and should have been debated, which it was not.
Over the years, we’ve come a long way with AI. However, given AI’s monumental potential, we are still on the bottom rung of the AI ladder of possibilities, which means there are many more exciting (or scary, depending on your perspective) developments ahead.