BY THOMAS FRISBIE, CFA
Chinese technologist Kai-Fu Lee sees a very big wave of change coming, one he calls “tsunami-sized,” that will engulf the world economy within the next two decades. Lee believes this tsunami will alter or eliminate 40-50% of all U.S. jobs by 2030. He expects the wave to enrich those who know how to exploit it and stress those who do not, widening income inequality and deepening political polarization.
Consulting firm PwC sees the same wave coming and estimates that it will add nearly $16 trillion to global GDP by 2030. But like Lee, PwC expects the benefits to be divided disproportionately, with China capturing roughly half of that growth and North America another quarter, leaving Europe and the rest of the world to share the remaining crumbs.
MIT digital economics professors Erik Brynjolfsson and Andrew McAfee also see the tsunami, and they think it may become the fourth great technology wave since 1765, one that fundamentally transforms the way people work and live. In their view, the first three technology tsunamis that changed the world were (1) the invention of the steam engine in 1765, (2) mass electrification in the late 1800s and (3) the communications revolution built on the ubiquity of personal computers, networking technology and mass access to the internet.
The “fourth tsunami”, expected by Lee, PwC, the MIT professors and many technologists, is the mass substitution of labor with artificial intelligence (AI). AI is simply software that permits a computer to sense and interact with the world around it by recognizing patterns buried within vast mountains of digital data. While still unable to match the human brain’s capacity to imagine and create, AI programs already can, or soon will be able to, surpass humans at detecting patterns in speech and images. Early signs of AI’s pattern-detection capabilities include speech recognition and translation, facial recognition and the ability of driverless vehicles to navigate city streets. Future AI programs may outperform humans in realms such as reading medical images or evaluating insurance claims. While the first two technology waves replaced muscles with machines and drove labor productivity higher, this wave threatens the highly educated and the uneducated alike. Not all college degrees will be defenses against the encroachment of very, very smart machines.
AI actually has been around in embryonic form since the 1950s. Early AI pioneers were divided in their approach to making computers “intelligent” rather than just exceptionally fast calculators, splitting into two camps: a “rules-based” approach and a “neural networks” approach. To illustrate the difference, consider the challenge of teaching a computer to correctly identify a cat among thousands of digital images. A rules-based system might instruct the computer that a cat has (1) four legs, (2) a round face and (3) two triangular, pointy ears sitting on top of that round face. The computer would label any image meeting all of those criteria a cat; if any criterion is missing, it is “not a cat”. The neural network approach would mimic early human learning, feeding the computer millions of images of cats and “not cats” and then letting the computer decide what parameters most successfully picked out cats, and only cats, from the multitude. As the computer performed its task, correctly and sometimes incorrectly, day by day, it would continue to refine its expertise at identifying cats. Neural network computing lets the computer “learn” through observation and experience, much as a human child does. The big problem with neural network AI is that it requires massive amounts of data and massive amounts of processing power.
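To make the contrast concrete, here is a minimal Python sketch. Everything in it is invented for illustration: the feature names, the hand-written rules and the simple weight-update scheme are stand-ins, not code from any real cat-detection system.

def rules_based_is_cat(features):
    # Rules-based approach: a hand-written checklist supplied by the programmer.
    # If any rule fails, the image is "not a cat"; the program never improves on its own.
    return (features["legs"] == 4
            and features["face_shape"] == "round"
            and features["ear_shape"] == "triangular"
            and features["ears_on_top"])

def train_learned_classifier(labeled_examples, epochs=20, learning_rate=0.1):
    # Neural-network-style approach, boiled down to a single weighted unit:
    # start with no opinion, look at many labeled examples, and nudge the weights
    # after every mistake so the program gradually works out what a cat looks like.
    weights = {"legs": 0.0, "roundness": 0.0, "ear_pointiness": 0.0}
    bias = 0.0
    for _ in range(epochs):
        for features, is_cat in labeled_examples:
            score = bias + sum(weights[k] * features[k] for k in weights)
            error = (1 if is_cat else 0) - (1 if score > 0 else 0)
            for k in weights:                       # adjust toward the correct answer
                weights[k] += learning_rate * error * features[k]
            bias += learning_rate * error
    return weights, bias

The first function can only ever be as good as the rules it was given; the second gets better (or worse) depending on the examples it sees, which is why data and processing power matter so much.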
Over the past 30 years, computing power has indeed increased massively. (Your smartphone’s processing power is millions of times greater than that of the NASA computers that guided Apollo 11 to the moon.) AI developers have capitalized on this windfall to write programs that defeat human champions of the world’s two most widely played strategy games. In 1997, IBM’s chess-playing AI program Deep Blue defeated world chess champion Garry Kasparov in a match dubbed “The Brain’s Last Stand”. Nevertheless, Deep Blue was a rather primitive rules-based system that combined heuristics (strategy rules of thumb) supplied by chess masters with the computer’s ability to evaluate hundreds of millions of positions per second. Basically, Deep Blue was a chess-playing sledgehammer, not a savant.
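The skeleton of that sledgehammer approach is a game-tree search steered by hand-written scoring rules. The sketch below is generic and greatly simplified, assuming the caller supplies evaluate, legal_moves and apply_move functions of their own; Deep Blue’s real engine ran on custom hardware with a far richer evaluation function.

def minimax(position, depth, maximizing, evaluate, legal_moves, apply_move):
    # Look `depth` moves ahead and return the best heuristic score reachable.
    # `evaluate` encodes the experts' rules of thumb (material, king safety, ...);
    # the search itself is brute force, trying every legal reply to every move.
    moves = legal_moves(position)
    if depth == 0 or not moves:
        return evaluate(position)       # fall back on the hand-written heuristics
    scores = (minimax(apply_move(position, m), depth - 1, not maximizing,
                      evaluate, legal_moves, apply_move) for m in moves)
    return max(scores) if maximizing else min(scores)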
In 2012, a team of AI scientists led by Geoffrey Hinton demonstrated a breakthrough in training many-layered neural networks to detect patterns, an approach that became known as “deep learning”. Building on deep learning, Google’s DeepMind unit developed a program called AlphaGo with the goal of defeating the world’s best players of Go. Go is an ancient Chinese game in which two players place black or white stones on a 19-by-19 grid, each trying to surround the opponent’s stones and control more of the board. While simple to describe, Go is deceptively complex: the number of possible positions on a Go board exceeds the number of atoms in the observable universe! In 2016, with 280 million Chinese viewers watching, AlphaGo defeated Korean Go master Lee Sedol. One year later, AlphaGo defeated the reigning world champion, China’s Ke Jie. In its victory, AlphaGo showed the ability to recognize and counter every shift in Ke Jie’s strategy. In the words of Kai-Fu Lee, AlphaGo systematically “dismantled” the world champ.
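The “deep” in deep learning simply means many stacked layers of learned weights, each layer building on the patterns found by the one below it. As a toy illustration (assuming the numpy library; the tiny network, learning rate and XOR task here are arbitrary choices, and AlphaGo’s networks were vastly larger and trained on Go positions), here is a two-layer network that learns a pattern no single-layer model can capture.

import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # four input patterns
y = np.array([[0], [1], [1], [0]], dtype=float)               # the target pattern (XOR)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # hidden-layer weights
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # output-layer weights
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(10000):
    hidden = sigmoid(X @ W1 + b1)                        # forward pass, layer 1
    output = sigmoid(hidden @ W2 + b2)                   # forward pass, layer 2
    grad_out = (output - y) * output * (1 - output)      # backward pass: output error
    grad_hid = grad_out @ W2.T * hidden * (1 - hidden)   # error pushed back to layer 1
    W2 -= 0.5 * hidden.T @ grad_out                      # nudge every weight a little
    b2 -= 0.5 * grad_out.sum(axis=0)
    W1 -= 0.5 * X.T @ grad_hid
    b1 -= 0.5 * grad_hid.sum(axis=0)

print(output.round(2))   # should drift toward [0, 1, 1, 0] as the weights adjust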
Where does AI development go from here, and what jobs are at risk from the newly smart machines? We will take up those questions in our next blog post, The Fourth Tsunami (Part 2).