The computer can't tell you the emotional story. It can give you the exact mathematical design, but what's missing is the eyebrows. – Frank Zappa
A short history of artificial intelligence
As recently as three years ago, an online job search would have returned very few AI-titled jobs. Now there are hundreds. What changed?
By Isaac Cheifetz
Analytics has never been sexier in the world of business. Big data, artificial intelligence (AI) and machine learning are all terms that fill executives with excitement at their potential, or with dread at falling behind.
Yet as recently as three years ago, an online job search would have returned very few AI-titled jobs. Now there are hundreds. What changed? What do these terms mean, really? How seriously should they be taken?
AI is essentially 60 years old. From 1960 on, top scientists, particularly at MIT, Carnegie Mellon and Stanford, began attempting to replicate human intelligence. The research was funded by DARPA, the Pentagon's research funding arm. If you were spending billions on defense, and the smartest people out there said they were five years from building autonomous robots, giving them millions to prove it seemed like a bargain. Research continued through the decades but was not taken seriously by industry.
Here are four of the principal methods of artificial intelligence and their historical impact:
Expert systems. In the '70s and '80s, researchers attempted to extract the knowledge of human experts and embed it in "rule-based systems" that would act as digital experts.
The approach worked, up to a point. Expert systems could do a credible job solving narrow, complex problems that responded well to decision trees. They were able to optimize the mapping of complex computer networks and, interestingly, were as good as doctors at diagnosing patients at the point of intake — most medical diagnosis follows a narrow (but deep) list of possibilities to eliminate or explore until a diagnosis is made.
But for seemingly simpler tasks, expert systems faltered. It turns out that the number of implicit rules in even simple human activities is vast and difficult to document, let alone replicate.
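For readers who want to see what a rule-based system looks like in practice, here is a toy sketch in Python. The symptoms, rules and diagnoses are invented for illustration and are far simpler than anything a real expert system of the era contained.

```python
# A toy rule-based "expert system": hand-written rules map observed
# symptoms to a candidate diagnosis, walking a narrow decision list.
# The symptoms and conditions here are invented for illustration.

RULES = [
    # (required symptoms, diagnosis suggested when all are present)
    ({"fever", "cough", "fatigue"}, "possible flu"),
    ({"fever", "stiff neck"}, "refer immediately: possible meningitis"),
    ({"sneezing", "runny nose"}, "likely common cold"),
]

def diagnose(symptoms):
    """Return the first diagnosis whose rule is fully satisfied."""
    for required, diagnosis in RULES:
        if required <= symptoms:          # all required symptoms observed
            return diagnosis
    return "no rule matched: escalate to a human expert"

if __name__ == "__main__":
    print(diagnose({"fever", "cough", "fatigue"}))   # -> possible flu
    print(diagnose({"headache"}))                    # -> no rule matched ...
```

The brittleness described above shows up immediately: every rule must be written by hand, and anything the rules don't cover falls through to the last line.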
Classic machine learning. As it became apparent that computers could not be force-fed information and respond consistently with actionable intelligence, more ambitious efforts were made to create computers that would learn from experience and become incrementally smarter, like a child learning.
Here too, it quickly became apparent that human cognition is complex to the point that we really don't understand it, let alone how to model it. So artificial intelligence was viewed as a noble failure.
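The core idea, improving incrementally from examples rather than from hand-written rules, can be sketched with a perceptron, one of the earliest learning algorithms. The tiny dataset below (an AND gate) is purely illustrative; the point is the update loop, not the problem.

```python
# A minimal perceptron: the classic "learn from experience" loop.
# Each time it sees an example, it nudges its weights toward the
# correct answer, becoming incrementally better at the task.

def train_perceptron(examples, epochs=10, lr=0.1):
    w = [0.0, 0.0]   # weights, one per input
    b = 0.0          # bias
    for _ in range(epochs):
        for (x1, x2), target in examples:
            prediction = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            error = target - prediction
            # Nudge the weights in the direction that reduces the error.
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            b += lr * error
    return w, b

if __name__ == "__main__":
    and_gate = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    w, b = train_perceptron(and_gate)
    for (x1, x2), target in and_gate:
        out = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
        print((x1, x2), "->", out, "(expected", target, ")")
```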
But there was always a sense that the research investments in AI were paying off. The efforts at defining rules led to the development of Object Oriented Programming, a foundational element of modern programming. And rule-based systems became subsumed under the category of algorithms, central to data science today.
An old joke in the AI community alluded to this pattern: "AI is the stuff we can't do yet." Still, the goal of autonomous intelligence was not close to being realized.
Behavioral robotics. As it became clear that modeling the operations of human brains was beyond current science, a professor at MIT, Rodney Brooks, turned the problem on its head: Why not apply machine learning to imitating the simplest of creatures — bugs, even amoebas? Those are simple enough to understand.
This "bottom up" vs. "top down" approach to artificial intelligence proved successful and dovetailed nicely into research in the '80s and '90s on complexity theory, the study of how simplicity naturally coalesces into complexity in nature.
Deep learning. This brings us to artificial intelligence today, and its chief success, deep learning. Deep learning takes advantage of the low cost of computing and the massive amount of data produced by the internet and connected devices. Where machine learning and neural networks were brittle 30 years ago and produced shallow results, deep learning now allows systems to run far more iterations and learn more rigorous lessons from the data.
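As a minimal sketch rather than a production system: the tiny two-layer network below (written with numpy) learns the XOR function only because it is allowed to run thousands of cheap training iterations, which is the same economics, in miniature, that deep learning exploits at vastly greater scale.

```python
# A tiny two-layer neural network trained by gradient descent.
# The point is the loop: with cheap compute, the network can run
# thousands of passes over the data, which is what lets modern
# models succeed where shallow 1980s-era networks stayed brittle.
import numpy as np

rng = np.random.default_rng(0)

# XOR: a problem a single-layer perceptron famously cannot learn.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two layers of weights, initialized with small random values.
W1 = rng.normal(scale=0.5, size=(2, 8))
b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(10000):                     # many cheap iterations
    h = sigmoid(X @ W1 + b1)                  # hidden layer
    out = sigmoid(h @ W2 + b2)                # output layer
    # Backpropagate the error and nudge every weight slightly.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

# Predictions should end up close to [[0], [1], [1], [0]].
print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))
```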
As for the question you are all really pondering: Will computers become our benevolent overlords? They are unlikely to become our overlords, but if they do, they are unlikely to be benevolent.
Isaac Cheifetz is a Minneapolis-based executive search consultant focused on leadership roles in analytics and digital transformation. Go to catalytic1.com to read past columns or to contact him.