The computer can't tell you the emotional story. It can give you the exact mathematical design, but what's missing is the eyebrows. – Frank Zappa
Analytics has never been sexier in the world of business. Big data, artificial intelligence (AI) and machine learning are all terms that fill executives with excitement at their potential, or with dread at falling behind.
Yet as recently as three years ago, an online job search would have returned very few AI-titled jobs. Now there are hundreds. What changed? What do these terms mean, really? How seriously should they be taken?
AI is essentially 60 years old. From 1960 on, top scientists, particularly at MIT, Carnegie Mellon and Stanford, began attempting to replicate human intelligence. The research was funded by DARPA, the Pentagon's research-funding arm. If you were spending billions on defense and the smartest people out there said they were five years away from building autonomous robots, giving them millions to prove it seemed like a bargain. Research continued through the decades but was not taken seriously by industry.
Here are four of the principal methods of artificial intelligence and their historical impact:
Expert systems. In the '70s and '80s, researchers attempted to extract the knowledge of human experts and embed it in "rule-based systems" that would act as digital experts.
The expert system approach worked, up to a point. Expert systems could do a credible job solving narrow, complex problems that responded well to decision trees. They were able to optimize the mapping of complex computer networks and, interestingly, were as good as doctors at diagnosing patients at the point of intake: most medical diagnosis follows a narrow (but deep) list of possibilities to eliminate or explore until a diagnosis is made.
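To make the idea concrete, here is a minimal sketch of a rule-based system in Python. The symptoms, rules and the `diagnose` helper are hypothetical illustrations, not any historical system: a small set of if-these-facts-then-that-conclusion rules is applied repeatedly (forward chaining) until no new conclusions follow.

```python
# Hypothetical triage rules: each rule pairs a set of required facts
# with a conclusion that can be added to the set of known facts.
RULES = [
    ({"fever", "cough"}, "possible respiratory infection"),
    ({"possible respiratory infection", "shortness of breath"}, "refer for chest X-ray"),
    ({"headache", "stiff neck", "fever"}, "urgent: rule out meningitis"),
]

def diagnose(facts):
    """Forward-chain over RULES until the set of known facts stops growing."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= known and conclusion not in known:
                known.add(conclusion)
                changed = True
    return known - set(facts)  # return only the derived conclusions

if __name__ == "__main__":
    print(diagnose({"fever", "cough", "shortness of breath"}))
    # -> {'possible respiratory infection', 'refer for chest X-ray'}
```

Production expert systems of the era worked on the same principle, only with far larger rule bases painstakingly elicited from human specialists.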
But for seemingly simpler tasks, expert systems faltered. It turns out that the implicit rules behind even simple human activities are vast in number and difficult to document, let alone replicate.