ARTICLE | DECEMBER 8, 2021 | 2 MIN READ
Artificial intelligence through the decades
More than half of U.S. companies expanded their plans to adopt artificial intelligence during the COVID-19 pandemic, according to PwC. The vast majority (86%) said they now consider AI a “mainstream” technology.
It took decades to reach this point. Today’s AI business applications and capabilities—which organizations use to automate processes, improve decision-making through data analytics, and engage more effectively with customers and employees—emerged from countless, often interconnected advances in computing, software programming, robotics, and research.
Here are several of the most significant moments in the evolution of enterprise AI.
Mycin, an early form of AI known as an “expert system,” uses 450 if/then rules to diagnose bacterial blood infections. Developed by Stanford University researchers, Mycin is more accurate than medical students or practicing doctors.
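The if/then style of reasoning that Mycin pioneered can be sketched in a few lines. The rules, findings, and certainty values below are invented for illustration; they are not Mycin's actual medical knowledge base, which encoded expert judgment in hundreds of rules with certainty factors.

```python
# A toy illustration of an if/then "expert system" in the style of Mycin.
# Rules and certainty factors here are invented for demonstration only.

RULES = [
    # (required findings, conclusion, certainty factor)
    ({"gram_positive", "coccus", "chains"}, "streptococcus", 0.7),
    ({"gram_positive", "coccus", "clusters"}, "staphylococcus", 0.7),
    ({"gram_negative", "rod", "aerobic"}, "pseudomonas", 0.6),
]

def diagnose(findings):
    """Fire every rule whose conditions are all present in the findings."""
    conclusions = [(concl, cf) for conds, concl, cf in RULES
                   if conds <= findings]
    return sorted(conclusions, key=lambda c: -c[1])

print(diagnose({"gram_positive", "coccus", "chains"}))
# → [('streptococcus', 0.7)]
```

Because the knowledge lives in declarative rules rather than procedural code, systems like Mycin could explain their conclusions by listing the rules that fired, a property that made expert systems attractive for early business use.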
Japanese computer scientist Kunihiko Fukushima proposes the Neocognitron, an early computer-vision system modeled on the human visual cortex. The system is widely regarded as the forerunner of convolutional neural networks, the basis for later advances in computer vision, text recognition, and natural language processing (NLP).
Digital Equipment Corporation saves an estimated $40 million per year with one of the first business applications of expert systems. DEC’s program, released in 1980 and called “R1,” determines the optimal product configurations for the company’s computer systems, relieving sales teams of manual analysis and other tasks.
IBM’s supercomputer, Deep Blue, beats chess world champion Garry Kasparov in a six-game rematch, a year after losing their first match. Capable of evaluating 200 million chess positions per second, Deep Blue validates the premise that computer systems trained on very specific tasks can outperform humans, and influences machine learning techniques used in financial modeling, risk analysis, data mining, and other enterprise use cases.
“Stanley,” an autonomous test vehicle designed by a team of Stanford researchers, wins a 132-mile driverless-car race sponsored by the Defense Advanced Research Projects Agency, or DARPA. The technologies (and engineering talent) showcased in the competition influence subsequent autonomous-vehicle development at Google, its spinoff Waymo, and other companies; related enterprise applications later emerge in manufacturing, mining, retail, and other sectors.
ImageNet, a first-of-its-kind database of more than 3 million images developed by a team led by Stanford computer scientist Fei-Fei Li, helps train computers to understand visual information. The team also launches a competition in which researchers train their own computer-vision algorithms against the database. The project paves the way for deep-learning advances in autonomous vehicles and facial recognition.
A Google research team creates “word2vec,” a natural language processing technique that learns vector representations of words from their surrounding context, capturing semantics and syntax. The technique becomes the basis for subsequent business applications such as automating survey responses and recommending products.
AI research lab OpenAI releases GPT-3, at 175 billion parameters the most powerful NLP model released to date, enabling much more complex interactions with computers using language instead of code. Emerging enterprise applications of GPT-3 include near-human-level writing, computer-generated code based on user descriptions, and natural-language database search.