More than half of U.S. companies expanded their plans to adopt artificial intelligence during the COVID-19 pandemic, according to PwC. The vast majority (86%) said they now consider AI a “mainstream” technology.
It took decades to reach this point. Today’s AI business applications and capabilities—which organizations use to automate processes, improve decision-making through data analytics, and engage more effectively with customers and employees—emerged from countless, often interconnected advances in computing, software programming, robotics, and research.
Here are several of the most significant moments in the evolution of enterprise AI.
1974: Expert system outperforms doctors
Mycin, an early form of AI known as an “expert system,” uses 450 if/then rules to diagnose bacterial blood infections. Developed by Stanford University researchers, Mycin is more accurate than medical students or practicing doctors.
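The "if/then rules" at the heart of an expert system can be illustrated with a tiny rule engine. Mycin itself used backward chaining with certainty factors; the sketch below shows the simpler forward-chaining flavor, and its rules and symptom names are invented for illustration, not taken from Mycin.

```python
# Toy expert system: each rule is (set of premises, conclusion).
# These rules are hypothetical examples, not actual Mycin rules.
rules = [
    ({"gram_negative", "rod_shaped", "aerobic"}, "pseudomonas_suspected"),
    ({"fever", "elevated_wbc"}, "infection_likely"),
    ({"infection_likely", "pseudomonas_suspected"}, "recommend_antibiotic"),
]

def forward_chain(facts, rules):
    """Repeatedly fire any rule whose premises are all known facts,
    adding its conclusion, until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

observed = {"fever", "elevated_wbc", "gram_negative", "rod_shaped", "aerobic"}
derived = forward_chain(observed, rules)
```

Note how the third rule only fires after the first two have added their conclusions; chaining hundreds of such rules is what let a system like Mycin reach non-obvious diagnoses.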
1980: First neural network for computer vision
Japanese computer scientist Kunihiko Fukushima proposes the Neocognitron, an early computer-vision system modeled on the human visual cortex. The system is considered the first convolutional neural network, the basis for later advances in computer vision, text recognition, and natural language processing (NLP).
1986: Automated product configuration
Digital Equipment Corporation saves an estimated $40 million per year with one of the first business applications of expert systems. DEC’s program, introduced in 1980 and called “R1,” determines the optimal product configurations for the company’s computer systems, relieving sales teams of manual analysis and other tasks.
1997: Deep Blue beats Kasparov
IBM’s supercomputer, Deep Blue, defeats world chess champion Garry Kasparov in a six-game rematch, a year after losing their first match. Capable of evaluating 200 million chess positions per second, Deep Blue validates the premise that computer systems trained on very specific tasks can outperform humans, and influences machine learning techniques used in financial modeling, risk analysis, data mining, and other enterprise use cases.
2005: An early win for driverless cars
“Stanley,” an autonomous test vehicle designed by a team of Stanford researchers, wins a 132-mile driverless-car race sponsored by the Defense Advanced Research Projects Agency, or DARPA. The technologies (and engineering talent) showcased in the competition influence subsequent AV development at Google, Waymo, and other companies; related applications for the enterprise later emerge in manufacturing, mining, retail, and other sectors.
2012: Large-scale image recognition
ImageNet, a first-of-its-kind database of more than 3 million labeled images developed by a team led by Stanford computer scientist Fei-Fei Li, helps train computers to understand visual information. The team launches an annual competition in which researchers train their own computer-vision algorithms against the database; in 2012, a deep convolutional neural network wins by a wide margin. The project paves the way for deep-learning advances in autonomous vehicles and facial recognition.
2013: NLP-powered product recommendations
A Google research team creates “word2vec,” an NLP technique that learns vector representations of words from documents, capturing their semantics and syntax. The technique becomes the basis for subsequent business applications such as automating survey responses and recommending products.
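The core idea behind word2vec is that a word's vector is learned by predicting the words around it, so words used in similar contexts end up with similar vectors. The toy skip-gram sketch below, written in plain NumPy on an invented two-sentence corpus, is a minimal illustration of that training loop, not Google's implementation (which adds negative sampling, subsampling, and large-scale training).

```python
import numpy as np

# Toy corpus; real word2vec trains on billions of tokens.
corpus = [
    "the cat sat on the mat".split(),
    "the dog sat on the rug".split(),
]
vocab = sorted({w for sent in corpus for w in sent})
idx = {w: i for i, w in enumerate(vocab)}
V, D = len(vocab), 8  # vocabulary size, embedding dimension

rng = np.random.default_rng(0)
W_in = rng.normal(0, 0.1, (V, D))   # word (input) embeddings
W_out = rng.normal(0, 0.1, (D, V))  # context (output) weights

# Skip-gram training pairs: (center word, neighboring word), window = 1.
pairs = []
for sent in corpus:
    for i, w in enumerate(sent):
        for j in (i - 1, i + 1):
            if 0 <= j < len(sent):
                pairs.append((idx[w], idx[sent[j]]))

lr = 0.05
for _ in range(300):
    for center, context in pairs:
        h = W_in[center]                   # hidden layer = center embedding
        scores = h @ W_out
        p = np.exp(scores - scores.max())
        p /= p.sum()                       # softmax over the vocabulary
        p[context] -= 1.0                  # cross-entropy gradient w.r.t. scores
        grad_in = W_out @ p
        W_out -= lr * np.outer(h, p)
        W_in[center] -= lr * grad_in

def nearest(word):
    """Most similar other word by cosine similarity of learned embeddings."""
    v = W_in[idx[word]]
    sims = (W_in @ v) / (np.linalg.norm(W_in, axis=1) * np.linalg.norm(v))
    sims[idx[word]] = -np.inf
    return vocab[int(sims.argmax())]
```

Because "cat" and "dog" appear in identical contexts here, their learned vectors are pushed toward each other; that geometry is what later recommendation and survey-analysis systems exploited.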
2020: GPT-3 opens up language-based AI
AI research lab OpenAI releases GPT-3, the world’s most powerful NLP model to date, enabling far more complex interactions with computers using natural language instead of code. Emerging enterprise applications of GPT-3 include near-human-level writing, code generated from plain-language descriptions, and natural-language database search.