Consumer-facing AI applications, such as Netflix’s and Amazon’s recommendation engines, train on large data sets that often involve hundreds of millions of users. They rely on complex, often opaque algorithms, trained once or periodically, to recommend programs or products.
Enterprise AI, by contrast, often tackles problems with far more limited training data, such as improving the experiences of employees. The issues it solves are more nuanced and tend to be specific to an industry, or even a single company.
The needs of a utilities company’s customer-service chatbot, for example, will likely not overlap much with those of a healthcare organization’s chatbot. Hence, an enterprise AI system must be able to quickly and efficiently learn to understand the nuances of a given application and industry, even with small quantities of data.
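To make the small-data point concrete, consider a minimal sketch of a domain-specific intent matcher. The intents, example utterances, and function names below are hypothetical, and real enterprise chatbots use far more sophisticated models; the sketch only illustrates that a narrow, industry-specific assistant can be bootstrapped from a handful of labeled examples rather than millions of user records.

```python
# Hypothetical training data: a few labeled utterances per intent is all
# a narrow, utilities-specific assistant may have to start from.
UTILITY_INTENTS = {
    "report_outage": [
        "the power is out on my street",
        "no electricity since this morning",
    ],
    "billing_question": [
        "why is my bill so high this month",
        "question about my latest invoice",
    ],
    "start_service": [
        "set up electricity at my new address",
        "begin service at a new home",
    ],
}

def tokens(text):
    """Lowercase and split a sentence into a set of word tokens."""
    return set(text.lower().split())

def classify(query, intents):
    """Return the intent whose example utterance best matches the query,
    scored by Jaccard similarity of word sets."""
    best_intent, best_score = None, 0.0
    q = tokens(query)
    for intent, examples in intents.items():
        for example in examples:
            e = tokens(example)
            union = q | e
            score = len(q & e) / len(union) if union else 0.0
            if score > best_score:
                best_intent, best_score = intent, score
    return best_intent
```

A healthcare organization would swap in entirely different intents and examples, which is the sense in which these systems are purpose-built rather than trained once on a universal data set.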
These purpose-built, “low-data learning” AI solutions are typically deployed in tactical settings, such as decision support, and must conform to high standards of robustness, interpretability, and reproducibility to build trust among the decision makers they support.
Enterprise AI applications must often meet strict regulatory requirements, especially if they rely on consumer data, influence hiring decisions, or are deployed in sensitive industries such as healthcare and financial services. Increasingly, these AI systems are expected to be transparent and explainable, so that users can understand how predictions are made or decisions reached.