Machine learning and AI have huge potential to make human decision-making in government vastly more intelligent and efficient—and to automate old, manual processes that bottleneck many public agencies. Problem is, it’s not nearly as easy to be an early adopter in the government setting as it is in our app-driven consumer lives. That’s because decision-makers don’t yet have sufficient confidence in the underlying data and machine-generated outputs to trust that they—or the machines—will make the right calls.
Just about everywhere in private industry, machine learning has become a powerful new capability to help businesses make better decisions—whether it’s helping figure out whom to consider for a job, identifying unforeseen cybersecurity threats, replacing critical parts on trains before they fail on the job, or countless other applications.
In the realm of government agencies, however, adoption of machine learning and business process automation is off to a slow start. Many agencies still rely on spreadsheets and paper to manage key functions such as licensing and permitting. Even among peers, government workers say they’re behind the curve: 60% of managers at federal agencies say their organizations are behind other public agencies on AI adoption, according to a 2019 Government Business Council survey; 40% say their agencies have no plans to implement AI at all.
Several factors help explain the slow start. Some agencies are still using 1990s technologies or lack the funding to make upgrades. Others may have the right kind of IT architecture in place but lack the ability or tools to organize and normalize the massive data volumes you need to train machine learning models.
Leadership factors in, too. Most government workers are eager to give AI and machine learning a test drive. Yet in a recent Accenture survey of public sector employees, 75% say their bosses haven’t yet explained how AI applications will change their jobs.
The biggest roadblock to AI adoption in government is trust, especially when it comes to using machine learning algorithms to guide decision-making. Public agencies approach this idea very differently than, say, a consumer asking Siri for directions to a restaurant. For starters, some types of decisions in public agencies are strictly reserved for humans. For legal reasons, we can’t hand them off to machines even if that were possible. For example, only a federally sworn official can approve a contract for any good or service that obligates the government to spend public funds.
Trust metrics for decisions
At home, where voice assistants like Alexa and Siri acquire more useful features every day, there’s an interesting dynamic in play: We’re trusting what the machines tell us more and more, and checking their work less and less. We feel more comfortable basing our decisions on their information and recommendations.
In government agencies, machines aren’t getting the same benefit of the doubt because the trust isn’t there yet. I have a diagram called a trust matrix that I use to think through this. It’s a triangle that shows three key elements—risk, urgency, and confidence. These are the key variables that create trust in technology to support a human decision.
Each element influences the others: The higher the risk of making the wrong decision, the higher the confidence level must be in the underlying data used to make the decision. The level of urgency around the decision balances or competes with the level of risk. So in decisions that involve high risk but also high urgency, the urgency may force decision-makers to accept a lower confidence level in the information. Alternatively, risk concerns may trump urgency, leading decision-makers to go with the seemingly safer choice.
All these factors are in play when you’re stuck in traffic and running late for a meeting—and you pull up Waze on your phone and see a detour that promises on-time arrival. You’re dealing with considerable risk (possibly arriving even later if you commit to the detour), high urgency (the meeting starts in 15 minutes), and perhaps a medium level of confidence (you remember the time Waze left you lost on a fire trail). Most consumers don’t let these concerns stop them from using Waze and similar apps. Government managers, on the other hand, tend to obsess about them.
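The interplay among the three elements of the trust matrix can be made concrete with a toy decision rule. This is purely an illustration: the 0-to-1 scales, the threshold formula, and its weights are invented for the sketch, not part of any real framework.

```python
def trust_decision(risk: float, urgency: float, confidence: float) -> str:
    """Illustrative trust-matrix rule; all inputs are on a 0-1 scale.

    Higher risk raises the confidence bar a decision-maker demands of
    the machine; higher urgency lowers that bar, mirroring the
    trade-offs described above. The coefficients are arbitrary.
    """
    required_confidence = 0.5 + 0.4 * risk - 0.3 * urgency
    if confidence >= required_confidence:
        return "act on the machine's recommendation"
    return "defer to human review"

# Waze-style scenario: considerable risk, high urgency, medium confidence.
print(trust_decision(risk=0.7, urgency=0.9, confidence=0.6))
# Same confidence, but high risk and no urgency: the safer choice wins.
print(trust_decision(risk=0.9, urgency=0.1, confidence=0.6))
```

In the first call, urgency discounts the required confidence enough that a medium confidence level clears the bar; in the second, risk concerns trump urgency and the rule defers to a human, just as the article describes.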
While the mass adoption of virtual assistants in government is still a ways off, there are many use cases emerging for machine learning in government, where machines can help manage trust dynamics and allow people to make better decisions more efficiently.
At the National Oceanic and Atmospheric Administration, engineers are using deep learning techniques to train computers to identify hurricanes from an overwhelming volume of weather and satellite data, and to make predictive forecasts about how they will behave—before and after they land.
Military readiness is another promising use case for AI. Before units are deployed to support a given operation, they must attain specific levels of readiness. Determining readiness takes into account a multitude of factors, from individual health and training to equipment readiness and the number of vehicles that are online or still in the maintenance shop.
Unit readiness reports are still generated manually today, which makes it difficult for U.S. Department of Defense leaders to know in real time what the readiness level is of all the units across the entire military.
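The manual roll-up described above is, at its core, an aggregation problem. As a hedged sketch of what an automated version might compute, here is a minimal model with hypothetical units, factors, and an unweighted average; a real readiness formula would be far more involved.

```python
from dataclasses import dataclass

@dataclass
class Unit:
    """Hypothetical unit with the readiness factors named in the article."""
    name: str
    personnel_ready: float   # fraction of personnel ready (health, training)
    equipment_ready: float   # fraction of equipment mission-capable
    vehicles_online: int     # vehicles available vs. in the maintenance shop
    vehicles_total: int

    def readiness(self) -> float:
        # Simple unweighted average of the three factors, for illustration.
        vehicle_ready = self.vehicles_online / self.vehicles_total
        return (self.personnel_ready + self.equipment_ready + vehicle_ready) / 3

units = [
    Unit("1st Battalion", 0.92, 0.85, 45, 50),
    Unit("2nd Battalion", 0.78, 0.70, 30, 50),
]
for u in units:
    print(f"{u.name}: {u.readiness():.0%} ready")
```

Even this trivial roll-up shows why automation appeals here: once each unit's factors flow in as data rather than paper reports, leadership can see every unit's readiness level recomputed in real time.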
That’s a job AI can handle well, since it’s good at if-then trend analysis and can answer the complex questions involved, such as: What is the right amount of time for which type of deployment? When units return, how long will it take to get refitted without additional resources? At the Pentagon’s new Joint Artificial Intelligence Center, launched in early 2019, staff are experimenting with AI not just for predictive maintenance and unit readiness, but for humanitarian aid and disaster relief, cybersecurity, and robotic process automation.
Machine learning and AI also have big potential in the federal budgeting process. Federal agencies are required to forecast their expenses five years out—a process managed by the Office of Management and Budget. Most of this forecasting is done manually, but all of that calculus could be run through a machine learning algorithm—by program, by agency, and by legislation—to give government CFOs a holistic view of program costs versus incoming revenue. Massive amounts of waste can be eliminated if agencies begin to make this shift.
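To make the forecasting idea concrete, here is a deliberately simple sketch: a linear trend fit to hypothetical annual program expenses, projected five years out. The figures are invented for illustration, and a real system would use richer models and features per program, agency, and piece of legislation.

```python
import numpy as np

# Hypothetical annual expenses for one program, in millions of dollars.
years = np.array([2015, 2016, 2017, 2018, 2019])
expenses = np.array([120.0, 126.0, 131.0, 139.0, 145.0])

# Fit a straight-line trend and project it five years forward.
slope, intercept = np.polyfit(years, expenses, 1)
for year in range(2020, 2025):
    print(f"{year}: ${slope * year + intercept:,.1f}M")
```

Running the same fit across every program and rolling the projections up is what would give a CFO the holistic costs-versus-revenue view the article describes.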
Getting public agencies on board with machine learning won’t happen overnight. The good news is that it won’t take huge capital investments to make it happen, and most of the data required is already available.
All it takes is trust. But even there, the metrics are promising: While there is clearly some risk involved, the urgency level rises every day. And the early success of these technologies in the private sector should give government decision-makers the confidence they need to dive in.