Experts are concerned about the potential bias and fairness issues that arise when AI-driven technology makes financial decisions.
By Evan Ramzipoor, Workflow contributor
Financial institutions are quickly adopting AI to expand access to less wealthy, nontraditional customers. However, with this technology come questions about whether AI is inadvertently introducing biases into financial decision-making.
In response, AI researchers have designed guidelines, recommendations, checklists, and other frameworks to ensure the use of AI in finance is fair to customers. Yet that effort has revealed that defining fairness is exceedingly hard. Instead, organizations have focused more on reducing potential harms caused by AI and less on eliminating bias entirely.
The business potential is vast. About 1 in 4 Americans is underbanked and can’t apply for traditional loans. In Mexico, two-thirds of adults don’t have a bank account. Across the African continent, most people don’t have a bank account or credit score.
These communities are considered “underbanked” because they can’t access services like mortgages and credit cards that wealthier consumers take for granted. Such people “lack the traditional identification, collateral, or credit history—or all three—needed to access financial services,” says Margarete Biallas of the International Finance Corporation, a member organization of the World Bank.