Can AI be virtuous?

A conversation with Harvard ethicist Jessica Fjeld

Businesses and governments around the world face a complex challenge: How should they implement artificial intelligence (AI) in ways that respect human rights, avoid bias, incorporate diverse perspectives, and yield safe, socially beneficial products?

In recent years, numerous AI “principles documents” have emerged from governments, private companies, and advocacy groups. Researchers at Harvard’s Berkman Klein Center for Internet & Society studied 36 of these documents. They found significant consensus around core issues such as privacy, transparency, and bias. In January they published their findings in a white paper and in a graphic model of ethical AI principles.

Workflow sat down with Jessica Fjeld, the study’s lead author and assistant director at the Berkman Klein Center’s Cyberlaw Clinic, for her thoughts about moving socially responsible AI from theory to practice.

What was your biggest takeaway from the study?

That there was such a convergence around general themes. When we started, there were principles documents coming out hot and heavy from the private sector, governments, and advocacy groups. It wasn’t clear where the common threads were. The meta-chatter was that responsible AI was not a field anyone had figured out yet.

On what issues was there the most convergence?

It was surprising to see so much recognition that there should be accountability for AI, especially from the private sector. We also collected information on whether the documents referenced human rights. Our hypothesis had been that governments would be more likely to reference human rights and private sector organizations less likely. We were wrong.

How did the private sector prove you wrong?

It’s been increasingly clear that the tech sector has a strong impact on human rights. Tech companies are starting to embrace that and internalize those functions. We’re starting to see the results of several decades of organizing on digital rights and privacy now informing how companies are approaching AI.


How are companies starting to put ethics into practice?

There is a lot of work to be done, but I see some organizations using AI impact assessments as a way of ensuring new services are responsible and respect human rights. For example, Google recently released its first facial recognition tool, which allows enterprise customers to recognize the faces of famous people in press photos.

Google did a human rights impact assessment before releasing the technology, and released a summary of that assessment with the product announcement. Impact assessments like that are going to be huge in terms of helping organizations get their arms around what the challenges are, because the impacts will vary tremendously.

What goes into an impact assessment?

Typically, organizations bring in outside consultants. A consulting firm called Business for Social Responsibility (BSR) did the Google assessment. They come in and gather information via documents and interviews. Then they analyze the relevant legal issues and produce a report that executives use to make actionable decisions.

How can other companies do this on their own?

For startups, it can be a big challenge. Academics can be a good resource for early-stage startups, particularly ones that have a strong social justice or social benefit mission. Last year we ran a program called the Challenges Forum where we invited organizations of all sizes to present their ethical issues with AI to panels of scholars and other experts.

How can companies maintain transparency in AI development?

One aspect is transparency about the use of AI tools. People want to know when they are interacting with a chatbot versus a human. They also want to understand when AI is being used to make decisions that affect them—for example, an algorithm that decides whether or not to approve a loan.
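As a rough illustration of that first aspect, the sketch below shows how a service might attach disclosure metadata to an automated loan decision, so the applicant is told that an algorithm made the call and how to reach a human reviewer. The field names and decision rule here are hypothetical, not drawn from any particular company's practice.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class LoanDecision:
    """An automated loan decision packaged with disclosure metadata."""
    approved: bool
    # Disclosure fields: the applicant is told an algorithm made the decision,
    # which model version was involved, and where to appeal to a human reviewer.
    automated_decision: bool = True
    model_version: str = "credit-risk-v1"            # hypothetical identifier
    decided_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )
    appeal_contact: str = "loan-review@example.com"  # hypothetical contact

def decide_loan(credit_score: int, threshold: int = 650) -> LoanDecision:
    """Toy decision rule; the point is the disclosure wrapper, not the model."""
    return LoanDecision(approved=credit_score >= threshold)

if __name__ == "__main__":
    print(decide_loan(credit_score=700))
```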

The second aspect is transparency of the tools themselves. Whether they’re building or acquiring AI tools, companies should demand that the tools come with code that allows engineers to understand the decisions they make and the data that informs those decisions.
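And as a rough illustration of that second aspect, here is a minimal Python sketch of the kind of record that lets engineers see how a model reached an individual decision. It uses a simple logistic regression over made-up applicant features so the per-feature contributions are directly readable; the data, feature names, and model are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Made-up training data: [income, credit_score, debt_ratio] per past applicant,
# with a label indicating whether the loan was repaid.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 1] - X[:, 2] + rng.normal(scale=0.5, size=200) > 0).astype(int)
feature_names = ["income", "credit_score", "debt_ratio"]

# A deliberately interpretable model: each coefficient is a readable statement
# about how a feature pushes the decision toward approval or denial.
model = LogisticRegression().fit(X, y)

# Explain one decision: the per-feature contribution (coefficient x value)
# is the kind of record engineers need to account for an individual outcome.
applicant = np.array([[0.2, -1.1, 0.8]])
contributions = model.coef_[0] * applicant[0]
for name, value in zip(feature_names, contributions):
    print(f"{name:>12}: {value:+.3f}")
print(f"approval probability: {model.predict_proba(applicant)[0, 1]:.2f}")
```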

How confident are you that companies will always put principle over profit when it comes to AI?

I trained as a corporate lawyer, so I know that corporations have a duty to their shareholders to maximize profit. I think the more important question is, on what timeframe is that duty relevant?

Corporations often feel short-term pressure to maximize profit. This can lead them to deprioritize concerns like ethical conduct and building trust with customers. However, I also see more organizations emphasizing ethics as part of business strategy.

If we’re going to succeed long term, it won’t be because we treated it as an oppositional choice between shareholder profits and ethical AI. It will be because organizations recognize that responsible AI is best for their shareholders as well as for the planet and for humanity.