AI is moving fast—so fast that companies might be deploying it before setting up the controls that help manage the risks. As a general counsel, I’ve experienced firsthand how AI can drive efficiency, unlock new opportunities, improve experiences, and create value. But I also understand how quickly things can go wrong when governance is an afterthought.
AI isn’t just another software tool—it’s a business imperative and a powerful asset that’s changing the way we work. But if you don’t manage it properly, it’s also a business risk, a potential legal minefield, and a reputational time bomb. AI is still new, and there aren’t yet settled standards for governing it. But here are some things every company should be thinking about to govern AI responsibly without stifling innovation.
AI can’t be everyone’s responsibility because that means it’s no one’s responsibility. Assign clear ownership. And remember that the team developing the AI applications is the first line of defense in managing the risk. Product leaders can’t delegate those governance responsibilities away.
Key considerations:
- Create an AI governance team with product, legal, compliance, IT, and business leaders.
- Assign an owner of your AI governance program (chief AI officer, general counsel, head of compliance).
- Set up a reporting process. If AI makes a mistake, who owns it? Who fixes it? Who answers to regulators?
Why it matters: AI can make decisions that impact customers, employees, and regulators. If no one owns governance, problems will slip through the cracks and small risks can turn into big problems.
Regulators are catching up to AI’s risks, fast. You don’t want to be caught off guard.
Key considerations:
- Stay on top of laws such as the European Union’s (EU) AI Act and General Data Protection Regulation (GDPR), the United States’ AI Executive Order, and China’s Cybersecurity Law and New Generation Artificial Intelligence Development Plan.
- Know your industry’s specific AI risks—finance, healthcare, HR, and consumer tech all have different rules.
- Keep audit logs for AI decision-making. You’ll need them if regulators come knocking.
Why it matters: AI regulations are evolving, and penalties for noncompliance can be severe (think GDPR-level fines).
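One of the bullets above calls for audit logs of AI decision-making. As a minimal illustration only (the field names and schema here are my own assumptions, not a regulatory standard), an append-only JSON-lines record of each automated decision might look like this:

```python
import json
import time
import uuid

def log_ai_decision(log_file, model_id, inputs, output, reviewer=None):
    """Append one AI decision record to a JSON-lines audit log.

    Field names are illustrative assumptions, not a standard schema.
    """
    record = {
        "decision_id": str(uuid.uuid4()),  # unique ID for later lookup
        "timestamp": time.time(),          # when the decision was made
        "model_id": model_id,              # which model/version decided
        "inputs": inputs,                  # what the model saw
        "output": output,                  # what it decided
        "human_reviewer": reviewer,        # None if fully automated
    }
    with open(log_file, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record
```

The point of a record like this is traceability: if a regulator asks why a given decision was made, you can reconstruct which model version saw which inputs, and whether a human was in the loop.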
AI makes decisions at scale. If those decisions are biased, your company will make bad calls at scale, with big consequences.
Key considerations:
- Test AI models for bias and fairness before launching them.
- Use broad, representative training data. If your AI is learning from a narrow data set, expect skewed results.
- Set up human oversight for AI decisions, particularly when they affect employment, lending, or customer support.
Why it matters: AI bias isn’t just a legal risk—it’s a brand risk. You don’t want to be the company making headlines for AI that’s gone wrong.
People don’t trust what they don’t understand. If AI is making decisions, make sure you can explain them.
Key considerations:
- Use explainable AI (XAI). Customers and regulators will ask, “What decisions were made by AI, and what were the bases for those decisions?”
- Disclose AI usage. If AI is being used, people should know, especially when AI is deciding anything that could impact a person’s job or finances.
- Give users a way to challenge AI decisions just as they can other decisions—because mistakes happen.
Why it matters: Regulators are pushing for explainability and user control. Get ahead of it now.
AI is only as good as the data it learns from—and that data is a huge security risk.
Key considerations:
- Make sure AI systems comply with the GDPR, the California Consumer Privacy Act (CCPA), and other privacy laws.
- Encrypt AI data and control access. Don’t let internal teams pull AI insights without oversight.
- Set clear data retention and deletion policies. Don’t hoard customer data you don’t need.
Why it matters: AI data breaches will cost you dearly in fines, lawsuits, and lost customer trust.
AI isn’t a set-it-and-forget-it tool; it needs constant monitoring.
Key considerations:
- Run regular audits. AI models drift over time, leading to bad decisions.
- Set up real-time monitoring. Track AI performance and flag anomalies.
- Create an AI incident response plan. What happens if AI makes a catastrophic mistake?
Why it matters: AI is dynamic. If you don’t monitor it, what worked yesterday could fail today.
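The drift-monitoring bullet above can be illustrated with a deliberately simple heuristic: alert when a tracked model metric moves too far from its baseline. This sketch is my own simplification (production monitoring typically uses richer statistical tests, such as the population stability index):

```python
from statistics import mean, stdev

def drift_alert(baseline, recent, z_threshold=3.0):
    """Flag drift when the recent mean of a tracked model metric moves
    more than z_threshold baseline standard deviations from the
    baseline mean. A simple heuristic, not a full drift test.
    """
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return mean(recent) != mu
    z = abs(mean(recent) - mu) / sigma  # how far recent behavior drifted
    return z > z_threshold
```

Even a crude check like this captures the governance point: a model that passed review at launch can quietly degrade, and someone should be alerted before "what worked yesterday" fails in production.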
If AI is part of your business, it should reflect your company’s values.
Key considerations:
- Use AI responsibly. Don’t prioritize automation over fairness.
- Make sure AI is making decisions your company is proud of.
- Keep customers, employees, and regulators in the loop about how AI impacts them.
Why it matters: AI is shaping your company’s reputation. Make sure it’s working for you, not against you.
It might be easy to see AI governance as a roadblock, but the best companies see it as a competitive advantage. Responsible AI isn’t just about avoiding lawsuits; it’s about building trust, improving performance, and staying ahead of the curve.
You don’t need to achieve perfect AI governance overnight, but you do need to start now. Follow these steps, stay informed, and make AI accountability a core part of your business strategy.
Find out how ServiceNow helps put responsible AI to work for people.