To avoid getting caught flat-footed, executives need to start thinking about what AI governance looks like in their own businesses.
By John Castelly, chief ethics & compliance officer, ServiceNow
Every business leader knows they need generative AI. Nearly 80% consider it the top emerging technology of the next few years, according to KPMG. But few are embracing AI governance with the same fervor.
As the Biden administration’s recent executive order underscores, governments are getting serious about regulating AI. The order aims to promote the “safe, secure, and trustworthy development and use of artificial intelligence” and directs the National Institute of Standards and Technology (NIST) to develop guidelines and standards to make AI adoption more ethical and effective. Meanwhile, the European Union passed the comprehensive EU AI Act, which classifies AI systems according to the risk they pose to users and directs companies to mitigate those risks.
This is a cycle we’ve seen before with environmental, social, and governance (ESG) issues and with emerging technologies like cryptocurrency. Regulation begins slowly and amorphously before governments establish a coherent approach. But just because there’s historical precedent doesn’t mean we’re ready for what’s coming. To avoid getting caught flat-footed, executives need to start thinking now about what AI governance looks like in their own businesses.