As organizations begin using AI and ML to identify sets of personal data, ethical challenges arise.
By Evan Ramzipoor, Workflow contributor
Data privacy is a moving target. Organizations are taking in increasingly vast amounts of data—on customers, partners, users, vendors, employees. At the same time, local, national, and international laws governing data privacy and protection are changing almost constantly.
Against this backdrop, Mark Cockerill, vice president of legal in EMEA and head of global privacy at ServiceNow, says we lack a global consensus on what data privacy is and how to approach it. “People are shaped by their experiences,” he says, “so various countries and regions have different ideas of how important data privacy is, or what it even means.”
However, security teams and analysts do agree that companies must invest in a data privacy infrastructure that incorporates “privacy by design.” Rather than taking in data and attempting to secure it post hoc, privacy by design builds data privacy protection into the data collection process.
To that end, organizations are using tools like artificial intelligence (AI) and machine learning (ML) that parse and secure vast quantities of data as it’s collected. But how does that work? Why does it matter? When machines come in contact with human data, what ethical questions are raised?
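For illustration, here is a minimal sketch of what privacy by design can look like in practice: screening records for personal data as they are ingested, before anything reaches storage. The field names, patterns, and redact_pii helper are hypothetical, and the regular expressions stand in for the ML-based entity recognition a production system would typically use.

```python
import re

# Hypothetical patterns for common identifiers; real systems rely on
# trained entity-recognition models rather than simple regexes.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_pii(record: dict) -> dict:
    """Mask likely personal data in each field before the record is persisted."""
    cleaned = {}
    for field, value in record.items():
        text = str(value)
        for label, pattern in PII_PATTERNS.items():
            text = pattern.sub(f"[REDACTED {label.upper()}]", text)
        cleaned[field] = text
    return cleaned

# The ingestion pipeline calls redact_pii() on every incoming record,
# so raw identifiers never reach downstream storage or analytics.
incoming = {"comment": "Reach me at jane.doe@example.com or 555-123-4567."}
print(redact_pii(incoming))
```

The point of the sketch is the placement, not the pattern matching: the protection step sits inside the collection path itself, which is what distinguishes privacy by design from securing data after the fact.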