Organisations need responsible AI

With AI pervasive across work and life, building trust in its use is a business imperative

Why ethical AI matters in business

Global advances in artificial intelligence (AI) and predictive analytics are reshaping how everyday work gets done.

Millions of Aussies have embraced AI-based applications to make everyday tasks easier, faster, and more meaningful. This is especially true for the 40% of knowledge workers who have adopted hybrid work arrangements post-pandemic.

AI expert Catriona Wallace believes the pace of change will only accelerate. “Over the next few decades AI will become the most intelligent entity on the planet,” Wallace says. “We should be excited about this possibility, but conscious of the risks. Leaders need to act now, double down on responsible and ethical AI, and get diversity into the design and build of AI tools.”

A new set of rules

With Australia’s exponential uptake in digital services, policymakers and business leaders are currently playing catch-up to build trust in how AI is governed and used. Questions of integrity and responsibility loom large. Who’s making the rules for how AI is applied, and how do we hold them to account?

People want convenience, choice, and frictionless services, but not at the expense of fairness. They don’t want biased or opaque decision-making processes that can’t be understood or questioned.

As investment in digital transformation accelerates, ethical decision-making—and the technology that underpins it—presents a new layer of organisational responsibility for leaders.

Invisible AI

The average Australian already interacts with AI around 100 times each day, according to SurvivAI. However, there’s a trust gap between consumers and business leaders. According to a recent Accenture study, 96% of Australian executives say AI is becoming pervasive. Yet only 22% of Aussies trust how companies are currently implementing AI.

In a new report commissioned by ServiceNow, Wallace argues that when it comes to AI, employees and customers overwhelmingly favour organisations that actively practise ethics, transparency, and fairness.

Wallace predicts that in the next decade, we will interact with AI in almost every activity and function we perform, hundreds of times a day, even when we sleep. AI will be everywhere, all the time, often without us knowing.

As AI becomes omnipresent, Wallace identifies three priorities that will enable executives to realise its full potential, while reducing organisational risk.

The top 3 priorities for designing ethical AI

1. Ethics and diversity must be built into AI

Designers of AI systems know what data goes in and what answers come out. However, what happens in between is often a mystery. Hidden biases in the data can deliver results that are inaccurate, unethical, and even illegal.

Many companies struggle to quantify the ROI of AI governance measures such as building fairness into systems, earning the trust of employees and customers, and ensuring regulatory compliance. Wallace warns that the risks of inaction are growing, and predicts that Australian regulators will move more aggressively against irresponsible operators.

Faced with increased stakeholder pressure, organisations must develop responsible AI strategies that reduce the chance of causing unintended harm to employees or customers. These must be clearly articulated in company policies and deployed wherever AI is being used.

The clock is ticking on transparency-as-a-choice. According to Forrester, more companies will factor ethical responsibility into employee and customer journeys this year.

“Voluntary guidelines like the Australian government’s AI Ethics Framework will soon be replaced with minimum required standards, and responsible use of AI will be required by law,” Wallace predicts. When this happens, responsible AI will join other risk and compliance topics as a board-level imperative.

2. Governance is key

Credential-based digital identity will be the catalyst for the open ecosystems, marketplaces, and platforms that will make up the next generation of digital citizen experiences, according to Victor Dominello MP, the New South Wales Minister for Customer Service and Digital Government.

Speaking at ServiceNow’s recent Knowledge 2022 conference in Sydney, Minister Dominello stressed the urgency for citizen services to catch up with the consumer-grade experiences we interact with daily. “The biggest productivity play we have as a nation is getting digital identity sorted out for Australians. We’re still mucking around with paper and plastic cards. To do that, trust in how data is secured and managed is critical.”

Australia’s public healthcare system is a case in point. For example, New South Wales Health is currently digitising the patient referral process. “Typically, patients will see their GP, who will then refer a patient to a specialist,” explains NSW Health CIO Zoran Bolevich. “These processes are still somewhat paper-based, and we’re trying to digitise them and turn these referrals into a digital workflow and have that managed in a safe, fast, more effective way.”

3. Meet people where they are

Australians generally want speed, transparency, and a personalised approach when resolving customer service issues, according to a 2021 ServiceNow survey. However, Wallace’s analysis finds two new mentalities are emerging. ‘Digital Experiencers’ will embrace technology with few limits. ‘Organic Experiencers’—roughly 25% of the population—will demand more choice in how they interact with brands and employers. This group will reject digital-only models, preferring to pick and choose between touchpoints based on the task at hand.

This divide means business and government will need to design products and services that cater to both groups.

Energy Queensland is a power utility that manages 247,000km of electricity network serving more than two million customers. This requires a seamless flow of real-time information across the organisation. Disconnected, home-grown systems meant employees were wasting valuable time waiting for decisions, actions, and responses, causing bottlenecks and eroding trust.

When the highly regulated utility created a digital strategy, the mission was clear: “We wanted to make it simple for our people, 70% of whom are field workers. Whether they were in the office lodging a request with HR or out on site placing an order with the procurement team, we needed a consistent experience,” says Kirby Lanagan, manager, Enterprise Service Management.

But that didn’t mean forcing every employee into digital channels. “We made sure people could still call into the helpdesk and waited for them to adapt to the new app in their own time,” Lanagan says.

As a result, employees have gained back critical time to focus on higher value tasks, and can rapidly access data to meet regulatory requirements more efficiently.

Responsible AI becomes business strategy

Employees and customers will increasingly decide which brands they engage with based on responsibility standards set out by leaders. By focusing on the ethical design and delivery of products and services today, organisations can become more agile and adaptive while staying ahead of the regulatory curve.

Forward-thinking firms will invest in responsible AI, in fair business practices that meet stakeholder expectations, and in systems that empower stakeholders with greater choice in how and when they interact with brands.