How to avoid the ethical pitfalls of artificial intelligence and machine learning

Published 4 June 2021

The modern business world is littered with examples of organisations that hastily rolled out AI, ML and other automated decision-making solutions without due consideration of the fairness, accountability and transparency of applications and outcomes. Even where not illegal, failure to consider social responsibility and the sensitivities and concerns of many citizens has led to very costly and painful lessons and abandoned projects.

Internationally, for example, IBM is being sued for allegedly misappropriating data from an app, while Goldman Sachs is under investigation for using an allegedly discriminatory AI algorithm. A homegrown example was the Robodebt debacle, in which the Australian Federal Government employed an ill-thought-through algorithm to try to recover social security overpayments dating back to 2010. The government settled a class action against it late last year at a reported eye-watering cost of $1.2 billion after the algorithm inadvertently targeted many legitimate social security recipients.

“Robodebt, as implemented, was clearly illegal,” says Peter Leonard, Professor of Practice in the School of Information Systems & Technology Management and the School of Management and Governance at UNSW Business School, and chair of the Australian Computer Society’s AI Ethics Committee. Government decision-makers were required by law to exercise discretion, taking into account all relevant available facts, and Prof. Leonard says the blanket application of the algorithm to fact scenarios where it generated incorrect inferences meant they weren’t properly exercising that discretion as administrative decision-makers.

When things go wrong with data, algorithms and inferences, they usually go wrong at scale.

“The algorithm was only part of the problem: the bigger part of the problem was that human decision-makers did not properly apprise themselves of its limitations, and placed excessive reliance upon machine outputs that were reliable in many contexts, and quite wrong in others,” he says.

Robodebt is an important example of what can go wrong with systems that have both humans and machines in a decision-making chain, Prof. Leonard explains. “We are spending a lot of time thinking about how, when and why data and algorithms may produce outputs that are not fair, accountable and transparent. This is important work, but we are not investing enough time in thinking about assurance and governance to ensure reliable decision-making by humans who depend upon data and algorithms.

“We need to be especially careful to ensure that automation applications that generate inferences used to determine outcomes significantly affecting humans or the environment are only used in contexts where those inferences are fair, accountable and transparent. Too often we discover that the system works fine in the middle of the bell curve, but fails where it can cause the most harm: to vulnerable individuals and minority groups.”

Good assurance and governance can ensure this does not happen, Prof. Leonard says, but when things go wrong with data, algorithms and inferences, they usually go wrong at scale. “When things go wrong at scale, you don’t need each payout to be much for it to be a very large amount when added together across the cohort, as with Robodebt,” he says.

Balancing compliance with innovation

Technological developments and analytics capabilities will usually outpace laws and regulatory policy, audit processes and oversight frameworks within organisations, as well as organisational policies around ethics and social responsibility. Systems often fail when risk is evaluated and managed within components, without due consideration of how the components come together and operate to produce outcomes in a variety of fact scenarios. New integrations of humans, data and algorithms, and increased reliance upon inferences, create unfamiliar risks of harm to affected individuals, Prof. Leonard says.

Because those risks are unfamiliar and often arise from (or are amplified by) human-machine interactions, Prof. Leonard says established governance and assurance frameworks for organisational decision-making often fail to identify risks, or misdiagnose them, leading to defective risk mitigation. Furthermore, there is often confusion within organisations as to which of the data scientists, technology specialists or other business executives are responsible for identifying, mitigating and managing which risks.

As such, new frameworks for assurance and governance of AI, ML, data and algorithms need to be articulated and aligned with established governance within organisations, Prof. Leonard says. “And this all needs to happen quickly, because otherwise organisations will move fast and break things, and the things they break will include vulnerable people and the environment. But innovation should not be unduly impeded; assurance systems should accommodate agile start-ups, and this field should not be allowed to become the sole province of large risk assurance consultancies assisting large organisations.”

Why translational work is required

There is major “translational” work to be done in aligning emerging methodologies for AI and ML assurance with risk management processes that are appropriate for a range of organisations and their diverse capabilities. This translational work must be multidisciplinary and agile. “There’s still a very large gap between government policymakers, regulators, business, and academia. And I don’t think there are many people today bridging that gap,” Prof. Leonard observes.

“It requires translational work, with translation between those different spheres of activities and ways of thinking. Academics, for example, need to think outside their particular discipline, department or school. And they have to think about how businesses and other organisations actually make decisions, in order to adapt their view of what needs to be done to suit the dynamic and unpredictable nature of business activity nowadays. So it isn’t easy, but it never was.”

Prof. Leonard says many organisations are “feeling their way to better behaviour in this space”. He thinks many organisations now care about the consequences of poor automation-assisted decision-making, but they don’t yet have the capabilities to anticipate when outcomes will be unfair or inappropriate, or the feedback and assessment mechanisms to promptly identify and remedy systems creating such outcomes. “Many executives I speak to will happily promote high-level statements of social responsibility or ethics – call them what you will – but then fail in implementation of those principles across an end-to-end decision system,” he says.

Data privacy assurance serves as an example of what should be done in this space. “Organisations have become quite good at working out how to evaluate whether a particular form of corporate behaviour is appropriately protective of the data privacy rights of individuals. Privacy impact assessments are conducted by privacy officers, lawyers and other professionals who are trained to understand whether or not a particular practice in the collection and handling of personal information about individuals may cause harm to those individuals,” says Prof. Leonard.

“This provides an example of how a pretty amorphous concept – a privacy right – can be protected through use of a problem-specific risk assessment process that leads to concrete privacy risk mitigation recommendations that an organisation should implement.”

Bridging functional gaps in organisations

It is important to reduce disconnects between the key functional stakeholders who need to be involved in assuring fair, accountable and transparent end-to-end decision-making. These disconnects appear across many industry sectors. One example is digital advertising services. Chief marketing officers, for example, determine marketing strategies that depend upon the use of advertising technology, which is, in turn, managed by a technology team. Separate to this is data privacy, which is managed by a different team. Prof. Leonard says these teams do not speak the same language as each other, which makes it difficult to arrive at strategically cohesive decisions.

Some organisations are addressing this issue by creating new roles, such as a chief data officer or customer experience officer, who are responsible for bridging these functional disconnects. Such individuals will often have a background in or experience with technology, data science and marketing, in addition to a broader understanding of the business than is often the case with the CIO.

“We’re at a transitional point in time where the traditional view of IT and information systems management doesn’t work anymore, because many of the issues arise out of analysis and uses of data,” says Prof. Leonard. “And those uses involve the making of decisions by people outside the technology team, many of whom don’t understand the limitations of the technology and the data.”

Why government regulators require teeth

Prof. Leonard was recently appointed to NSW’s inaugural AI Government Committee – the first of its kind for any federal, state or territory government in Australia – to deliver on key commitments in the state’s AI strategy. A key focus for the committee is ethics in AI. Prof. Leonard is critical of governments that publish aspirational statements and guidance on the ethical principles of AI, but fail to go further.

He gives the example of Minister for Industry, Science and Technology Karen Andrews’ announcement on ethics principles for artificial intelligence. “This statement was published more than 18 months ago,” he says. “I look at that, and say, ‘What good is that?’ It’s like the Ten Commandments, right? Yes, they’re a great thing. But are people actually going to follow them? And what are we going to do if they don’t?”

Prof. Leonard believes it’s not worth publishing statements of principles unless they go down to the harder level of creating processes and methodologies for assurance and governance of automation applications that include ‘true’ AI and ML, and ensuring that incentives within organisations are aligned with good practice. Some regulation will be needed to build the right incentives, but he says organisations need to first know how to assure good outcomes, before they are legally sanctioned for bad outcomes.

Organisations need to be empowered to think their way through issues, and Prof. Leonard says there needs to be adequate transparency in the system – and government policy and regulators should not lag too far behind. A combination of these elements will help reduce the reliance on internal ethics alone within organisations, as they are provided with a strong framework for sound decision-making. “And then you come behind with a big stick if they’re not using the tools or they’re not using the tools properly. Carrots alone and sticks alone never work. You need a combination of the two,” says Prof. Leonard.

Risk management, checks and balances

A good example of the need for this can be seen in the Royal Commission into Misconduct in the Banking, Superannuation and Financial Services Industry. It noted that the key individuals who assess and make recommendations in relation to prudential risk within banks are relatively powerless compared to those who control profit centres. “So, almost by definition, if you regard ethics and the policing of ethics as a cost within an organisation, and not an integral part of the making of profits by an organisation, you will end up with bad results because you don’t value highly enough the management of prudential, ethical or corporate social responsibility risks,” says Prof. Leonard. “You name me a sector, and I’ll give you an example of it.”

While he notes larger organisations will often “fumble their way through to a reasonably good decision”, another key risk exists among smaller organisations. “They don’t have processes around checks and balances and haven’t thought about corporate social responsibility yet, because they’re not required to,” says Prof. Leonard. Small organisations often work on the mantra of “moving fast and breaking things” and this approach can have a “very big impact within a very short period of time” thanks to the potentially rapid growth rate of businesses in a digital economy.

“They’re the really dangerous ones, generally. This means the tools that you deliver have to be sufficiently simple and straightforward that they are readily applied, in such a way that an agile ‘move fast and break things’-type business will actually apply them and give effect to them, before they break things that really can cause harm,” he says.

This article is republished with permission from UNSW BusinessThink, the knowledge platform of UNSW Business School.

Authors

Peter Leonard

Professor of Practice, School of Information Systems & Technology Management and the School of Management and Governance, UNSW

Peter Leonard is a data and technology business consultant and lawyer. He is principal of Data Synergies, a business and legal consultancy, Professor of Practice at UNSW Business School, and consultant to Gilbert + Tobin Lawyers. He focuses on business transactions with significant data, regulatory and cross-border complexities, often working as counsel assisting other law firms, consultancies and in-house counsel to plan, structure and manage complex deals and projects.
