AI's built-in bias has ethical and political implications


Algorithms are not moral; they're instruments that tie inputs to outcomes in ways defined by us, or by their own data-based learnings. What do they mean for brands?

The various industrial and technological revolutions that have bounced us along over the years have all driven exponential economic growth – a power that emanates from the crucible in which scientific insight, engineering expertise and the visions of great creative minds combine and fizz together.

But there’s always a hangover from rapid progress. Always.

Whether we’re talking about rural communities being decimated by farm automation or the shift to driverless cars killing the livelihoods of a few million cab drivers, progress, driven by technology, has a shadowy underbelly.

It’s tough in our business to discern the difference between the novel and the profound. There’s always something new to prompt a sale, but sometimes the really profound shifts are missed. The wood versus the trees. Is machine learning just a fad, or a full-blown revolution with the magnitude to match social media or mobile?

If we look at AI’s strengths, we see the same hallmarks as previous seismic shifts: it makes really hard things easy, and delivers large gains in quality of experience for customers and economic efficiency for business. Just like the industrial revolution and the technological one.

Turning point

Up until this point, whether we’re talking about steam engines or drones, the underlying concept of technology has remained consistent: machines are extensions of human will. They are the executors of the things our bodies are too weak, small, clumsy, impatient or stupid to perform directly. The car, washing machine, plane, combine harvester, tank – all follow this idea.

But now we have a new concept. 

This new concept accepts a truth that has emerged with the growing sophistication of technology right across the board: our ability as humans to process complex decisions is now a huge limiting factor in our collective progress toward an even more frictionless system that serves the needs of consumers, society and the economy.

Whether it’s about the economics of replacing costly humans in Uber’s many Prius battalions or serving up the best possible recommendation to each viewer from Netflix’s vast catalogue, the huge wheels of progress are turning faster and faster and they spin only in one direction. 

"The word algorithm rarely comes up, but didn’t algorithms recently elect the US president?"

The benefits are huge. Whether algorithms are spotting fraudulent patterns in bank-card use, matching facial recognition to purchase preferences, or helping insurance firms optimise risk assessments through complex relational data, the aggregate effect on efficiency and, ultimately, profit is magnificent – and the bigger the enterprise, the greater the effect.

It’s easy to see this as merely an incremental step in our collective evolution: a logical progression from "labour-saving" automation, powered by the limitless scalability of processing machines that manifest in software.

But there is a huge difference between the drone on the end of a remote control and one with autonomy. There’s a huge difference between a car with a human driver behind the wheel and one driven by an algorithm, especially when it has to decide who dies in a no-win collision. 

We’ve all heard the AI dilemma: "Who should the car kill, the youth in the road or yourself, the driver, if given no other choice?" It gets to the root of the problem: decisions derived from fallible, instinct-led human beings versus the playbook of algorithmic determination.

If the algorithm states "Kill the driver instead of the youth", every such scenario will end with a dead driver. The algorithm therefore can’t argue from the same basis as a human driver, whose split-second choice flows from a unique background and the particulars of the situation. The machine doesn’t have that excuse, because the outcome was pre-written into the system by a human, or perhaps learned from existing data.

This scenario demonstrates why AI is not just an ethical matter, but a political one. Will governments insist that AI algorithms always kill the driver in this scenario? If so, which way will you vote? And when drivers are killed one after another, will it be the prime minister’s fault? And will you then move to France because its political leader insists it’s the youth who should always take the hit?

Algorithms are not moral. They are instruments that tie inputs to outcomes in ways defined by humans, or by their own learnings from data. And now, thanks to processing power and the falling cost of technology, AI is rapidly becoming commonplace. In fact, we’ve all almost certainly experienced AI in some form or other today, whether through a personalised search, feed filtering, our Spotify Discover Weekly playlist or looking for an insurance quote. In most cases, however, we were probably blissfully unaware – in keeping with the theme of seamlessness and convenience that we recognise from previous revolutions.

Here’s the rub: yes, this is just another step in a long legacy of increases in speed, efficiency and ease, but something altogether different is now happening, hidden behind the screen. 

This convenience is now driven by connecting lots and lots of dots and making lots and lots of assumptions – and, yes, making some quite big decisions without any human intervention beyond setting up and writing the rules. These are decisions that could make dramatic differences to what you pay for goods and services, to the offers you are invited to participate in and to the experiences you may or may not be selected to witness.

And while the word algorithm rarely comes up at dinner parties, its effects are ever more profound. Didn’t algorithms recently elect the US president? Could a new phenomenon have a bigger consequence? It took Facebook chief executive Mark Zuckerberg a while to take responsibility for the proliferation of fake news on the site, but the smokescreen didn’t last: he changed his tune and accepted some responsibility.

And rightly so. Because these computational rules are created by humans and are products of the human intellect, they carry natural biases. In the pursuit of highly relevant content, Facebook has inadvertently created the ultimate self-propagating propaganda machine. The individual pieces of content might not be Facebook’s responsibility, but the aggregate effect is impossible to hold at a comfortable distance, because it is the product of the system’s design.

"Who should the car kill, the youth in the road or yourself, the driver, if given no other choice?"

Algorithms are used to spot people shopping around for insurance quotes and to signal to providers to increase prices. The same insurers, either knowingly or naïvely, spot and then systematise gender and race biases, which are then reflected in those same quotes.

You see, the problem with machine learning is that it needs a frame to work within. Just pointing it at data doesn’t provide any ethical basis – merely a pattern that represents the optimal potential of the way things already happen, complete with the problems, biases and falsehoods inherent in that system. So while machines might learn more efficient ways to correlate data and then drive decisions about how those insights are turned into actions (such as insurance quotes), there is a huge related danger that these biases are then amplified and reinforced.
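To see how that amplification happens in practice, here is a minimal, self-contained Python sketch. The scenario, group labels, numbers and pricing rule are all invented for illustration: a pricing "model" learns from historical quotes in which one group was charged more for reasons unrelated to risk, and it faithfully reproduces that markup.

```python
# A minimal, illustrative sketch: a pricing "model" learned from biased
# historical data reproduces the bias. All names, numbers and the pricing
# rule below are invented for this example.
import random

random.seed(42)

# Hypothetical historical insurance quotes: group "A" was historically
# charged a flat markup for reasons unrelated to actual risk.
history = []
for _ in range(10_000):
    group = random.choice(["A", "B"])
    risk = random.random()  # the "true" risk signal, 0..1
    quote = 100 + 200 * risk + (50 if group == "A" else 0)  # biased past pricing
    history.append((group, risk, quote))

def learned_price(group, low, high):
    """The simplest possible 'model': the average historical quote
    for this group within a risk band."""
    quotes = [q for g, r, q in history if g == group and low <= r < high]
    return sum(quotes) / len(quotes)

# Two customers with identical risk get different learned prices,
# because the model faithfully encodes the historical markup:
print("Group A, mid risk:", round(learned_price("A", 0.45, 0.55)))  # ~250
print("Group B, mid risk:", round(learned_price("B", 0.45, 0.55)))  # ~200
```

Nothing in the code is malicious; the bias lives entirely in the historical data it was pointed at, which is precisely the danger.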

As Microsoft Research principal researcher Danah Boyd wrote in a blog post (bit.ly/autobias) on "guilt through algorithmic association": "You’re a 16-year-old Muslim kid in America. Say your name is Mohammad Abdullah. Your schoolmates are convinced that you’re a terrorist. They keep typing in Google queries like ‘is Mohammad Abdullah a terrorist?’ and ‘Mohammad Abdullah al Qaeda’. Google’s search engine learns. All of a sudden, auto-complete starts suggesting terms like ‘Al Qaeda’ as the next term in relation to your name. You know that colleges are looking up your name and you’re afraid of the impression that they might get based on that auto-complete. You are already getting hostile comments in your hometown, a decidedly anti-Muslim environment. You know that you have nothing to do with Al Qaeda, but Google gives the impression that you do. And people are drawing that conclusion. You write to Google but nothing comes of it. What do you do?"
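Boyd’s scenario can be reproduced in miniature. The sketch below is a deliberately naive autocomplete (the query log is invented, and real search engines are vastly more sophisticated) that ranks suggestions purely by how often a completion has followed a prefix in past queries: enough to show how a hostile pattern of searches poisons the suggestions attached to an innocent name.

```python
# A toy version of the feedback loop Boyd describes: an autocomplete that
# suggests whatever has most often followed a prefix in past queries.
# The query log is invented; real search engines are vastly more complex.
from collections import Counter

query_log = [
    "mohammad abdullah al qaeda",
    "mohammad abdullah al qaeda",
    "mohammad abdullah terrorist",
    "mohammad abdullah school",
]

def suggest(prefix, log, k=2):
    """Rank completions by how often they have followed the prefix."""
    completions = Counter()
    for q in log:
        if q.startswith(prefix + " "):
            completions[q[len(prefix):].strip()] += 1
    return [c for c, _ in completions.most_common(k)]

print(suggest("mohammad abdullah", query_log))
# ['al qaeda', 'terrorist'] -- the bias lives in the log, not the code
```

Again, the code contains no judgment about anyone; the damage comes entirely from what the crowd chose to type.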

Approach with caution

For brands and businesses, this presents a dilemma: how to harness the power of AI without getting sucked into a damaging ethical crisis or a political shitstorm.

The answer, of course, is to approach with caution and not to overestimate the degree to which machine learning can supply conceptual frameworks of meaning, values and ethics, as opposed to merely surfacing the structures and connections that already exist within the raw data.

To embrace the power of AI, we have to cede many decisions to the system in order to reap the rewards. But we need to discern which decisions must stay with us: the squashy, fallible, human ones. And while we work in an industry whose raison d’être is to influence customers, there’s a fine line between that influence and the more dubious forces of manipulation and exploitation that some of this new technology and methodology affords. We need to check our moral compasses more than ever while we enjoy the thrills of this voyage of discovery.

Nicolas Roope is co-founder and creative partner at Poke.