Open your eyes – your AI is biased

Computations have no ethical subroutine, and understanding bias in AI is an important eye-opener. Building an AI-facilitated future without properly understanding the algorithms behind the conclusions and actions is leading us into unexpected pitfalls.

We are all very excited about machine learning and AI. We see them as the ultimate way of automating daily life, from driverless cars to personal health and medical diagnostics. But garbage in = garbage out. And to eliminate the garbage, we need to be able to identify it – long after our little helper has started working.

The main reason we need to watch out is that AI algorithms are not necessarily traceable or trackable. Not even the programmers understand them fully once the machine starts accumulating and filtering data. Despite its ability to learn, an AI can only draw conclusions based on the original assumptions built into its underlying algorithms.


Looking for the ultimate answer

Whenever we rely on algorithms to make decisions – or at least recommendations – it is because we seek a simple answer to a complex question.

If the collection of big data for data-driven decision making is used to create simple answers to complex questions, the complexity is resolved through algorithms that in effect filter and collate based on what the human programmer considered applicable. And it concerns us more than you would expect. I recently read that AI is being used within the US judicial system: judges rely on an AI's suggestion on whether an inmate should be granted parole, based on assumptions about said individual's future behavior after release. In isolation this would seem like a statistically viable method, as there are vast amounts of available data to substantiate the conclusion.

But if the original algorithms input by a human were in fact influenced by bias such as race, name, gender, or age, are the conclusions any better than the answer 42?

When Douglas Adams, in his science-fiction series The Hitchhiker's Guide to the Galaxy, introduced Deep Thought, the biggest computer ever built in the history of men and mice, the builders asked for the answer – and added that they wanted it to be nice and simple. So after millions of years, Deep Thought concluded that the answer to life, the universe and everything was 42. But by then, this insight was useless, because nobody really understood the question.

If we see AI and machine learning as the ultimate answer to complex scenarios, then we must be able to go back to the original question in order to be able to process the answer. Not just to understand but to analyse and apply what the computer is missing – the ethical subroutine.

What will the AI choose in a no-win scenario?

One of the hot topics in the current discussion around self-driving cars is whether the AI would make proper ethical decisions in a no-win scenario. Should it risk the life of the passenger by veering off the street and over a cliff to avoid running over another individual in the street? The decision would be based entirely on the original algorithms, which over time have become inscrutable even to the engineers themselves.

Of course, this is a simplified example. An AI, as opposed to a human behind the wheel, would be able to process more details regarding the potential outcome of either option. What would the statistical probability of successfully avoiding the person on the street be, taking into account elements such as speed, the space available without going over the cliff, the chance of the person recognizing the danger and moving out of the way in time before the collision, and so on?
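As a toy illustration of the kind of probability-weighted arithmetic such a system might perform – every probability and harm weight below is invented for this example, not taken from any real system – a minimal sketch could look like this:

```python
# Toy expected-harm comparison for the no-win scenario described above.
# All probabilities and harm weights are invented for illustration only.

def expected_harm(p_bad: float, harm_bad: float, harm_ok: float) -> float:
    """Probability-weighted harm of one option."""
    return p_bad * harm_bad + (1 - p_bad) * harm_ok

# Option A: brake but stay on the road; the pedestrian may move away in time.
harm_stay = expected_harm(p_bad=0.7, harm_bad=1.0, harm_ok=0.0)

# Option B: swerve toward the cliff; the car may stop before going over.
harm_swerve = expected_harm(p_bad=0.9, harm_bad=1.0, harm_ok=0.1)

# The "rational" minimum-harm choice depends entirely on the weights:
# change harm_bad for either party and the answer flips.
print("stay:", harm_stay, "swerve:", harm_swerve)
```

The point is not the arithmetic but who chooses the numbers: the harm weights encode a human value judgment, which is precisely where bias enters.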


(Image from Nvidia Marketing Material)

But the self-preservation instinct of a human being behind the wheel would most likely lead to the obvious conclusion: hitting the person is preferable to dying by plunging over the cliff! Would the original programmer not have input exactly this type of bias?

What I believe Douglas Adams was getting at with the magic number 42 is that there is no simple answer to complex questions. If, as indicated above, the AI is a victim of its own programming when making complex decisions or recommendations, then we must make this tool as transparent – and thereby as manageable – as any tool developed by humans since the invention of the wheel.

MIT Technology Review addressed this in detail in “The Dark Secret at the Heart of AI”, an article published by Will Knight in April 2017:

“No one really knows how the most advanced algorithms do what they do. That could be a problem.”

He goes on to explain that while mathematical models are being used to make life-changing decisions – such as who gets parole, who gets a bank loan, or who gets hired for a job – it remains possible to understand their reasoning. But when it comes to what Knight calls deep learning, the complexity increases and the continuously evolving program eventually becomes impossible to backtrack, even for the engineer who built it.

Despite the inscrutable nature of the mechanisms that lead to the decisions made by the AI, we are all too happy to plunge in with our eyes closed.

Later the same year, another MIT Technology Review article, “Inspecting Algorithms for Bias”, explored the results of a study of the algorithms behind COMPAS, a risk-assessment software used to forecast which criminals are most likely to reoffend.

Without going into detail – I highly recommend you read the article – the conclusion was that there was a clear bias against blacks. The predictions turned out to rest on incorrect assumptions: blacks were expected to reoffend more frequently, but in reality did not. And vice versa for the white released prisoners.

The author of the article, German journalist Matthias Spielkamp, is one of the founders of the non-profit AlgorithmWatch, which has taken up the mission of watching and explaining the effects of algorithmic decision-making processes on human behaviour and pointing out ethical conflicts.


Matthias Spielkamp, Founder of AlgorithmWatch

The proverbial tip of the iceberg

Even strong advocates of applying artificial intelligence/cognitive intelligence and machine learning (deep learning) to everyday applications, such as IBM with its Watson project, are aware of this threat and use strong words such as “mitigation” to explain how this potential outcome of widespread use of the technology can be better handled.

In a recent article published in February 2018, entitled “Mitigating Bias in AI Models”, Ruchir Puri, Chief Architect and IBM Fellow, IBM Watson and Cloud Platform, stresses that “AI systems are only as effective as the data they are trained on. Bad training data can lead to higher error rates and biased decision making, even when the underlying model is sound… Continually striving to identify and mitigate bias is absolutely essential to building trust and ensuring that these transformative technologies will have a net positive impact on society.”
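A minimal sketch shows how this happens in practice. The data here is fabricated, and the “model” is just a majority-label lookup – nothing like COMPAS or any real system – but it makes the mechanism visible: a model trained on biased labels faithfully reproduces that bias.

```python
from collections import Counter, defaultdict

# Fabricated historical data: (group, reoffended_label). The labels are
# skewed against group "b" - the bias lives in the data, not in the code.
training_data = [
    ("a", 0), ("a", 0), ("a", 0), ("a", 1),
    ("b", 1), ("b", 1), ("b", 1), ("b", 0),
]

# "Training": record the labels seen for each group.
labels_by_group = defaultdict(list)
for group, label in training_data:
    labels_by_group[group].append(label)

# "Model": predict the majority label observed for each group.
model = {g: Counter(ls).most_common(1)[0][0] for g, ls in labels_by_group.items()}

# The model fits its training data perfectly - "the underlying model is
# sound" - yet it predicts reoffending purely from group membership.
print(model)  # {'a': 0, 'b': 1}
```

Real systems are vastly more complex, but the mechanism is the same: the model has no way of knowing that its training labels encode human bias.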

IBM is undertaking a long list of measures to minimize bias, but this only addresses the tip of the iceberg. The real challenge is that we are increasingly dehumanizing complex decisions by relying on algorithms that are too clever for their own good.

Actually – all of this isn’t exactly news.

More than 20 years ago, human bias was already identified as an important aspect of computer programming.

As early as 1996, Batya Friedman and Helen Nissenbaum developed a typology of bias in computer systems that described the various ways human bias can be built into machine processes: “Bias can enter a [computer] system either through the explicit and conscious efforts of individuals or institutions, or implicitly and unconsciously, even in spite of the best of intentions.” (Source: “Ethics and Algorithmic Processes for Decision Making and Decision Support”)

Creating your own pathways through the cloud

Companies like Microsoft have many types of customers, but by embracing the cloud they have multiplied their impact on IT's everyday dilemma – the rogue customer.

Meet the customer where the customer is – a truism pervasive in sales and marketing speak over the past few years – is now also the overall motto where IT meets business.

James Staten, Chief Strategist, Cloud and Enterprise at Microsoft, spent a few days in Stockholm at IP Expo Nordic and a few minutes with me on the balcony overlooking the trade show floor. Just off the stage after speaking about the end of the IaaS era, we looked at the specifics behind his statement:

“Hybrid Cloud is the future and Microsoft will continue to invest in the dynamic interchange and complexity of public cloud and on premise computing.”

The Microsoft Cloud offers customers a global infrastructure with 30 available, and 36 announced, datacenter regions. Microsoft CEO Satya Nadella confirmed this commitment on Oct 3, 2016 by adding several European countries to the list of countries hosting or acting as hubs for its datacenters, and by introducing a novel concept where access to customer data is controlled locally through a trustee – T-Systems International in Germany. This addresses the continued resistance – particularly fierce in Germany – to placing and handling data outside one's own jurisdiction.

The dilemma of empowerment and control

In 2010 we could still put everything into boxes and linear progression charts.


This linear, layered view of computing versus cloud, as illustrated by industry expert R Wang in 2010, was a nice picture of where -as-a-Service offerings had disrupted traditional IT – but it no longer applies: IT is being disrupted by the citizen-users themselves.

“Just about 15% of the world's developers have the highest level of skills required to build advanced and full-scale deployments on IaaS (Infrastructure-as-a-Service), but there are 10 times as many developers who have excellent basic coding skills in various languages who are creating business value for the enterprise,” James Staten explains. “If you then consider that there are 80 times as many people in the world who can code and deploy to a selected cloud platform, there is a nightmare scenario out there from the IT operations perspective which can inhibit innovation and growth.”

James Staten is a visionary. As a former Gartner and Forrester analyst and ex-CMO, he is an expert at connecting the dots and creating a cohesive narrative.

To understand why Microsoft believes in the hybrid cloud and is leaving behind the focus on -aaS combinations, you need to understand who your customers are and under which assumptions they operate.

 

Historically, IT called the shots on how business needs were met in the enterprise. Even structure lovers like myself can see that the architecture of today's large enterprises mostly resembles a maze. But with today's tools at their fingertips, customers want to do their own thing – and the challenge for IT management is to keep everything safe and secure despite everyone going rogue on them.

When basically everyone can code or can learn to – or at least subscribe to cloud-based business process applications they can deploy themselves – the infrastructure has been disrupted by user behaviour. Just like a path created by people simply trying to find an easier way.

James Staten feels that if you support developers by providing them with the tools they feel comfortable with as they navigate safely in the cloud, you are also helping IT stay in control of its infrastructure and protect its investment in existing platforms and processes. This is where, among others, the Microsoft Azure Security Center wants to help IT managers sleep at night.

“If we want to achieve true developer empowerment in this next generation of cloud, we have to encourage more coders to be productive with their existing skills. We can do this by letting them program with the languages they want to use — and are most appropriate to the type of app they are building — giving them reliable and consistent access to as broad a set of services as possible, and doing this in such a way that leverages open source and open standards. You want their processes to be painless and intuitive to encourage productivity and be applicable across the needs and services that your business operates and leverage where your customers are when they want you there.” (Source: Geek.ly, “Cloud Empowerment should not stop at highly skilled developers” by James Staten)

Star struck

When you meet people like that, who have visions that reach beyond and above, you should always remember that they are people who want to make the world a better place – in this case, James Staten even held my phone when we took the traditional SpeakerSelfie – and I am still slightly shaken by the encounter.

Hope to meet again soon at another conference somewhere in the universe to continue our conversation.


Oh, and if you would like to see what a rogue customer can look like, here’s one. (Photo courtesy of Miroslav Trzil)

Disclaimer: Image has no connection to the interview topic or person interviewed.


Business management in the cloud – this is how we think (post originally in Swedish)

How open, flexible, and usable is your platform? How often do you need to upgrade your business systems, and when it happens: is it synchronized so that everyone is up and running on the same version at the same time? Do you discuss cloud technology over coffee?

Are your solutions developed for mobile devices, or do you have to rotate the tablet to see the whole interface, because it was built for a computer screen?

Can you update your customer meetings, approve invoices, and build workflows while running across the street in the middle of Stockholm traffic? On your mobile?

That is how salesforce.com thinks when developing solutions for business management in the cloud. Cloud computing made it possible, but we are the ones who make it happen. You and me.

If you really want to understand how all of this plays together – how the technology is used for marketing with smart tools built in and for social, as a sales tool, and as connected customer service – then salesforce.com will actually offer free inspiration. On October 15 at the Grand Hotel in Stockholm, the Customer Company Tour Nordics takes place – a free full-day conference (in English) with around 30 exhibitors, a keynote, and afternoon sessions where customers talk about how they use the technology, how they create new tools or build their own apps on the force.com platform, and simply have more fun.


The keynote speakers are Erik Hallberg, CEO of TeliaSonera International Carrier, which builds its IT architecture on the force.com platform and uses Salesforce both as an internal collaboration tool and in sales, marketing, and customer service. And Line Dahle from the Norwegian marine insurance company Gard AS – a 120-year-old traditional company that now works in the cloud with communication, internal collaboration, case management, and customer service. In that industry especially, you must be able to respond quickly when something goes wrong on a ship somewhere out in the world.

There will also be demo stations, so you can try things yourself or have one of salesforce.com's own developers show you. And sessions covering everything from how President Obama won the election with the help of Salesforce and social media, to how large companies move their entire operations to a mobile, open, social platform to meet the customer where the customer is.

You can register here – it's free.