
AI bias is an ongoing problem, but there’s hope for a minimally biased future

Removing bias from AI is nearly impossible, but one expert sees a future with potentially bias-free decisions made by machines.


TechRepublic’s Karen Roby spoke with Mohan Mahadevan, VP of research for Onfido, an ID and verification software company, about bias in artificial intelligence. The following is an edited transcript of their conversation.

Karen Roby: We talk a lot about AI and the misconceptions involved here. What is the biggest misconception? Do you think it’s that people just think that it should be perfect, all of the time?


Mohan Mahadevan: Yeah, certainly. I think whenever we try to replace any human activity with machines, the expectation is that the machine will be perfect. And we tend to focus on finding every little nitpicky problem that the machine may have.

Karen Roby: All right, Mohan. And if you could just break down for us, why does bias exist in AI?

Mohan Mahadevan: AI is driven primarily by data. AI refers to the process by which machines learn how to do certain things, driven by data. Whenever you do that, you have a particular dataset. And any dataset, by definition, is biased, because there is no such thing as a complete dataset, right? And so you’re seeing a part of the world, and from that part of the world, you’re trying to understand what the whole is like. And you’re trying to model behavior on the whole. Whenever you try to do that, it is a difficult job. And in order to do that difficult job, you have to delve into the details of all the aspects, so that you can try to reconstruct the whole as best as you can.
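Mahadevan's point that any dataset is an incomplete, and therefore biased, view of the whole can be sketched in a few lines. This is a toy illustration of sampling bias (the group names, weights, and numbers are invented for illustration, not Onfido's data or method): the true population is split 50/50, but the collection process over-samples one group, so the dataset's picture of "the whole" is skewed.

```python
import random

random.seed(0)  # fixed seed so the toy example is reproducible

# The true "whole": a population split evenly between two groups.
population = ["group_a"] * 500 + ["group_b"] * 500

def sample_one_region(pop, bias_toward="group_a", n=100):
    """Draw a dataset that over-represents one group, as real-world
    collection processes often do (hypothetical 3:1 sampling weight)."""
    weights = [3 if p == bias_toward else 1 for p in pop]
    return random.choices(pop, weights=weights, k=n)

dataset = sample_one_region(population)
share_a = dataset.count("group_a") / len(dataset)

# The dataset's view of the world no longer matches the true 50/50 split.
print(f"true share of group_a: 0.50, dataset's share: {share_a:.2f}")
```

A model trained on `dataset` would "reconstruct the whole" from this skewed part, which is exactly the bias-by-construction problem described above.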

Karen Roby: Mohan, you’ve been studying and researching AI for many years now. Talk a little bit about your role, there at Onfido, and what your job entails.

Mohan Mahadevan: Onfido is a company that takes a new approach to digital identity verification. So what we do is we connect the physical identity to a digital identity, thereby enabling you to prove who you are, to any service or product that you wish to access. It could be opening a bank account, or it could be renting a car, or opening an account and buying cryptocurrency, in these days. What I do, particularly, is that I run the computer vision and the AI algorithms that power this digital identity verification.


Karen Roby: When we talk about fixing the problem, Mohan, “how” is a very complex issue when we talk about bias. How do we fix it? What type of intervention is needed at different levels?

Mohan Mahadevan: I’ll refer back to my earlier point, just for a minute. What we covered there was that any dataset by itself is incomplete, which means it’s biased in some form. Then, when we build algorithms, we exacerbate that problem by adding more bias into the situation. Those are the first two things we need to pay close attention to and handle well. Then the researchers who formulate these problems bring their own human bias into the problem. That could either fix the problem or make it worse, depending on the motivation of the researchers and how focused they are on solving this particular problem. Lastly, let us assume that all of these things worked out really well. OK? The researchers were unbiased, the dataset completion problem was solved.

The algorithms were modeled correctly. Then you have this AI system that is minimally biased. There’s no such thing as unbiased; it’s minimally biased. Then, you take it and apply it in the real world. And real-world data is always going to drift and move and vary. So, you have to pay close attention and monitor these systems when they’re deployed in the real world, to see that they remain minimally biased. And you have to take corrective actions as well, to correct for this bias as it happens in the real world.
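The post-deployment monitoring Mahadevan describes can be sketched as a simple drift check. This is a minimal, hypothetical example (the bin edges, the threshold, and the data are mine, not a production system): compare the distribution of live inputs against the training distribution and flag when they diverge enough to warrant corrective action.

```python
def histogram(values, bins):
    """Fraction of values falling in each half-open bin [bins[i], bins[i+1])."""
    counts = [0] * (len(bins) - 1)
    for v in values:
        for i in range(len(bins) - 1):
            if bins[i] <= v < bins[i + 1]:
                counts[i] += 1
                break
    total = len(values) or 1
    return [c / total for c in counts]

def drift_score(train, live, bins):
    """Total absolute difference between the two distributions (0 = identical)."""
    h_train = histogram(train, bins)
    h_live = histogram(live, bins)
    return sum(abs(a - b) for a, b in zip(h_train, h_live))

# Toy feature: applicant age, binned coarsely.
bins = [0, 25, 50, 75, 100]
training_ages = [22, 31, 45, 38, 27, 52, 41, 36]
live_ages = [67, 71, 58, 63, 70, 74, 61, 69]  # real-world data has drifted older

score = drift_score(training_ages, live_ages, bins)
ALERT_THRESHOLD = 0.5  # hypothetical cutoff; real systems tune this empirically
print(f"drift score: {score:.2f}, corrective action needed: {score > ALERT_THRESHOLD}")
```

When the score crosses the threshold, that is the signal to retrain or rebalance, which is the "corrective action" step in the real world.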


Karen Roby: I think people hear a lot about bias and they think they know what that means. But what does it really mean, when bias exists in an AI?

Mohan Mahadevan: In order to understand the consequences, let’s look at all the stakeholders in the equation. You have a company that builds a product based on AI. And then you have a consumer that consumes that product, which is driven by AI. So let’s look at both sides, and the consequences are very different on both sides.

On the human side, if I get a loan rejected, it’s terrible for me. Right? I’m from India, and even if an AI system was proven to be fair for all Indian people, but my loan gets rejected, I don’t care that it’s fair for Indian people as a group. It affects me very personally and very deeply. So, as far as the individual consumer goes, individual fairness is a very critical component.

As far as the companies, the regulators and the governments go, they want to make sure that no company is systematically excluding any group. So they don’t care so much about individual fairness; they look at group fairness. People tend to think of group fairness and individual fairness as separate things, as if solving the group problem means you’re OK. But the reality is, when you look at it from the perspective of the stakeholders, there are very different consequences.
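The group-versus-individual distinction above can be made concrete with a toy example (the applicants and numbers are invented for illustration): an approval rule can satisfy group fairness, meaning equal approval rates across groups, while a specific individual is still rejected. The group metric says nothing about that one person's outcome.

```python
# Hypothetical loan decisions: two groups, equal approval rates.
applicants = [
    {"id": 1, "group": "A", "approved": True},
    {"id": 2, "group": "A", "approved": False},
    {"id": 3, "group": "B", "approved": True},
    {"id": 4, "group": "B", "approved": False},
]

def approval_rate(apps, group):
    """Fraction of a group's applicants who were approved."""
    members = [a for a in apps if a["group"] == group]
    return sum(a["approved"] for a in members) / len(members)

rate_a = approval_rate(applicants, "A")
rate_b = approval_rate(applicants, "B")
group_fair = rate_a == rate_b  # demographic parity holds: both rates are 0.5
rejected = [a["id"] for a in applicants if not a["approved"]]

print(f"group-fair: {group_fair}, yet applicants {rejected} were still rejected")
```

This is why regulators auditing group-level rates and a consumer whose loan was denied can look at the same system and reach opposite conclusions about its fairness.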

Karen Roby: We’ll flip the script a little bit here, Mohan. In terms of the positives with AI, what excites you the most?


Mohan Mahadevan: There are just so many things that excite me. But with regard to bias itself, I’ll tell you. Whenever a human being is making a decision on anything, whether it be a loan or an admission or whatever, there’s always going to be conscious and unconscious bias within each human being. And so, if you think of an AI that looks at the behavior of a large number of human beings and explicitly excludes the bias from all of them, the possibility for a machine to be truly minimally biased is very high. And this is exciting, to think that we might live in a world where machines actually make decisions that are minimally biased.

Karen Roby: It definitely impacts us all in one way or another, Mohan. Wrapping up here, there are a lot of people who are scared of AI. Anytime you take people, humans, out of the equation, it’s a little bit scary.

Mohan Mahadevan: Yeah. I think we should all be scared. I think this is not something that we should take lightly. And we should ask ourselves the hard questions, as to what consequences there can be of proliferating technology for the sake of proliferating technology. So, it’s a mixed bag, I wish I had a simple answer for you, to say, “This is the answer.” But, overall, if we look at machines like the washing machine, or our cars, or our little Roombas that clean our apartments and homes, there’s a lot of really nice things that come out of even AI-based technologies today.

Those are examples of what we think of as old-school technologies that actually use a lot of AI today. Your Roomba, for example, uses a lot of AI. So it certainly makes our life a lot easier. The convenience of opening a bank account from the comfort of your home, in these pandemic times, oh, that’s nice. AI enables that. So I think there’s a lot of reason to be excited about the positive aspects of AI.

The scary parts I think come from several different aspects. One is bias-related. When an AI system is trained poorly, it can generate all kinds of systematic and random biases. That can cause detrimental effects on a per-person and on a group level. So we need to protect ourselves against those kinds of biases. But in addition to that, when it is indiscriminately used, AI can also lead to poor behaviors on the part of humans. So, at the end of the day, it’s not the machine that’s creating a problem, it’s how we react to the machine’s behavior that creates bigger problems, I think.

Both of those areas are important. The machines give us good things, but they also struggle with bias when humans don’t build them right. And when humans use them indiscriminately and in the wrong way, they can create other problems as well.
