AI is not dangerous, but human bias is

AI is rapidly becoming part of the fabric of our daily lives as it moves out of academia and research labs and into the real world.

I’m not concerned about AI superintelligence “going rogue” and challenging the survival of the human race – that’s science fiction and unsupported by any scientific research today. But I do believe we have to think about any unintended consequences of using this technology.

Kathy Baxter, Salesforce’s Ethical AI Practice Architect, put the problem well. Pointing out that AI is not sentient but merely a tool, and therefore morally neutral, she reminded us that its use depends on the criteria we humans apply to its development: “While AI has the potential to do tremendous good, it can also have the potential for unknowingly harming individuals.”

Unmanaged AI is a mirror for human bias

One way that AI can cause harm is when algorithms reflect the human biases embedded in the datasets that organizations collect. The effects of those biases can compound in the AI era, as the algorithms themselves continue to “learn” from the data.

Let’s imagine, for example, that a bank wants to predict whether it should give someone a loan. Let’s also imagine that in the past, this particular bank hasn’t given as many loans to women or people from certain minorities.

Those historical patterns will be present in that bank’s dataset – which could lead an AI algorithm to conclude that women or people from minority groups are greater credit risks and should therefore not be given loans.

In other words, the lack of data on loans to certain people in the past could have an impact on how the bank’s AI program will treat their loan applications in the future. The system could pick up a bias and amplify it – or at the very least, perpetuate it.
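To make that mechanism concrete, here is a minimal sketch in Python – using entirely synthetic data and a hypothetical protected-group label, not any real bank’s records – of how a classifier trained on historically skewed approval decisions reproduces the disparity, even for applicants with identical incomes:

    # Minimal sketch with synthetic, hypothetical data -- not a real lending system.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000

    # Hypothetical applicants: income is the legitimate signal; the protected
    # attribute (0 = majority group, 1 = minority group) is independent of income.
    income = rng.normal(50, 15, n)
    group = rng.integers(0, 2, n)

    # Historical labels: approvals were driven by income, but the minority group
    # was systematically approved less often -- the bias baked into the data.
    p_approve = 1 / (1 + np.exp(-(income - 50) / 10)) * np.where(group == 1, 0.6, 1.0)
    approved = rng.random(n) < p_approve

    # Naively train on the raw history, protected attribute included.
    model = LogisticRegression().fit(np.column_stack([income, group]), approved)

    # The model has learned the historical disparity: at an identical income,
    # the predicted approval probability differs sharply by group.
    for g in (0, 1):
        p = model.predict_proba([[50.0, g]])[0, 1]
        print(f"group {g}: predicted approval probability at income 50 = {p:.2f}")

Note that simply dropping the group column would not necessarily fix this: other features can act as proxies for it – which is why the makeup of the whole dataset matters.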

Of course, AI algorithms could also be gamed by explicit prejudice, where someone curates the data in an AI system in a way that excludes, say, women of colour from being considered for loans.

Either way, AI is only as good as the data – specifically, the “training data” – we give it. All this means it’s vital for anyone involved with training AI programs to consider just how representative any training data they use actually is.

As Kathy put it, by simply plucking data from the internet to train AI programs, there’s a good chance that we will “magnify the stereotypes, the biases, and the false information that already exist”.
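One practical first step is a simple representativeness check before any training happens: compare the group makeup of the training sample against the population the model will serve. The sketch below is illustrative only – the group labels, counts, reference shares, and the 80% flagging threshold are all assumptions, not a standard:

    # Minimal sketch of a representativeness check on hypothetical training data.
    from collections import Counter

    training_groups = ["A"] * 800 + ["B"] * 150 + ["C"] * 50   # hypothetical sample
    reference = {"A": 0.60, "B": 0.25, "C": 0.15}              # assumed population shares

    counts = Counter(training_groups)
    total = sum(counts.values())
    for grp, expected in reference.items():
        observed = counts.get(grp, 0) / total
        flag = "UNDER-REPRESENTED" if observed < 0.8 * expected else "ok"
        print(f"{grp}: {observed:.0%} of training data vs {expected:.0%} expected -> {flag}")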

Embedding diversity and removing bias

So how do we manage the threats of biased AI? It starts with humans proactively identifying and managing any such bias – which includes training AI systems to identify it.
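As one illustration of what identifying bias can look like in code, the sketch below computes a disparate impact ratio over a model’s decisions – the positive-outcome rate for a protected group divided by the rate for everyone else. The arrays are hypothetical, and the 0.8 cut-off (the “four-fifths rule” used in some fairness audits) is one heuristic among the several metrics a real audit would combine:

    # Minimal sketch of one bias check on hypothetical model outputs.
    import numpy as np

    def disparate_impact(predictions: np.ndarray, group: np.ndarray) -> float:
        """Ratio of positive-outcome rates: protected group vs everyone else."""
        return predictions[group == 1].mean() / predictions[group == 0].mean()

    # Hypothetical decisions: 30% approvals for the protected group, 50% for the rest.
    preds = np.array([1] * 30 + [0] * 70 + [1] * 50 + [0] * 50)
    grp = np.array([1] * 100 + [0] * 100)

    ratio = disparate_impact(preds, grp)
    print(f"disparate impact ratio = {ratio:.2f}")   # 0.60 -- below the common 0.8 threshold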

Article Credit: Weforum
