A.I. and Diversity: How to create more diverse workplaces and how to use artificial intelligence ethically are among the more challenging dilemmas facing business and government.
While the issues may appear to have little in common besides their complexity, they do overlap. Amazon, for example, recently abandoned a hiring tool that used artificial intelligence because, according to news reports, it favored men.
These topics were the subject of two separate task forces that met at last week’s DealBook conference. Each task force, composed of about a dozen experts and industry leaders, met for an hour, emerging with some specific guidelines and focus areas that were shared with the conference and could be taken back to companies and other organizations for discussion.
Companies must recognize that algorithms are not neutral but are created by humans with their own biases and beliefs, and they must make every effort to eliminate those biases.
It is far too easy to assume that technology has an objectivity that humans don’t. But the reality is that “artificial intelligence and machine learning and algorithms in general are designed by none other than us — people,” said Dipayan Ghosh, a fellow at the Shorenstein Center on Media, Politics and Public Policy at the Harvard Kennedy School.
“A.I. takes input data and races off to make inferences and decisions about the world at lightning pace,” he said, and those inferences will include unacknowledged biases unless people are willing to recognize them and invest the time and money to weed them out.
Frida Polli, co-founder and chief executive of Pymetrics, which uses A.I. to recruit employees for companies, showed how biases could arise and how they could be diminished.
Algorithms used in A.I. are created using training data sets. If a training data set overrepresents any gender or ethnicity, the features that distinguish that group may be overweighted.
To confront the problem, she said, her company tested an algorithm on a reference group of people of different genders and ethnicities to check for bias. If there is an imbalance in the test run — if, for example, far more men or white candidates are predicted to be good hires than women or candidates of other races — then “we can look at de-weighting features,” she said, adjusting the algorithm accordingly.
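The audit-and-de-weight process described above can be sketched in a few lines of code. Everything here is hypothetical: the toy linear scoring model, the feature names, the balanced reference group, and the cutoff are all invented for illustration, and Pymetrics' actual method is not public in this detail.

```python
# Hypothetical sketch of the bias audit described above: score a balanced
# reference group, compare selection rates across demographic groups, and
# de-weight a biased feature. All names and numbers are invented.

def score(candidate, weights):
    """Weighted sum of a candidate's features (a toy linear model)."""
    return sum(weights[f] * v for f, v in candidate["features"].items())

def selection_rates(candidates, weights, cutoff):
    """Fraction of each group scoring at or above the hiring cutoff."""
    rates = {}
    for group in {c["group"] for c in candidates}:
        members = [c for c in candidates if c["group"] == group]
        hired = [c for c in members if score(c, weights) >= cutoff]
        rates[group] = len(hired) / len(members)
    return rates

# Balanced reference group: equal numbers from each demographic group.
# "proxy" stands in for a feature correlated with group membership
# (e.g. wording overrepresented in one group's resumes).
reference = [
    {"group": "A", "features": {"skill": 0.9, "proxy": 1.0}},
    {"group": "A", "features": {"skill": 0.4, "proxy": 1.0}},
    {"group": "B", "features": {"skill": 0.9, "proxy": 0.0}},
    {"group": "B", "features": {"skill": 0.4, "proxy": 0.0}},
]

weights = {"skill": 1.0, "proxy": 0.5}
cutoff = 0.8

# Before de-weighting: group A benefits from the proxy feature.
before = selection_rates(reference, weights, cutoff)

# De-weight the proxy feature and re-run the audit.
weights["proxy"] = 0.0
after = selection_rates(reference, weights, cutoff)

print(before)  # group A selected at a higher rate than group B
print(after)   # equal selection rates once the proxy is removed
```

Real systems would measure the imbalance statistically over large samples and adjust weights gradually rather than zeroing a feature outright, but the loop is the same: test on a balanced reference group, detect disparity, de-weight, and re-test.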