A little over a week after the fervor surrounding Google’s involvement in the Department of Defense’s Project Maven, a program that applies machine learning to the analysis of drone footage, showed signs of abating, another machine learning controversy returned to the headlines: local law enforcement deploying Amazon’s Rekognition, a computer vision service with facial recognition capabilities.
In a letter addressed to Amazon CEO Jeff Bezos, a coalition of 19 shareholder groups expressed concern that Rekognition’s facial recognition capabilities will be misused in ways that “violate [the] civil and human rights” of “people of color, immigrants, and civil society organizations.” The letter also warned that the service sets the stage for sales of the software to foreign governments and authoritarian regimes.
Amazon, for its part, said in a statement that it will “suspend … customer’s right to use … services [like Rekognition]” if it determines those services are being “abused.” It has so far declined, however, to define the bright-line rules that would trigger a suspension.
AI ethics is a nascent field. Consortia and think tanks like the Partnership on AI, Oxford University’s AI Code of Ethics project, Harvard University’s AI Initiative, and AI4All have worked to establish preliminary best practices and guidelines. But Francesca Rossi, IBM’s global leader for AI ethics, believes there’s more to be done.
“Each company should come up with its own principles,” she told VentureBeat in a phone interview. “They should spell out their principles according to the space that they’re in.”
There’s more at stake than government contracts. As AI researchers at tech giants like Google, Microsoft, and IBM turn their attention to health care, the opacity of machine learning algorithms risks alienating the very people who stand to benefit: patients.
People might have misgivings, for example, about systems that forecast a patient’s odds of survival if the systems don’t make clear how they’re drawing their conclusions. (One such AI from the Google Brain team takes a transparent approach, showing which PDF documents, handwritten charts, and other data informed its results.)
“There’s a difference between a doctor designing therapy for a patient [with the help of AI] and algorithms that can recognize books,” Rossi explained. “We often don’t even recognize our biases when we’re making decisions, [and] these biases can be injected into the training data sets or into the model.”
Already, opaque data collection practices have landed some AI researchers in hot water. Last year, the Information Commissioner’s Office, the U.K.’s top privacy watchdog, ruled that the country’s National Health Service improperly shared the records of 1.6 million patients with Alphabet subsidiary DeepMind as part of an AI field trial.