
Microsoft is cutting off some sales over AI ethics, top researcher Eric Horvitz says


Concerns over the potential abuse of artificial intelligence technology have led Microsoft to cut off some of its customers, says Eric Horvitz, technical fellow and director at Microsoft Research Labs.

Horvitz laid out Microsoft’s commitment to AI ethics today during the Carnegie Mellon University – K&L Gates Conference on Ethics and AI, held in Pittsburgh.

One of the key groups focusing on the issue at Microsoft is the Aether Committee, where “Aether” stands for AI and Ethics in Engineering and Research.

“It’s been an intensive effort … and I’m happy to say that this committee has teeth,” Horvitz said during his lecture.

He said the committee reviews how Microsoft’s AI technology could be used by its customers, and makes recommendations that go all the way up to senior leadership.

“Significant sales have been cut off,” Horvitz said. “And in other sales, various specific limitations were written down in terms of usage, including ‘may not use data-driven pattern recognition for use in face recognition or predictions of this type.’ ”

Horvitz didn’t go into detail about which customers or specific applications have been ruled out as the result of the Aether Committee’s work, although he referred to Microsoft’s human rights commitments.

Over the past year or so, the company has been providing government and industry customers with a cloud-based suite of Microsoft Cognitive Services, including face recognition and emotion recognition.
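For context, here is a minimal sketch of what a call into that suite might look like, using the Face API’s detect endpoint to request per-face emotion scores. The region, subscription key, and image URL below are placeholders for illustration, not details from the article.

```python
# Hypothetical sketch: calling the Cognitive Services Face API "detect"
# endpoint and requesting emotion attributes. The endpoint region,
# subscription key, and image URL are placeholders.
import requests

SUBSCRIPTION_KEY = "your-cognitive-services-key"  # placeholder credential
ENDPOINT = "https://westus.api.cognitive.microsoft.com/face/v1.0/detect"

def detect_emotions(image_url: str) -> list:
    """Send an image URL to the Face API and return per-face emotion scores."""
    response = requests.post(
        ENDPOINT,
        params={"returnFaceAttributes": "emotion"},
        headers={
            "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
            "Content-Type": "application/json",
        },
        json={"url": image_url},
    )
    response.raise_for_status()
    # Each detected face comes back with confidence scores per emotion,
    # e.g. {"happiness": 0.92, "anger": 0.01, ...}.
    return [face["faceAttributes"]["emotion"] for face in response.json()]

if __name__ == "__main__":
    print(detect_emotions("https://example.com/photo.jpg"))
```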

Ethical issues surrounding AI and large-scale data analysis have gained much more attention in the wake of reported lapses in data privacy safeguards by Facebook. That company’s CEO, Mark Zuckerberg, is scheduled to address the controversy this week during high-profile congressional hearings.

One of the big concerns has to do with how Cambridge Analytica took advantage of Facebook data to target voters during the 2016 presidential campaign. Horvitz listed voter manipulation as one of the potential misuses of AI applications — along with facilitating human rights violations, raising the risk of death or serious injury, or denying resources and services.

Addressing such concerns might require new regulatory schemes. Horvitz said he could imagine a role for “an Underwriters Laboratories or an FDA … somebody looking at this as best practice.”


Article Credit: GW
