
Artificial Intelligence Seeks an Ethical Conscience


Leading artificial-intelligence researchers gathered this week for the prestigious Neural Information Processing Systems conference have a new topic on their agenda. Alongside the usual cutting-edge research, panel discussions, and socializing: concern about AI’s power.

The issue was crystallized in a keynote from Microsoft researcher Kate Crawford Tuesday. The conference, which drew nearly 8,000 researchers to Long Beach, California, is deeply technical, swirling in dense clouds of math and algorithms. Crawford’s good-humored talk featured nary an equation and took the form of an ethical wake-up call. She urged attendees to start considering, and finding ways to mitigate, accidental or intentional harms caused by their creations. “Amongst the very real excitement about what we can do there are also some really concerning problems arising,” Crawford said.

One such problem occurred in 2015, when Google’s photo service labeled some black people as gorillas. More recently, researchers found that image-processing algorithms both learned and amplified gender stereotypes. Crawford told the audience that more troubling errors are surely brewing behind closed doors, as companies and governments adopt machine learning in areas such as criminal justice and finance. “The common examples I’m sharing today are just the tip of the iceberg,” she said. In addition to her Microsoft role, Crawford is a cofounder of the AI Now Institute at NYU, which studies the social implications of artificial intelligence.

Concern about the potential downsides of more powerful AI is apparent elsewhere at the conference. A tutorial session hosted by Cornell and Berkeley professors in the cavernous main hall Monday focused on building fairness into machine-learning systems, a particularly pressing issue as governments increasingly tap AI software. It included a reminder for researchers of legal barriers, such as the Civil Rights and Genetic Information Nondiscrimination Acts. One concern is that even when machine-learning systems are programmed to be blind to attributes such as race or gender, they may use other signals in their data, such as the location of a person’s home, as a proxy for those attributes.
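To make that proxy problem concrete, here is a minimal sketch (entirely synthetic data; the feature names are hypothetical) showing how a model that is never given a protected attribute as a feature can still recover it from a correlated stand-in such as home location:

```python
# Minimal sketch with synthetic data: dropping a protected attribute from the
# feature matrix does not remove its information if a correlated proxy remains.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000

# Hypothetical protected attribute (0 or 1); it is excluded from the features.
group = rng.integers(0, 2, size=n)

# Proxy feature: "neighborhood" agrees with group membership 90% of the time.
neighborhood = np.where(rng.random(n) < 0.9, group, 1 - group)

# A second feature that is also mildly correlated with the group.
income = rng.normal(50 + 10 * group, 5, size=n)

# The feature matrix is "blind": it contains no group column.
X = np.column_stack([neighborhood, income])
X_train, X_test, g_train, g_test = train_test_split(X, group, random_state=0)

# Diagnostic: train a model to predict the protected attribute from the
# supposedly blind features. High accuracy means the proxies leak it.
clf = LogisticRegression().fit(X_train, g_train)
print("protected attribute recovered with accuracy:", clf.score(X_test, g_test))
```

On this toy data the “blind” features recover the protected attribute with roughly 90 percent accuracy, so any model trained on them can end up treating the groups differently without ever seeing the attribute itself.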

Some researchers are presenting techniques that could constrain or audit AI software. On Thursday, Victoria Krakovna, a researcher at Alphabet’s DeepMind research group, is scheduled to give a talk on “AI safety,” a relatively new strand of work concerned with preventing software from developing undesirable or surprising behaviors, such as trying to avoid being switched off. Oxford University researchers planned to host an AI-safety-themed lunch discussion earlier in the day.

Article Credit: Wired
