ERP News
Google CEO Sundar Pichai brought good tidings to investors on parent company Alphabet’s earnings call last week. Alphabet reported $39.3 billion in revenue last quarter, up 22 percent from a year earlier. Pichai gave some of the credit to Google’s machine learning technology, saying it had figured out how to match ads more closely to what consumers wanted.

One thing Pichai didn’t mention: Alphabet is now cautioning investors that the same AI technology could create ethical and legal troubles for the company’s business. The warning appeared for the first time in the “Risk Factors” segment of Alphabet’s latest annual report, filed with the Securities and Exchange Commission the following day:

“[N]ew products and services, including those that incorporate or utilize artificial intelligence and machine learning, can raise new or exacerbate existing ethical, technological, legal, and other challenges, which may negatively affect our brands and demand for our products and services and adversely affect our revenues and operating results.”

Companies must use the risk factors portion of their annual filings to disclose foreseeable troubles to investors. That’s supposed to keep the free market operating. It also provides companies a way to defuse lawsuits claiming management hid potential problems.

It’s not clear why Alphabet’s securities lawyers decided it was time to warn investors of the risks of smart machines. Google declined to elaborate on its public filings. The company began testing self-driving cars on public roads in 2009, and has been publishing research on ethical questions raised by AI for several years.

Alphabet likes to position itself as a leader in AI research, but it was six months behind rival Microsoft in warning investors about the technology’s ethical risks. The AI disclosure in Google’s latest filing reads like a trimmed-down version of the much fuller language Microsoft put in its most recent annual SEC report, filed last August:

“AI algorithms may be flawed. Datasets may be insufficient or contain biased information. Inappropriate or controversial data practices by Microsoft or others could impair the acceptance of AI solutions. These deficiencies could undermine the decisions, predictions, or analysis AI applications produce, subjecting us to competitive harm, legal liability, and brand or reputational harm.”

Microsoft also has been investing deeply in AI for many years, and in 2016 introduced an internal AI ethics board that has blocked some contracts seen as risking inappropriate use of the technology.

Microsoft did not respond to queries about the timing of its disclosure on the risks of rogue AI. Both Microsoft and Alphabet have played prominent roles in a recent flowering of concern and research about ethical challenges raised by artificial intelligence. Both have already experienced them firsthand.

Last year, researchers found Microsoft’s cloud service was much less accurate at detecting the gender of black women than white men in photos. The company apologized and said it has fixed the problem. Employee protests at Google forced the company out of a Pentagon contract applying AI to drone surveillance footage, and it has blocked its own Photos service from searching for apes in user snaps after an incident in which black people were mistaken for gorillas.


Article Credit: Wired
