AI Impact: Tech companies Microsoft and Google are sounding the alarm on just how harmful artificial intelligence can be for investors and brands alike. A.I. remains one of the most disputed areas of technology, yet it is becoming increasingly commonplace as companies incorporate it across their platforms. While critics call for companies to justify their use of the technology, and in some cases for an outright ban, A.I. continues to be a billion-dollar industry, with many tech companies willing to risk a tarnished brand reputation for lucrative profits.
Why This Matters
In its recently released 2018 SEC annual report, Google highlighted brand risks around A.I. that could impact the company’s bottom line: “New products and services, including those that incorporate or utilize artificial intelligence and machine learning, can raise new or exacerbate existing ethical, technological, legal, and other challenges, which may negatively affect our brands and demand for our products and services and adversely affect our revenues and operating results.”
Last August, Microsoft wrote in its SEC annual report that “A.I. algorithms may be flawed. Datasets may be insufficient or contain biased information. If we enable or offer AI solutions that are controversial because of their impact on human rights, privacy, employment, or other social issues, we may experience brand or reputational harm.”
One of the most controversial applications of A.I. is facial recognition technology, and Microsoft is asking governments around the world to regulate its use. As CultureBanx previously noted, the company wants to ensure the technology, which has higher error rates for African Americans, does not invade personal privacy or become a tool for discrimination or surveillance. Specifically, the tech giant’s cloud service was much less accurate at detecting the gender of black women than of white men in photos.
Combating A.I. Bias:
Research shows commercial artificial intelligence systems tend to have higher error rates for women and black people. Some facial recognition systems misclassified light-skinned men only 0.8% of the time, yet had an error rate of 34.7% for dark-skinned women. Just imagine surveillance being carried out with these flawed algorithms.
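The disparity above boils down to a simple per-group metric: the share of faces in each demographic group that a classifier gets wrong. A minimal sketch of how such an audit could be computed, assuming hypothetical group names and prediction records (the numbers below are illustrative, not the actual study data):

```python
# Sketch of a per-group error-rate audit for a gender classifier.
# All records below are hypothetical illustrations, not real study data.
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, true_label, predicted_label) tuples.
    Returns each group's fraction of misclassified examples."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, truth, pred in records:
        totals[group] += 1
        if pred != truth:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical audit set: one group is rarely misclassified,
# the other frequently is.
sample = [
    ("lighter-skinned men", "male", "male"),
    ("lighter-skinned men", "male", "male"),
    ("lighter-skinned men", "male", "male"),
    ("lighter-skinned men", "male", "female"),
    ("darker-skinned women", "female", "male"),
    ("darker-skinned women", "female", "female"),
    ("darker-skinned women", "female", "male"),
    ("darker-skinned women", "female", "female"),
]

for group, rate in error_rates_by_group(sample).items():
    print(f"{group}: {rate:.1%} error rate")
```

Reporting accuracy per group rather than overall is what surfaces this kind of bias: a single aggregate accuracy number can look strong while one group bears nearly all of the errors.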