With the excitement of every technological advancement comes a wave of fear and uncertainty. We’ve seen this scenario play out repeatedly since the Industrial Revolution as people wrestled with the impact of new technology on their lives and work. Today we see that fear bubble up in the wake of every AI breakthrough.
Despite huge progress in recent years, AI is still in its early days, and with that comes a level of uncertainty. That uncertainty is only compounded when glitches arise or expectations outpace reality, which leads to misunderstanding and anxiety. As an outspoken AI critic, Elon Musk capitalizes on this misunderstanding by painting pictures of a looming AI apocalypse even as he embeds powerful AI into Tesla’s vehicles. All of this shows that, to some degree, we find ourselves caught up in a dangerous and unnecessary hype cycle.
We have to reach past that unfounded fear. Here is the reality: there is no credible research today supporting these doomsday scenarios. They are compelling fictions. I enjoyed watching The Terminator just like many other kids my age, but these entertaining scenarios distract us from addressing the immediate threats posed by AI.
We face major issues around bias and diversity that are much more human, and much more immediate, than singularities and robot uprisings: training data with embedded biases, and a lack of diversity both in the field and in our datasets.
By training AI on biased data, we might unintentionally instill our own biases and prejudices in AI. Left unchecked, those biases will lead to AI that benefits some at the expense of others. Without increasing the diversity of the field, a narrow group will have outsized influence over the hidden decisions behind the creation of AI. As AI integrates into decision-making processes that shape individual lives (hiring, loan applications, judicial review, and medical decisions), we will need to be vigilant against it absorbing our worst tendencies.
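A toy sketch can make that mechanism concrete. In the hypothetical data below, past hiring decisions applied a stricter bar to one group, and a naive "model" that simply learns hire rates from those labels reproduces the gap. Every name, group, and threshold here is invented for illustration only:

```python
import random

random.seed(0)

# Hypothetical "historical hiring" data: each applicant has a skill score
# in [0, 1) and a group label. Historical decisions were biased: equally
# skilled applicants in group B faced a higher bar than group A.
def historical_decision(skill, group):
    threshold = 0.5 if group == "A" else 0.7  # biased bar for group B
    return skill > threshold

applicants = [(random.random(), random.choice("AB")) for _ in range(10_000)]
labeled = [(skill, group, historical_decision(skill, group))
           for skill, group in applicants]

# A naive stand-in for any classifier trained on these labels: it just
# learns the observed hire rate per group.
def hire_rate(group):
    outcomes = [hired for _, g, hired in labeled if g == group]
    return sum(outcomes) / len(outcomes)

print(f"learned hire rate, group A: {hire_rate('A'):.2f}")
print(f"learned hire rate, group B: {hire_rate('B'):.2f}")
```

Nothing in this sketch inspects skill fairly; the gap exists only because the labels encoded it, which is exactly how a real model can launder historical bias into seemingly objective predictions.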