Artificial intelligence, whether we realize it or not, is a part of our daily lives and is here to stay. But according to a study out of the Center for the Governance of AI at Oxford University, Americans aren’t so sure how they feel about AI or what AI advancements will mean in our day-to-day lives. Kelsey Piper wrote about the report for Vox; she spoke to Marketplace host Kai Ryssdal about her reporting and whether AI researchers agree with the report’s findings. The following is an edited transcript of their conversation.
Kai Ryssdal: I’m going to let you interpret this study for us, because you look at this stuff and you report on it widely. I do want to start, though, on a somewhat hesitant note, and that is to point out that one of these researchers told you that people that they talked to in this survey are not convinced that AI, advanced artificial intelligence, is going to be to the benefit of humanity. That’s slightly troubling.
Kelsey Piper: Yeah, I do think that’s slightly troubling. One thing that’s going on there is a lot of skepticism about the AI systems we have now and whether they’re helping. And then separately from that, a lot of skepticism about what AI is going to look like in 10 years.
Ryssdal: OK, so let’s talk about what AI we have now. Are we talking, like, Siri and sort of the semi-autonomous car stuff that we’ve got going on, is that what you’re talking about?
Piper: So AI today, I think people point to Siri; they point to the translation services that have gotten a lot better over the last couple of years; to semi-autonomous vehicles; also to the algorithm that Amazon briefly debuted to identify good hires, which they found was actually using gender to decide who was likely to be a software engineer. Also the algorithms that send your notifications on Twitter and Facebook, which have been criticized for being addictive and encouraging people to spend more time on their phones than they’d like.
Ryssdal: OK, so that’s what we’ve got now with all of those problems, and I confess I had not heard about the gender thing at Amazon. Are we counting on AI to get smarter by itself, or are we going to improve the inputs to the algorithms as we move toward advanced AI?
Piper: I think there are certainly a lot of experts who are sort of warning us here that yeah, if AI safety research is moving slower than AI capabilities research, then we’re going to have extremely powerful systems that still aren’t doing what we actually want them to do and are sort of executing on their badly specified goals in ways that can be tremendously destructive.
Ryssdal: OK, wait. Let’s be clear about this. The people working on AI safety, we’ll call it, are not the same people working on making AI smarter?
Piper: So there are definitely people whose work involves both. But there are a lot of people who are working on making AI smarter who are not working on making AI safer. And the more conservative people I know working on AI safety tend to actually say, “We don’t think we should be making AI smarter right now. We think we need to sort of work on transparency and interpretability, that is, understanding what algorithms are doing. We need to get that stuff right before we do our capabilities research.”