Making AI Secret Could Prevent Us From Making It Better

Should the future of our society hinge on a secret? A new report suggests that, in order to ensure the safety of a society increasingly reliant upon artificial intelligence, we need to make sure that it’s kept in the hands of a select few. Ultimately, though, that decision could do more harm than good.

In the report, 20 researchers from several future-focused organizations, including OpenAI, Endgame, the Electronic Frontier Foundation, Oxford’s Future of Humanity Institute, and the Center for a New American Security, express the fear that, in the wrong hands, AI could cause the downfall of society. In fact, the report outlines several scenarios — like smarter phishing scams, malware epidemics, and robot assassins — that haven’t happened yet, but don’t seem too far from the realm of possibility.

That’s why, they argue, AI’s inner workings may need to remain secret, keeping the technology out of bad actors’ hands. The report suggests that a regulatory agency, or the AI research community of its own volition, could do this by considering “different openness models.” These models would shift the field away from its current trend toward transparency, in which publishing algorithms or making them open source is increasingly common. Instead, the researchers recommend abstaining from, or delaying, the publication of AI findings in order to keep them away from parties that would use them with less-than-admirable intentions.

We can probably all agree that it’s not a good idea to clear the path for malicious actors to use AI. The recent spate of cyber attacks across the globe shows that such actors are out there and willing to use whatever tools are available to them.

But restricting access to AI won’t prevent evildoers from gaining access to it. In fact, it might inhibit the people trying to use it for good.

First, the researchers don’t say much about just how likely their Black Mirror-esque scenarios are, as U.K. analyst Jon Collins pointed out in a blog post on Gigaom. “We can all conjure disaster scenarios, but it is not until we apply our expertise and experience to assessing the risk, that we can prioritize and (hopefully) mitigate any risks that emerge,” Collins wrote.

Plus, artificial intelligence research is already shrouded in secrecy. Companies from Google to Microsoft to Amazon mostly keep their algorithms under the protective blanket of proprietary information. As a result, most AI researchers have been unable to replicate AI studies, which makes it hard to establish how scientifically trustworthy those studies are.


Article Credit: Futurism
