Telling AI to not replicate itself is like telling teenagers to just not have…

Do humans have the capacity for safe AI? Our history shows that innovation and technological advancement are replete with unintended consequences.

Who knew that widespread social-media adoption would lead to disinformation campaigns aimed at undermining liberal democracy, when it was originally thought it would increase civic engagement? After all, AI not only enables the development of autonomous vehicles, but also autonomous weapons. Who wants to contemplate a possible future where self-aware AI becomes catatonically depressed while in possession of nuclear launch codes?

While we look forward to a future where humans use AI to enhance our existence, we need to consider what steps are being taken to get us there. In particular, we should be concerned with the fact that AI is being developed to replicate itself, potentially embedding biases into the algorithms that will underpin and drive our tomorrow—and repeating them, writ large.

The consequences of this could be dire. Because while to err is human, to truly foul things up requires a computer.

Learning to learn

AI will soon become capable of self-replication: of learning from and creating itself in its own image. What currently keeps AI from learning “too fast” and spiraling out of control is that it requires vast amounts of data on which to be trained. To train a deep-learning algorithm to recognize a cat with a cat-fancier’s level of expertise, you must first feed it tens or even hundreds of thousands of images of felines, capturing enormous variation in size, shape, texture, lighting, and orientation. It would be far more efficient if an algorithm, like a person, could form an idea of what makes a cat a cat from only a few examples; we humans don’t need to see 10,000 cats to recognize one sauntering down the street.

A Boston-based startup, Gamalon, has pioneered a technique it calls “Bayesian program synthesis” to build algorithms capable of learning from fewer examples. A probabilistic program can determine, for instance, that it’s highly probable that cats have ears, whiskers, and tails. As further examples are provided, the code behind the model is rewritten and the probabilities are tweaked. At a certain point, the program takes over and generates new models on its own. In other words, the system learns how to teach itself, rather than needing us to teach it.
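To make that intuition concrete, here is a minimal, hypothetical sketch in Python. Gamalon has not published its implementation, so this is not its Bayesian program synthesis; it is only a toy Beta-Bernoulli update over invented features (“ears,” “whiskers,” “tail”), showing how per-feature probabilities get tweaked as each new example arrives.

```python
# Toy sketch of Bayesian belief updating over concept features.
# NOT Gamalon's actual system: the features, priors, and update
# rule here are illustrative assumptions only.
from dataclasses import dataclass


@dataclass
class FeatureBelief:
    """Beta-Bernoulli belief about P(feature | cat)."""
    alpha: float = 1.0  # pseudo-count of cats showing the feature
    beta: float = 1.0   # pseudo-count of cats lacking it

    def update(self, observed: bool) -> None:
        """Conjugate update: bump the matching pseudo-count."""
        if observed:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    @property
    def probability(self) -> float:
        """Posterior mean of the Beta distribution."""
        return self.alpha / (self.alpha + self.beta)


# Start maximally uncertain about what makes a cat a cat.
cat_model = {name: FeatureBelief() for name in ("ears", "whiskers", "tail")}

# A handful of labeled examples -- orders of magnitude fewer than
# the image counts a deep network typically needs.
examples = [
    {"ears": True, "whiskers": True, "tail": True},
    {"ears": True, "whiskers": True, "tail": True},
    {"ears": True, "whiskers": True, "tail": False},  # a Manx, perhaps
]

for example in examples:
    for name, belief in cat_model.items():
        belief.update(example[name])

for name, belief in cat_model.items():
    print(f"P({name} | cat) = {belief.probability:.2f}")
```

The point of the sketch is the shape of the loop, not the arithmetic: each new example nudges a small, human-readable model, which is what lets this style of learning get by on far fewer examples than a deep network.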

Article Credit: Quartz
