AI and the New Dimensions of Hyperreality


Tesla’s Elon Musk is battling on every front of our rapidly unfolding future and may have found the ultimate means of insulting our (artificial) intelligence.

Elon Musk, the CEO of Tesla and founder of SpaceX, famously warned us that artificial intelligence could lead to the extinction of humanity. That explains why he founded and funded the nonprofit Open AI.

Open AI appears to have defined for itself a two-fold mission: to create a monster and to protect the public from it. Even as it boasts about the unparalleled prowess of its new AI model, GPT-2, Open AI has sent us a warning: “Due to our concerns about malicious applications of the technology, we are not releasing the trained model.”

Human intelligence might interpret it this way: Our product is so powerful we cannot put it in your or anyone else’s hands — not for the moment. While you’re waiting, you can have a lighter version, knowing that we are protecting you against the evil people of the world. In other words, there is nothing to fear. Open AI is apparently committed to following Google’s recently abandoned maxim: “Don’t be evil.”

Jack Clark, head of policy at Open AI, explained why this was important: “We are trying to develop more rigorous thinking here. We’re trying to build the road as we travel across it.”

Here is today’s 3D definition:

Rigorous:

For innovative thinkers and experimenters like Elon Musk, not totally casual

Contextual note

The idea of building a road as one travels across it could only come from one of The Daily Devil’s Dictionary’s favorite hyperreal heroes, Elon Musk. With hyperreal innovators, you never know what to think, even if you can’t help reacting. That’s what they’re good at: making people react and believe they have changed the world or are about to do so.

The mission statement on the Open AI website says: “Discovering and enacting the path to safe artificial general intelligence.” So which is it: discovering (i.e., inventing something new) or protecting people from a category of human activity they call “artificial general intelligence”?

As with Albert Einstein’s theory of relativity, it sounds as if there are two distinct things: general and special. Artificial general intelligence (AGI) is what threatens humanity because it can potentially be applied for any purpose. Instead of conveniently solving specific problems, it could have a direct impact on the way humans understand the world, or rather think that they understand the world.


Article Credit: FO
