Leave A.I. Alone

December was a big month for advocates of regulating artificial intelligence. First, a bipartisan group of senators and representatives introduced the Future of A.I. Act, the first federal bill focused solely on A.I. It would create an advisory committee to make recommendations about A.I., on topics including the technology’s effect on the American work force and strategies for protecting the privacy rights of those it affects. Then the New York City Council approved a first-of-its-kind bill that, once signed into law, will create a task force to examine the city’s own use of automated decision systems, with the ultimate goal of making its use of algorithms fairer and more transparent.

Perhaps not coincidentally, these efforts overlap with increasing calls to regulate artificial intelligence, along with claims by the likes of Elon Musk and Stephen Hawking that it poses a threat to humanity’s literal survival.

But this push for broad legislation to regulate A.I. is premature.

To begin with, even experts can’t agree on what, exactly, constitutes artificial intelligence. Take the recent report released by the AI Now Institute, aimed at creating a framework for implementing A.I. ethically. Though itself focused on A.I., the report acknowledges that no commonly accepted definition of the term exists, describing it loosely as “a broad assemblage of technologies … that have traditionally relied on human capacities.”

“Artificial intelligence” is all too frequently used as shorthand for software that simply does what humans used to do. But replacing human activity is precisely what new technologies accomplish — spears replaced clubs, wheels replaced feet, the printing press replaced scribes, and so on. What’s new about A.I. is that this technology isn’t simply replacing human activities, external to our bodies; it’s also replacing human decision-making, inside our minds.

The challenges created by this novelty should not obscure the fact that A.I. itself is not one technology, or even one singular development. Regulating an assemblage of technologies we can’t clearly define is a recipe for poor laws and even worse technology.

Indeed, the challenges A.I. poses aren’t entirely new. We’ve successfully regulated similar technologies in the past; we just didn’t call them “artificial intelligence.” In the 1960s and 1970s, for example, the financial industry began to rely on complex statistical modeling and huge computerized databases to make credit decisions, automating what had been a manual process of approving or denying credit to borrowers.

The ethical and legal challenges associated with these models so captivated the public’s attention that in the summer of 1970, Newsweek ran a cover story titled “Is Privacy Dead?” detailing the “massive flanking attack” of computers on modern society. Growing awareness of that threat led to broad appeals that echo modern proposals to regulate A.I. “Eventually we have to set up an agency to regulate the computers,” Senator Sam Ervin of North Carolina said in 1970.

Article Credit: The New York Times
