
Ethicists fire back at ‘AI Pause’ letter they say ‘ignores the actual harms’

by WeeklyAINews

A group of well-known AI ethicists have written a counterpoint to this week’s controversial letter asking for a six-month “pause” on AI development, criticizing it for focusing on hypothetical future threats when real harms are being caused by misuse of the tech today.

Hundreds of people, including such familiar names as Steve Wozniak and Elon Musk, signed the open letter from the Future of Life Institute earlier this week, proposing that development of AI models like GPT-4 should be put on hold in order to avoid “loss of control of our civilization,” among other threats.

Timnit Gebru, Emily M. Bender, Angelina McMillan-Major and Margaret Mitchell are all major figures in the domains of AI and ethics, known (in addition to their work) for being pushed out of Google over a paper criticizing the capabilities of AI. They are currently working together at the DAIR Institute, a new research outfit aimed at studying, exposing and preventing AI-associated harms.

But they were not to be found on the list of signatories, and have now published a rebuke calling out the letter’s failure to engage with existing problems caused by the tech.

“These hypothetical risks are the focus of a dangerous ideology called longtermism that ignores the actual harms resulting from the deployment of AI systems today,” they wrote, citing worker exploitation, data theft, synthetic media that props up existing power structures, and the further concentration of those power structures in fewer hands.

The choice to worry about a Terminator- or Matrix-esque robot apocalypse is a red herring when we have, in the same moment, reports of companies like Clearview AI being used by the police to essentially frame an innocent man. No need for a T-1000 when you’ve got Ring cams on every front door accessible via online rubber-stamp warrant factories.


While the DAIR crew agree with some of the letter’s aims, like identifying synthetic media, they emphasize that action must be taken now, on today’s problems, with the remedies we have available to us:

What we need is regulation that enforces transparency. Not only should it always be clear when we are encountering synthetic media, but organizations building these systems should also be required to document and disclose the training data and model architectures. The onus of creating tools that are safe to use should be on the companies that build and deploy generative systems, which means that builders of these systems should be made accountable for the outputs produced by their products.

The current race towards ever larger “AI experiments” is not a preordained path where our only choice is how fast to run, but rather a set of decisions driven by the profit motive. The actions and choices of corporations must be shaped by regulation which protects the rights and interests of people.

It is indeed time to act: but the focus of our concern should not be imaginary “powerful digital minds.” Instead, we should focus on the very real and very present exploitative practices of the companies claiming to build them, who are rapidly centralizing power and increasing social inequities.

Incidentally, this letter echoes a sentiment I heard from Uncharted Power founder Jessica Matthews at yesterday’s AfroTech event in Seattle: “You shouldn’t be afraid of AI. You should be afraid of the people building it.” (Her solution: become the people building it.)


While it’s vanishingly unlikely that any major company would ever agree to pause its research efforts in accordance with the open letter, it’s clear from the engagement the letter received that the risks of AI, real and hypothetical alike, are of great concern across many segments of society. But if the companies won’t do it, perhaps someone will have to do it for them.

