
OpenAI’s Altman and other AI giants back warning of advanced AI as ‘extinction’ risk

by WeeklyAINews

Make way for another headline-grabbing AI policy intervention: Hundreds of AI scientists, academics, tech CEOs and public figures — from OpenAI CEO Sam Altman and DeepMind CEO Demis Hassabis, to veteran AI computer scientist Geoffrey Hinton, MIT’s Max Tegmark and Skype co-founder Jaan Tallinn, to Grimes the musician and populist podcaster Sam Harris, to name a few — have added their names to a statement urging global attention on existential AI risk.

The statement, which is being hosted on the website of a San Francisco-based, privately funded not-for-profit called the Center for AI Safety (CAIS), seeks to equate AI risk with the existential harms posed by nuclear apocalypse, and calls for policymakers to focus their attention on mitigating what the signatories claim is “doomsday”, extinction-level AI risk.

Here’s their (deliberately brief) statement in full:

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

Per a short explainer on CAIS’ website, the statement has been kept “succinct” because those behind it are concerned to avoid their message about “some of advanced AI’s most severe risks” being drowned out by discussion of other “important and urgent risks from AI” — which, they nonetheless imply, are getting in the way of discussion about extinction-level AI risk.

However, we’ve actually heard these selfsame concerns voiced loudly, and multiple times, in recent months, as AI hype has surged off the back of expanded access to generative AI tools like OpenAI’s ChatGPT and DALL-E — leading to a surfeit of headline-grabbing discussion about the risk of “superintelligent” killer AIs. (Such as this one, from earlier this month, where statement-signatory Hinton warned of the “existential threat” of AI taking control. Or this one, from just last week, where Altman called for regulation to prevent AI destroying humanity.)

There was also the open letter signed by Elon Musk (and scores of others) back in March, which called for a six-month pause on development of AI models more powerful than OpenAI’s GPT-4 to allow time for shared safety protocols to be devised and applied to advanced AI — warning over risks posed by “ever more powerful digital minds that no one — not even their creators — can understand, predict, or reliably control”.


So, in recent months, there has actually been a barrage of heavily publicized warnings over AI risks that don’t exist yet.

This drumbeat of hysterical headlines has arguably distracted attention from deeper scrutiny of existing harms. Such as the tools’ free use of copyrighted data to train AI systems without permission or consent (or payment); or the systematic scraping of online personal data in violation of people’s privacy; or the lack of transparency from AI giants vis-à-vis the data used to train these tools. Or, indeed, baked-in flaws like disinformation (“hallucination”) and risks like bias (automated discrimination). Not to mention AI-driven spam!

It’s certainly notable that, after a meeting last week between the UK prime minister and a number of leading AI execs, including Altman and Hassabis, the government appears to be shifting tack on AI regulation — with a sudden keen interest in existential risk, per the Guardian’s reporting.

Talk of existential AI risk also distracts attention from problems related to market structure and dominance, as Jenna Burrell, director of research at Data & Society, pointed out in this recent Columbia Journalism Review article reviewing media coverage of ChatGPT — where she argued we need to move away from focusing on red herrings like AI’s potential “sentience” toward covering how AI is further concentrating wealth and power.

So of course there are clear commercial motives for AI giants wanting to route regulatory attention into the far-flung theoretical future, with talk of an AI-driven doomsday — as a tactic to draw lawmakers’ minds away from more fundamental competition and antitrust considerations in the here and now. And data exploitation as a tool to concentrate market power is nothing new.


Certainly it speaks volumes about existing AI power structures that tech execs at AI giants including OpenAI, DeepMind, Stability AI and Anthropic are so happy to band together and chatter when it comes to publicly amplifying talk of existential AI risk. And how much more reticent they are to get together to discuss the harms their tools can be seen causing right now.

OpenAI was a notable non-signatory to the aforementioned (Musk-signed) open letter, but a number of its employees are backing the CAIS-hosted statement (while Musk apparently is not). So the latest statement appears to offer an (unofficial) commercially self-serving reply by OpenAI (et al.) to Musk’s earlier attempt to hijack the existential AI risk narrative in his own interests (which no longer favor OpenAI leading the AI charge).

Instead of calling for a development pause, which would risk freezing OpenAI’s lead in the generative AI field, the statement lobbies policymakers to focus on risk mitigation — and it does so while OpenAI is simultaneously crowdfunding efforts to shape “democratic processes for steering AI”, as Altman put it. So the company is actively positioning itself (and applying its investors’ wealth) to influence the shape of any future mitigation guardrails, alongside ongoing in-person lobbying efforts targeting international regulators.


Elsewhere, some signatories of the earlier letter have simply been happy to double up on another publicity opportunity — inking their name to both (hi, Tristan Harris!).

But who is CAIS? There is limited public information about the organization hosting this message. However it is certainly involved in lobbying policymakers, by its own admission. Its website says its mission is “to reduce societal-scale risks from AI”, and it claims to be dedicated to encouraging research and field-building to this end, including funding research — as well as having a stated policy advocacy role.

An FAQ on the website offers limited information about who is financially backing it (saying it is funded by private donations). While, in answer to an FAQ question asking “Is CAIS an independent organization?”, it offers a brief claim to be “serving the public interest”:

CAIS is a nonprofit organization entirely supported by private contributions. Our policies and research directions are not determined by individual donors, ensuring that our focus remains on serving the public interest.

We’ve reached out to CAIS with questions.


In a Twitter thread accompanying the launch of the statement, CAIS’ director, Dan Hendrycks, expands on the aforementioned statement explainer — naming “systemic bias, misinformation, malicious use, cyberattacks, and weaponization” as examples of “important and urgent risks from AI… not just the risk of extinction”.

“These are all important risks that need to be addressed,” he also suggests, downplaying concerns that policymakers have limited bandwidth to address AI harms by arguing: “Societies can manage multiple risks at once; it’s not ‘either/or’ but ‘yes/and.’ From a risk management perspective, just as it would be reckless to exclusively prioritize present harms, it would also be reckless to ignore them as well.”

The thread also credits David Krueger, an assistant professor of Computer Science at the University of Cambridge, with coming up with the idea of having a single-sentence statement about AI risk and “collectively” helping with its development.



