
The future of AI is unknown. That’s the problem with tech ‘prophets’ influencing AI policy | The AI Beat

by WeeklyAINews



The skies above where I live near New York City were noticeably apocalyptic last week. But to some in Silicon Valley, the fact that we wimpy East Coasters were dealing with a sepia hue and a scent profile that blended cigar bar, campfire and old-school happy hour was nothing to worry about. After all, it's AI, not climate change, that appears to be top of mind for this cohort, who believe future superintelligence is either going to kill us all, save us all, or almost kill us all if we don't save ourselves first.

Whether they predict the "existential risks" of runaway AGI that could lead to human "extinction" or foretell an AI-powered utopia, this group seems to have equally strong, fixed opinions (for now, anyway; perhaps they're "loosely held") that easily tip into biblical-prophet territory.

For example, back in February OpenAI published a blog post called "Planning for AGI and Beyond" that some found fascinating but others found "gross."

The manifesto-of-sorts seemed comically Old Testament-like to me, especially as OpenAI had just accepted an estimated $10 billion investment from Microsoft. The blog post offered revelations, foretold events, warned the world of what's coming, and presented OpenAI as the trustworthy savior. The grand message seemed oddly disconnected from its product-focused PR around how tools like ChatGPT or Microsoft's Bing could help in use cases like search results or essay writing. In that context, considering how AGI could "empower humanity to maximally flourish in the universe" made me giggle.

New AI prophecies keep coming

But the prophecies keep coming: Last week, on the same day New Yorkers watched the Empire State Building choked by smoke, venture capitalist Marc Andreessen published a new essay, "Why AI Will Save the World," in which he casts himself as a soothsayer, predicting an AI utopia as ideal as the Garden of Eden.


"Fortunately, I am here to bring the good news: AI will not destroy the world, and in fact may save it," Andreessen wrote. He quickly launched into how that will happen, including the fact that every child will have an AI tutor that is "infinitely patient, infinitely compassionate, infinitely knowledgeable, infinitely helpful." This AI tutor, clearly a far cry from any human teacher who is not infinitely anything, will loyally remain by each child's side throughout their development, he explained, "helping them maximize their potential with the machine version of infinite love." AI, he claimed, could turn Earth into a perfect, nurturing womb: "Rather than making the world harsher and more mechanistic, infinitely patient and sympathetic AI will make the world warmer and nicer," he said.

While some immediately compared Andreessen's essay to Neal Stephenson's futuristic novel The Diamond Age, his vision still reminded me of a mystical Promised Land that offers happiness and abundance for all eternity: a far more appealing, though equally unlikely, scenario than the one where humanity is destroyed because a rogue AI leads the world into a paperclip apocalypse.

Confident AI forecasts are not facts

The problem with all of these confident forecasts is that no one knows the future of AI, let alone how, or when, artificial general intelligence will emerge. That is very different from issues like climate change, which has "unequivocal evidence" behind it and hard data behind rates of change that go far beyond observing the orange skies over Manhattan.


That, in turn, is a problem for societies looking to develop appropriate regulations to address AI risks. If the tech prophets are the ones with the power to influence AI policymakers, will we end up with regulations that focus on an unlikely apocalypse or unicorn-laden utopia, rather than ones that tackle near-term risks related to bias, misinformation, labor shifts and societal disruption?

Are Big Tech CEOs who are open about their efforts to build AGI the right ones to speak with world leaders about their willingness to address AI risks? Are VCs like Marc Andreessen, who is known for leading the charge toward Web3 and crypto, the right influencers to corral the public toward whatever AI future awaits us?

Should preppers be leading the way?

In a New York Times article yesterday, author David Streitfeld pointed out that apocalyptic talk is not new to Silicon Valley, with stocked bunkers a common possession of many tech executives. In a 2016 article, he noted, Mr. Altman said he was amassing "guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force and a big patch of land in Big Sur I can fly to."

Now, Streitfeld wrote, this group is prepping for the Singularity. "They want to think they're sensible people making sage comments, but they sound more like monks in the year 1000 talking about the Rapture," said Baldur Bjarnason, author of "The Intelligence Illusion," a critical examination of AI. "It's a bit scary," he said.

Yet some of these are the very leaders heading the charge to deal with AI risk and safety. For example, two weeks ago the UK prime minister, Rishi Sunak, acknowledged the "existential" risk of artificial intelligence after meeting with the heads of OpenAI, DeepMind and Anthropic, three AI research labs with ongoing efforts to develop AGI.


What's concerning is that this could lead to displaced visibility and resources for researchers working on the present-day risks of AI, Sara Hooker, formerly of Google Brain and now head of Cohere for AI, told me recently.

"While it's good for some people in the field to work on long-term risks, the number of those people is currently disproportionate to the ability to accurately estimate that risk," she said. "I wish more of the attention was placed on the current risk of our models that are deployed every day and used by millions of people. Because for me, that's what a lot of researchers work on day in, day out."



