How generative AI is accelerating disinformation

by WeeklyAINews

People are more aware of disinformation than they used to be. According to one recent poll, 9 out of 10 American adults fact-check their news, and 96% want to limit the spread of false information.

But it’s becoming harder, not easier, to stem the firehose of disinformation with the arrival of generative AI tools.

That was the high-level takeaway from the disinformation and AI panel on the AI Stage at TechCrunch Disrupt 2023, which featured Sarah Brandt, the EVP of partnerships at NewsGuard, and Andy Parsons, the senior director of the Content Authenticity Initiative (CAI) at Adobe. The panelists spoke about the threat of AI-generated disinformation and potential solutions as an election year looms.

Parsons framed the stakes in fairly stark terms:

Without a core foundation and objective truth that we can share, frankly, without exaggeration, democracy is at stake. Being able to have objective conversations with other people about shared truth is at stake.

Both Brandt and Parsons acknowledged that web-borne disinformation, AI-assisted or not, is hardly a new phenomenon. Parsons referred to the 2019 viral clip of former House Speaker Nancy Pelosi (D-CA), which used crude editing to make it appear as if Pelosi were speaking in a slurred, awkward way.

But Brandt also noted that, thanks to AI, particularly generative AI, it’s becoming a lot cheaper and easier to generate and distribute disinformation at a massive scale.

She cited statistics from her work at NewsGuard, which develops a rating system for news and information websites and offers services such as misinformation tracking and brand safety for advertisers. In May, NewsGuard identified 49 news and information sites that appeared to be almost entirely written by AI tools. Since then, the company has spotted hundreds of additional unreliable, AI-generated websites.

“It’s really a volume game,” Brandt said. “They’re just pumping out hundreds, in some cases thousands, of articles a day, and it’s an ad revenue game. In some cases, they’re just trying to get a lot of content, make it onto search engines and make some programmatic ad revenue. And in some cases, we’re seeing them spread misinformation and disinformation.”

And the barrier to entry is getting lower.

Another NewsGuard study, published in late March, found that OpenAI’s flagship text-generating model, GPT-4, is more likely to spread misinformation when prompted than its predecessor, GPT-3.5. NewsGuard’s test found that GPT-4 was better at elevating false narratives in more convincing ways across a range of formats, including “news articles, Twitter threads, and TV scripts mimicking Russian and Chinese state-run media outlets, health hoax peddlers, and well-known conspiracy theorists.”

So what’s the answer to that dilemma? It’s not immediately clear.

Parsons pointed out that Adobe, which maintains a family of generative AI products called Firefly, implements safeguards, like filters, aimed at preventing misuse. And the Content Authenticity Initiative, which Adobe co-founded in 2019 with The New York Times and Twitter, promotes an industry standard for provenance metadata.

But use of the CAI’s standard is entirely voluntary, and just because Adobe is implementing safeguards doesn’t mean others will follow suit, or that those safeguards can’t or won’t be bypassed.
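To make the provenance idea concrete, here is a minimal sketch of how a signed manifest can bind an asset’s bytes and edit history together so that tampering is detectable. It is a toy illustration only: the function names are hypothetical, and the real CAI/C2PA standard uses certificate-based signatures and manifests embedded in the file itself rather than a shared HMAC key.

```python
import hashlib
import hmac
import json

# Stand-in for a publisher's private signing key; C2PA uses X.509
# certificate chains, not a shared secret like this.
SIGNING_KEY = b"publisher-signing-key"

def make_manifest(asset: bytes, history: list) -> dict:
    """Create a signed claim recording the asset hash and its edit history."""
    claim = {
        "asset_sha256": hashlib.sha256(asset).hexdigest(),
        "edit_history": history,
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": signature}

def verify_manifest(asset: bytes, manifest: dict) -> bool:
    """Check that neither the claim nor the asset bytes were altered."""
    payload = json.dumps(manifest["claim"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, manifest["signature"])
            and manifest["claim"]["asset_sha256"]
                == hashlib.sha256(asset).hexdigest())

image = b"...raw image bytes..."
manifest = make_manifest(image, ["captured", "cropped", "color-corrected"])
print(verify_manifest(image, manifest))                 # True
print(verify_manifest(image + b"tampering", manifest))  # False
```

The design point is the same one the standard relies on: a consumer can verify where an asset came from and what was done to it, but only if publishers and tools opt in to attaching the metadata in the first place.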

The panelists floated watermarking as another helpful measure, albeit not a panacea.

A number of organizations are exploring watermarking techniques for generative media, including DeepMind, which recently proposed a standard, SynthID, to mark AI-generated images in a way that’s imperceptible to the human eye but easily spotted by a specialized detector. French startup Imatag, launched in 2020, offers a watermarking tool that it claims isn’t affected by resizing, cropping, editing or compressing images, similar to SynthID. Yet another firm, Steg.AI, employs an AI model to apply watermarks that survive resizing and other edits.
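For intuition about how an invisible-but-detectable watermark works, here is a toy least-significant-bit sketch in Python. Everything in it (the key pattern, names, threshold) is assumed for illustration, and this naive approach would not survive the cropping and compression that SynthID, Imatag, and Steg.AI claim robustness against; those systems use far more resilient, often learned, embeddings.

```python
import numpy as np

# Hypothetical 8-bit key pattern shared by the embedder and the detector.
PATTERN = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)

def embed(image: np.ndarray) -> np.ndarray:
    """Overwrite each pixel's least significant bit with the tiled key."""
    flat = image.flatten()
    bits = np.resize(PATTERN, flat.size)
    return ((flat & 0xFE) | bits).reshape(image.shape)

def detect(image: np.ndarray, threshold: float = 0.99) -> bool:
    """Report a watermark if nearly all LSBs match the key pattern."""
    flat = image.flatten()
    bits = np.resize(PATTERN, flat.size)
    return np.mean((flat & 1) == bits) >= threshold

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
marked = embed(img)

# Each pixel changes by at most 1 out of 255, imperceptible to the eye.
print(int(np.max(np.abs(marked.astype(int) - img.astype(int)))))  # 1
print(detect(marked), detect(img))  # True False
```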

Indeed, pointing to some of the watermarking efforts and technologies on the market today, Brandt expressed optimism that “economic incentives” will encourage the companies building generative AI tools to be more thoughtful about how they deploy them, and about the ways in which they design them to prevent misuse.

“With generative AI companies, their content needs to be trustworthy; otherwise, people won’t use it,” she said. “If it continues to hallucinate, if it continues to propagate misinformation, if it continues to not cite sources, that’s going to be less reliable than whatever generative AI company is making efforts to make sure that their content is reliable.”

I’m not so sure, particularly as highly capable, safeguard-free open source generative AI models become widely available. As with all things, I suppose, time will tell.
