Meet the UC Berkeley professor tracking election deepfakes

by WeeklyAINews

Not in recent history has a technology come along with more potential to harm society than deepfakes.

The manipulative, insidious AI-generated content is already being weaponized in politics and will likely be pervasive in the upcoming U.S. presidential election, as well as in races for the Senate and the House of Representatives.

As regulators grapple with how to control the technology, highly realistic deepfakes are being used to smear candidates, sway public opinion and manipulate voter turnout. Some candidates, meanwhile, have turned to generative AI to bolster their campaigns, in attempts that have backfired.

University of California, Berkeley School of Information professor Hany Farid has had enough of all this. He has launched a project dedicated to tracking deepfakes throughout the 2024 presidential campaign.

Source: LinkedIn.

“My hope is that by casting a light on this content, we raise awareness among the media and public, and we signal to those creating this content that we are watching, and we will find you,” Farid told VentureBeat.

From Biden in fatigues to DeSantis lamenting challenging Trump

In its most recent entry (Jan. 30), Farid’s site provides three images of President Joe Biden in fatigues sitting at what looks to be a military command center.

Source: https://farid.berkeley.edu/deepfakes2024election/

However, the post points out, “There are tell-tale signs of malformed objects on the desk, and our geometric analysis of the ceiling tiles reveals a physically inconsistent vanishing point.”

The “malformed objects” include randomly positioned computer mice and a jumble of indistinguishable gear at the center of the desk.
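What that “geometric analysis of the ceiling tiles” involves is a standard photo-forensics check: edges that are parallel in the real scene, like the seams between ceiling tiles, must converge to a single vanishing point in the image. The Python sketch below shows the core of such a test, assuming the tile-edge segments have already been marked up by hand or pulled from an edge detector; the function names are illustrative, not Farid’s actual tooling.

```python
import numpy as np

def vanishing_point(segments):
    """Estimate the common vanishing point of 2D line segments,
    each given as ((x1, y1), (x2, y2)).

    Each segment's line is the cross product of its endpoints in
    homogeneous coordinates; the point closest to all the lines is
    the smallest right-singular vector of the stacked line matrix.
    """
    lines = []
    for p, q in segments:
        line = np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])
        lines.append(line / np.linalg.norm(line[:2]))  # scale so distances are in pixels
    _, _, vt = np.linalg.svd(np.vstack(lines))
    vp = vt[-1]
    return vp[:2] / vp[2]  # assumes a finite vanishing point (vp[2] != 0)

def residuals(segments, vp):
    """Perpendicular distance, in pixels, from the estimated vanishing
    point to each segment's line; a large residual flags an edge that
    does not converge with the rest."""
    vp_h = np.array([vp[0], vp[1], 1.0])
    out = []
    for p, q in segments:
        line = np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])
        out.append(abs(line @ vp_h) / np.linalg.norm(line[:2]))
    return out
```

If the residuals are large, or if different groups of supposedly parallel edges imply different vanishing points, the scene geometry is physically impossible, which is the inconsistency the post describes.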

The site also references the now-infamous deepfake robocalls impersonating Biden ahead of the New Hampshire primary. These urged voters not to participate, saying that “Voting this Tuesday only enables the Republicans in their quest to elect former President Donald Trump again. Your vote makes a difference in November, not this Tuesday.”

It remains unclear who is behind the calls, but Farid points out that the quality of the voice is “quite low” and has an odd-sounding cadence.

Another post calls out the “fairly crude mouth motion” and audio quality in a deepfake of Ron DeSantis saying “I never should have challenged President Trump, the greatest president of my lifetime.”

The site also breaks down a six-photo montage of Trump embracing former Chief Medical Advisor Anthony Fauci. The images contained physical inconsistencies such as a “nonsensical” White House logo and misshapen stars on the American flag. Additionally, the site points out, the shape of Trump’s ear is inconsistent with several real reference photos.

Farid noted that “With respect to elections here in the U.S., it doesn’t take a lot to swing an entire national election: thousands of votes in a select number of counties in a few swing states can move an entire election.”

Anything can be fake; nothing has to be real

Over recent months, many other widely circulated deepfakes have depicted Trump being tackled by a half-dozen police officers; Ukrainian President Vladimir Zelenskiy calling for his soldiers to lay down their weapons and return to their families; and U.S. Vice President Kamala Harris seemingly rambling and inebriated at an event at Howard University.

The harmful technology has also been used to tamper with elections in Turkey and Bangladesh, with countless others to come, and some candidates, including Rep. Dean Phillips of Minnesota and Miami Mayor Francis Suarez, have used deepfakes to engage with voters.

“I’ve seen over the past few years a rise in the sophistication of deepfakes and their misuse,” said Farid. “This year feels like a tipping point, where billions will vote around the world and the technology to manipulate and distort reality is emerging from its infancy.”

Beyond their impact on voters, deepfakes can be used as shields when people are recorded breaking the law or saying or doing something inappropriate.

“They can deny reality by claiming it’s fake,” he said, noting that this so-called “Liar’s Dividend” has already been used by Trump and Elon Musk.

“When we enter a world where anything can be fake,” Farid said, “nothing has to be real.”

Stop, think, check your biases

Research has shown that humans can detect deepfake videos only a little more than half the time, and phony audio 73% of the time.

Deepfakes are becoming ever more dangerous because the images, audio and video created by AI are increasingly realistic, Farid noted. Doctored materials also spread quickly across social media and can go viral in minutes.

“A year ago we saw primarily image-based deepfakes that were fairly obviously fake,” said Farid. “Today we are seeing more audio/video deepfakes that are more sophisticated and believable.”

Because the technology is evolving so quickly, it is difficult to call out “specific artifacts” that will continue to be useful over time in spotting deepfakes, Farid noted.

“My best advice is to stop getting news from social media; this is not what it was designed for,” he said. “If you must spend time on social media, please slow down, think before you share/like, check your biases and confirmation bias, and understand that when you share false information, you are part of the problem.”

Telltale deepfake signs to look out for

Others offer more concrete and specific tools for spotting deepfakes.

The Northwestern University project Detect Fakes, for one, offers a test where users can gauge their savviness at spotting phonies.

The MIT Media Lab, meanwhile, offers several tips, including:

  • Paying attention to faces, as high-end manipulations are “almost always facial transformations.”
  • Looking for cheeks and foreheads that are “too smooth or too wrinkly,” and checking whether the “agedness of the skin” is similar to that of the hair and eyes, as deepfakes can be “incongruent on some dimensions.”
  • Noting eyes, eyebrows and shadows that appear where they shouldn’t be, since deepfakes can’t always represent natural physics.
  • Checking whether glasses have too much glare, none at all, or glare that doesn’t change when the person moves.
  • Paying attention to facial hair (or the lack thereof) and whether it looks real. While deepfakes may add or remove mustaches, sideburns or beards, these transformations aren’t always fully natural.
  • Looking at the way the person is blinking (too much or not at all) and the way their lips move, as some deepfakes are based on lip-syncing; a simple blink-rate check is sketched below.
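As a concrete instance of the blinking tip above, a measure commonly used in the research literature is the eye aspect ratio (EAR) of Soukupová and Čech, computed from six landmarks around each eye. The sketch below assumes a face-landmark detector (such as dlib or MediaPipe) has already produced per-frame eye points; the 0.21 threshold and minimum frame count are illustrative defaults, not calibrated values.

```python
import numpy as np

def eye_aspect_ratio(eye):
    """Eye aspect ratio (EAR) from six landmarks ordered
    [left corner, upper-left, upper-right, right corner,
     lower-right, lower-left], as in Soukupova & Cech (2016).
    EAR sits around 0.25-0.35 for an open eye and drops toward 0
    during a blink."""
    eye = np.asarray(eye, dtype=float)
    v1 = np.linalg.norm(eye[1] - eye[5])  # vertical eyelid distances
    v2 = np.linalg.norm(eye[2] - eye[4])
    h = np.linalg.norm(eye[0] - eye[3])   # horizontal eye width
    return (v1 + v2) / (2.0 * h)

def blink_count(ear_series, threshold=0.21, min_frames=2):
    """Count blinks in a per-frame EAR series: a blink is a run of
    at least `min_frames` consecutive frames below `threshold`."""
    blinks, run = 0, 0
    for ear in ear_series:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    if run >= min_frames:  # count a blink still in progress at clip end
        blinks += 1
    return blinks
```

Over a clip of normal speech, a subject who registers no blinks at all, or implausibly many, warrants a closer look.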

Think you’ve spotted a deepfake related to the U.S. elections? Contact Farid.
