
FTC hosts challenge to stop harms of voice cloning AI

by WeeklyAINews



Voice cloning — the practice of mimicking someone’s voice so well it can pass for the real thing — has had a banner year, with a number of AI startups and methods emerging to enable it, and a song going viral featuring voice clones of popular music artists Drake and The Weeknd.

But for the Federal Trade Commission (FTC), the U.S. federal government agency responsible for investigating and stopping consumer harm and promoting fair market competition, voice cloning poses a major risk of consumer fraud. Imagine someone impersonating your mother’s voice and asking you to quickly wire her $5,000, for example. Or even someone stealing and using your voice to access your bank accounts through a customer support hotline.

The FTC is looking to move quickly (at least, for a government agency) to try to address such scenarios. According to a tentative agenda posted by the agency ahead of its upcoming meeting this Thursday, November 16, the FTC will “announce an exploratory Voice Cloning Challenge to encourage the development of multidisciplinary solutions—from products to procedures—aimed at protecting consumers from artificial intelligence-enabled voice cloning harms, such as fraud and the broader misuse of biometric data and creative content.”

In other words: the FTC wants technologists and members of the public to come up with ways to stop voice clones from tricking people.

The tech is advancing fast and is worth big money

In one demonstration of voice cloning’s propaganda potential, a filmmaker shocked many by producing a realistic-looking deepfake video depicting First Lady Jill Biden criticizing U.S. policy toward Palestine. While intended as satire to bring attention to humanitarian concerns, it showed how AI could craft a seemingly plausible fake narrative using a synthesized clone of the First Lady’s voice.


The producer was able to craft the deepfake in just one week using UK-based ElevenLabs, one of the top voice cloning startups at the forefront of this emerging sector, founded by former employees of the controversial military and corporate intelligence AI startup Palantir. ElevenLabs has drawn growing investor interest, reportedly in talks to raise $1 billion in a third funding round this year, according to sources who spoke to Business Insider.

This fast-tracked growth signals voice cloning’s rising commercial prospects, and, as with AI more broadly, open source solutions are also available.

However, faster development also means more opportunities for harmful misuse may arise before safeguards can catch up. Regulators aim to get ahead of these issues through proactive efforts like the FTC’s new challenge program.

Voluntary standards are not enough

At the core of the concerns is voice cloning’s ability to generate seemingly authentic speech from just a few minutes of sample audio. This raises the possibility of creating and spreading fake audio and video meant to deliberately deceive or manipulate listeners. Experts warn of risks of fraud, deepfakes used to publicly embarrass or falsely implicate targets, and synthetic propaganda affecting political processes.

Mitigation has so far relied on voluntary practices by companies and advocacy for standards. But self-regulation may not be enough. Challenges like the FTC’s offer a coordinated, cross-disciplinary avenue to systematically address vulnerabilities. Through competitively awarded grants, the challenge seeks stakeholder collaboration to develop technical, legal and policy solutions supporting accountability and consumer protection.


Ideas could range from improving deepfake detection methods to establishing provenance and disclosure standards for synthetic media. The resulting mitigations would guide continued safe innovation rather than stifle progress. With Washington and private partners working in tandem, comprehensive solutions balancing rights and responsibilities can emerge.
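To make the provenance-and-disclosure idea more concrete, here is a minimal illustrative sketch, not anything the FTC or any standards body has proposed, of how a voice cloning tool might attach a signed disclosure manifest to a synthetic audio clip so that downstream platforms can verify it was machine-generated. The manifest fields, the shared key, and the tool name are all hypothetical, and a real scheme would use proper key management rather than a hard-coded secret.

```python
import hashlib
import hmac
import json

# Hypothetical shared signing key for the sketch only; a real provenance
# scheme would use per-tool asymmetric keys, not a hard-coded secret.
SHARED_KEY = b"demo-signing-key"


def make_disclosure_manifest(audio_bytes: bytes, tool_name: str) -> dict:
    """Build a signed manifest declaring the audio clip as AI-generated."""
    manifest = {
        "content_sha256": hashlib.sha256(audio_bytes).hexdigest(),
        "generator": tool_name,
        "disclosure": "synthetic-voice",
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return manifest


def verify_disclosure(audio_bytes: bytes, manifest: dict) -> bool:
    """Check that the manifest matches the audio and was not tampered with."""
    claimed = dict(manifest)
    signature = claimed.pop("signature", "")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(signature, expected)
        and claimed.get("content_sha256") == hashlib.sha256(audio_bytes).hexdigest()
    )


if __name__ == "__main__":
    clip = b"fake raw audio bytes for demonstration"
    m = make_disclosure_manifest(clip, tool_name="example-voice-cloner")
    print("verified:", verify_disclosure(clip, m))          # True
    print("tampered:", verify_disclosure(clip + b"!", m))   # False
```

The point of the sketch is simply that a disclosure label bound to the audio’s hash can be checked automatically, which is the kind of interoperable mechanism a challenge entry on provenance standards might explore.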

FTC moves to address gen AI harms head on

In comments filed with the US Copyright Office, the FTC raised cautions about the potential risks of generative AI being used improperly or deceiving consumers.

By expressing wariness over AI systems being trained on “pirated content without consent,” the filing aligned with debates around whether voice cloning tools adequately obtain permission when using individuals’ speech samples. The Voice Cloning Challenge could support the development of best practices for responsibly collecting and handling personal data.

The FTC also warned of consumer deception risks if AI impersonates people. Through the challenge, the FTC aims to foster the creation of ways to accurately attribute synthetic speech and avoid misleading deepfakes.

By launching the challenge, the FTC appears to be seeking to proactively guide voice cloning and other generative technologies toward solutions that can mitigate the consumer and competition concerns raised in its copyright filing.

Source link
