
OpenAI announces bug bounty program to address AI security risks

by WeeklyAINews



OpenAI, a leading artificial intelligence (AI) research lab, today announced the launch of a bug bounty program to help address growing cybersecurity risks posed by powerful language models like its own ChatGPT.

The program, run in partnership with the crowdsourced cybersecurity company Bugcrowd, invites independent researchers to report vulnerabilities in OpenAI’s systems in exchange for financial rewards ranging from $200 to $20,000 depending on severity. OpenAI said the program is part of its “commitment to developing safe and advanced AI.”

Concerns have mounted in recent months over vulnerabilities in AI systems that can generate synthetic text, images and other media. Researchers found a 135% increase in AI-enabled social engineering attacks from January to February, coinciding with the adoption of ChatGPT, according to AI cybersecurity firm Darktrace.

While OpenAI’s announcement was welcomed by some experts, others said a bug bounty program is unlikely to fully address the wide range of cybersecurity risks posed by increasingly sophisticated AI technologies.

The program’s scope is limited to vulnerabilities that could directly affect OpenAI’s systems and partners. It does not appear to address broader concerns over malicious use of such technologies, such as impersonation, synthetic media or automated hacking tools. OpenAI did not immediately respond to a request for comment.

A bug bounty program with limited scope

The bug bounty program comes amid a spate of security concerns, with GPT-4 jailbreaks emerging that enable users to develop instructions on how to hack computers, and with researchers discovering workarounds that let “non-technical” users create malware and phishing emails.


It also comes after a security researcher known as Rez0 allegedly used an exploit to hack ChatGPT’s API and uncover more than 80 secret plugins.

Given these controversies, launching a bug bounty platform gives OpenAI an opportunity to address vulnerabilities in its product ecosystem, while positioning itself as an organization acting in good faith to address the security risks introduced by generative AI.

Unfortunately, OpenAI’s bug bounty program is very limited in the scope of threats it addresses. For instance, the program’s official page notes: “Issues related to the content of model prompts and responses are strictly out of scope, and will not be rewarded unless they have an additional directly verifiable security impact on an in-scope service.”

Examples of safety issues considered out of scope include jailbreaks and safety bypasses, getting the model to “say bad things,” getting the model to write malicious code, or getting the model to tell you how to do bad things.

In this sense, OpenAI’s bug bounty program may be good for helping the organization improve its own security posture, but it does little to address the security risks that generative AI and GPT-4 pose to society at large.



