Why generative AI is a double-edged sword for the cybersecurity sector

Much has been made of the potential for generative AI and large language models (LLMs) to upend the security industry. On the one hand, the positive impact is hard to ignore. These new tools may be able to help write and scan code, supplement understaffed teams, analyze threats in real time, and perform a wide range of other functions to help make security teams more accurate, efficient and productive. In time, these tools may also be able to take over the mundane and repetitive tasks that today's security analysts dread, freeing them up for the more engaging and impactful work that demands human attention and decision-making.

On the other hand, generative AI and LLMs are still in their relative infancy, which means organizations are still grappling with how to use them responsibly. On top of that, security professionals aren't the only ones who recognize the potential of generative AI. What's good for security professionals is often good for attackers as well, and today's adversaries are exploring ways to use generative AI for their own nefarious purposes. What happens when something we think is helping us begins hurting us? Will we eventually reach a tipping point where the technology's potential as a threat eclipses its potential as a resource?

Understanding the capabilities of generative AI, and how to use it responsibly, will be critical as the technology grows both more advanced and more commonplace.

Using generative AI and LLMs

It's no overstatement to say that generative AI models like ChatGPT may fundamentally change the way we approach programming and coding. True, they aren't capable of creating code entirely from scratch (at least not yet). But if you have an idea for an application or program, there's a good chance gen AI can help you execute it. It's helpful to think of such code as a first draft. It may not be perfect, but it's a useful starting point, and it's a lot easier (not to mention faster) to edit existing code than to generate it from scratch. Handing these base-level tasks off to a capable AI frees engineers and developers to take on tasks more befitting of their experience and expertise.
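To make that workflow concrete, here is a minimal sketch of requesting a first draft from a model, assuming the OpenAI Python SDK and an API key in the environment. The model name, prompt and log-parsing task are illustrative placeholders, not recommendations; the point is that a human reviews and edits the output rather than shipping it as-is.

```python
# Minimal sketch of LLM-assisted "first draft" coding. Assumes the OpenAI
# Python SDK (pip install openai) and OPENAI_API_KEY set in the environment;
# the model name and prompt below are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any chat-capable model would do
    messages=[
        {"role": "system", "content": "You are a senior Python engineer."},
        {
            "role": "user",
            "content": "Write a Python function that parses one Apache "
                       "access-log line into a dict. Include type hints "
                       "and a docstring.",
        },
    ],
)

draft = response.choices[0].message.content
print(draft)  # a first draft: review, test and edit before using it anywhere
```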

That being said, gen AI and LLMs create output based on existing content, whether that comes from the open internet or the specific datasets on which they've been trained. That means they're good at iterating on what came before, which can be a boon for attackers. For example, in the same way that AI can create iterations of content using the same set of words, it can create malicious code that's similar to something that already exists but different enough to evade detection. With this technology, bad actors will generate unique payloads or attacks designed to evade security defenses that are built around known attack signatures.
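To see why exact-match signatures fare so poorly here, consider the sketch below, which uses plain Python and harmless placeholder strings standing in for payloads. A trivially modified variant hashes to a completely different value, so a blocklist of known signatures never fires on it.

```python
# Illustration of why exact-match signatures miss trivial variants.
# The "payloads" here are harmless placeholder strings, not working code.
import hashlib

def signature(payload: bytes) -> str:
    """Return the SHA-256 hex digest used as an exact-match signature."""
    return hashlib.sha256(payload).hexdigest()

known_bad = b"<?php eval($_POST['cmd']); ?>"   # known, signatured sample
variant = b"<?php @eval($_POST[ 'cmd' ]); ?>"  # near-identical variant

blocklist = {signature(known_bad)}             # defender's signature set

for sample in (known_bad, variant):
    hit = signature(sample) in blocklist
    print(sample.decode(), "->", "DETECTED" if hit else "missed")

# The original matches; the near-identical variant slips through, which is
# exactly the gap that AI-generated iterations can exploit at scale.
```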

One way attackers are already doing this is by using AI to develop webshell variants, malicious code used to maintain persistence on compromised servers. Attackers can feed an existing webshell into a generative AI tool and ask it to create iterations of the malicious code. These variants can then be used, often in conjunction with a remote code execution (RCE) vulnerability, on a compromised server to evade detection.

LLMs and AI give way to more zero-day vulnerabilities and sophisticated exploits

Well-financed attackers are already good at reading and scanning source code to identify exploits, but this process is time-intensive and requires a high level of skill. LLMs and generative AI tools can help such attackers, and even less-skilled ones, discover and carry out sophisticated exploits by analyzing the source code of commonly used open-source projects or by reverse engineering commercial off-the-shelf software.

In most cases, attackers have tools or plugins written to automate this process. They're also more likely to use open-source LLMs, as these don't have the same safety mechanisms in place to prevent this type of malicious behavior and are typically free to use. The result will be an explosion in the number of zero-day hacks and other dangerous exploits, similar to the MOVEit and Log4Shell vulnerabilities that enabled attackers to exfiltrate data from vulnerable organizations.

Unfortunately, the average organization already has tens or even hundreds of thousands of unresolved vulnerabilities lurking in its code bases. As programmers introduce AI-generated code without scanning it for vulnerabilities, we'll see this number rise due to poor coding practices. Naturally, nation-state attackers and other advanced groups will be ready to take advantage, and generative AI tools will make it easier for them to do so.

Cautiously moving forward

There are no easy solutions to this problem, but there are steps organizations can take to ensure they're using these new tools in a safe and responsible way. One way to do that is to do exactly what attackers are doing: By using AI tools to scan for potential vulnerabilities in their code bases, organizations can identify potentially exploitable aspects of their code and remediate them before attackers can strike. This is particularly important for organizations looking to use gen AI tools and LLMs to assist in code generation. If an AI pulls in open-source code from an existing repository, it's critical to verify that it isn't bringing known security vulnerabilities with it.
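As one sketch of what that verification could look like, the snippet below checks a dependency that AI-generated code might pull in against the public OSV.dev vulnerability database. It assumes the requests library is installed; the package name and version queried are placeholders.

```python
# Sketch: check a dependency against the OSV.dev vulnerability database
# before accepting AI-generated code that relies on it. Assumes `requests`
# is installed; the package name and version below are placeholders.
import requests

def known_vulnerabilities(name: str, version: str, ecosystem: str = "PyPI") -> list:
    """Query OSV.dev for vulnerabilities affecting one package version."""
    resp = requests.post(
        "https://api.osv.dev/v1/query",
        json={"package": {"name": name, "ecosystem": ecosystem},
              "version": version},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("vulns", [])  # empty list means none recorded

vulns = known_vulnerabilities("requests", "2.19.0")  # placeholder example
for v in vulns:
    print(v["id"], "-", v.get("summary", "no summary"))
if not vulns:
    print("No known vulnerabilities recorded in OSV for this version.")
```

The same check fits naturally into a CI pipeline, so no AI-assisted change lands without its dependencies being screened first.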

The concerns today's security professionals have about the use and proliferation of generative AI and LLMs are very real, a fact underscored by a group of tech leaders recently urging an "AI pause" over perceived societal risk. And while these tools have the potential to make engineers and developers significantly more productive, it's important that today's organizations approach their use in a carefully considered manner, implementing the necessary safeguards before letting AI off its metaphorical leash.

Peter Klimek is the director of technology within the Office of the CTO at Imperva.
