Today, autonomous cybersecurity vendor SentinelOne announced the launch of a new threat hunting platform that combines neural networks with a natural language interface based on large language models (LLMs), including GPT-4.

The SentinelOne threat hunting platform ingests, aggregates and correlates data from endpoint, cloud service and network logs, and acts as an automated assistant that security analysts can use to ask threat-hunting questions and trigger automated response actions.
“We’re not only allowing you to ask questions, we’re also allowing you, through a complete natural language interface, [to] invoke actions and automate and orchestrate response in a complete, intuitive way,” said Tomer Weingarten, CEO of SentinelOne, in an interview with VentureBeat.
For instance, a user can ask the system in natural language to find potential successful phishing attempts involving PowerShell, or to find all potential Log4j exploit attempts; receive a written summary of this information; and, if necessary, trigger an automated response.
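The article does not describe SentinelOne's internals, but the flow it outlines — a natural-language prompt mapped to a structured hunting query over correlated logs, with matches feeding an automated response — can be sketched roughly as below. All names and the rule-based "parser" standing in for the LLM step are illustrative assumptions, not SentinelOne's actual API.

```python
# Hypothetical sketch of a natural-language threat-hunting pipeline:
# prompt -> structured query -> correlate against logs -> response actions.
from dataclasses import dataclass


@dataclass
class HuntQuery:
    event_type: str  # e.g. "process" or "network"
    keyword: str     # substring to match in event details


def parse_prompt(prompt: str) -> HuntQuery:
    """Toy stand-in for the LLM step: map a natural-language
    question to a structured hunting query."""
    text = prompt.lower()
    if "powershell" in text:
        return HuntQuery(event_type="process", keyword="powershell")
    if "log4j" in text:
        return HuntQuery(event_type="network", keyword="jndi:")
    raise ValueError("unrecognized hunting prompt")


def run_hunt(query: HuntQuery, events: list[dict]) -> list[dict]:
    """Filter ingested log events against the structured query."""
    return [
        e for e in events
        if e["type"] == query.event_type and query.keyword in e["detail"].lower()
    ]


def respond(hits: list[dict]) -> list[str]:
    """Emit one automated response action per matching event."""
    return [f"quarantine endpoint {h['host']}" for h in hits]


# Example: two endpoint events, one suspicious PowerShell execution.
events = [
    {"type": "process", "host": "ws-01", "detail": "powershell -enc ..."},
    {"type": "process", "host": "ws-02", "detail": "notepad.exe"},
]
hits = run_hunt(parse_prompt("find phishing attempts involving PowerShell"), events)
actions = respond(hits)
```

In a real system the parser would be an LLM call and the response step would invoke endpoint-management APIs; the point of the sketch is only the query-then-act shape the article describes.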
“With this approach, we believe that you unlock so much productivity that, in essence, every security analyst is now 10 times the security analyst,” Weingarten said.
SentinelOne’s place in the generative AI security race
SentinelOne’s announcement, made at the RSA Conference 2023 in San Francisco, came just weeks after Microsoft released a GPT-4-powered AI security assistant called Security Copilot, and less than two weeks after threat intelligence provider Recorded Future announced the launch of its own GPT-driven security solution, which can create written threat reports on demand.

While the generative AI security race is just beginning, with the broader market estimated to grow from $11.3 billion in 2023 to $51.8 billion by 2028, Weingarten argues that the SentinelOne solution’s ability to automate remediation actions differentiates it from competitors like Security Copilot, which primarily summarizes breach activity.
“Let’s say you know someone sent a malicious phishing email, and it arrived in the user inbox and was detected as something malicious. Automatically, by virtue of understanding the anomaly in that audit process execution on the endpoint, from there the system can immediately remediate everything,” said Weingarten.
In this case, the platform could remove files from affected endpoints and block the sender immediately, in real time, with minimal intervention from a human analyst.