
OpenAI announces ‘Preparedness Framework’ to track and mitigate AI risks

by WeeklyAINews



OpenAI, the artificial intelligence lab behind ChatGPT, today announced its “Preparedness Framework,” a set of processes and tools to monitor and manage the potential risks of increasingly powerful AI models.

The announcement comes amid a turbulent period for the lab, which recently faced criticism for its handling of the firing and rehiring of its chief executive, Sam Altman. The controversy raised questions about the lab’s governance and accountability, especially as it develops some of the most advanced and influential AI systems in the world.

The Preparedness Framework, according to an OpenAI blog post, is an attempt to address at least some of those concerns and demonstrate the lab’s commitment to responsible and ethical AI development. The framework outlines how OpenAI will “track, evaluate, forecast and protect against catastrophic risks posed by increasingly powerful models,” such as those that could be used for cyberattacks, mass persuasion, or autonomous weapons.

A data-driven approach to AI safety

One of the key components of the framework is the use of risk “scorecards” for AI models, which measure and track various indicators of potential harm, such as the model’s capabilities, vulnerabilities, and impacts. The scorecards are updated regularly and trigger reviews and interventions when certain risk thresholds are reached.

Credit: OpenAI
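To make the threshold-triggered idea concrete, here is a minimal, hypothetical Python sketch of how a scorecard-style check might work. The category names, risk levels, and threshold are assumptions for illustration only, not OpenAI’s actual schema or tooling.

```python
# Hypothetical illustration of a threshold-triggered risk scorecard.
# Category names and levels are assumed for illustration, not OpenAI's real schema.
from enum import IntEnum

class RiskLevel(IntEnum):
    LOW = 0
    MEDIUM = 1
    HIGH = 2
    CRITICAL = 3

# Example scorecard for one hypothetical model evaluation run.
scorecard = {
    "cybersecurity": RiskLevel.MEDIUM,
    "persuasion": RiskLevel.HIGH,
    "model_autonomy": RiskLevel.LOW,
}

def categories_requiring_review(card, threshold=RiskLevel.HIGH):
    """Return the tracked categories whose score meets or exceeds the threshold."""
    return [name for name, level in card.items() if level >= threshold]

flagged = categories_requiring_review(scorecard)
if flagged:
    print("Risk threshold reached; review triggered for: " + ", ".join(flagged))
else:
    print("All tracked categories below threshold.")
```

The point of the sketch is simply that evaluations and interventions are tied to measured scores crossing defined thresholds, rather than to fixed, predefined safety levels.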

The framework also emphasizes the importance of rigorous, data-driven evaluations and forecasts of AI capabilities and risks, moving away from the hypothetical and speculative scenarios that often dominate the public discourse. OpenAI says it is investing in the design and execution of such assessments, as well as in the development of mitigation strategies and safeguards.


The framework is not a static document but a dynamic and evolving one, according to OpenAI. The lab says it will continually refine and update the framework based on new data, feedback, and research, and will share its findings and best practices with the broader AI community.

A contrast with Anthropic’s policy

OpenAI’s announcement comes in the wake of several major AI safety releases from its chief rival, Anthropic, another leading AI lab founded by former OpenAI researchers. Anthropic, which is known for its secretive and selective approach, recently published its Responsible Scaling Policy, a framework that defines specific AI Safety Levels and corresponding protocols for developing and deploying AI models.

The two frameworks differ significantly in structure and methodology. Anthropic’s policy is more formal and prescriptive, directly tying safety measures to model capabilities and pausing development if safety cannot be demonstrated. OpenAI’s framework is more flexible and adaptive, setting general risk thresholds that trigger reviews rather than predefined levels.

Experts say both frameworks have their merits and drawbacks, but Anthropic’s approach may have an edge in incentivizing and enforcing safety standards. From our analysis, Anthropic’s policy appears to bake safety into the development process, whereas OpenAI’s framework remains looser and more discretionary, leaving more room for human judgment and error.

Some observers also see OpenAI playing catch-up on safety protocols after facing backlash for its rapid and aggressive deployment of models like GPT-4, its most advanced large language model, which can generate realistic and persuasive text. Anthropic’s policy may hold an advantage partly because it was developed proactively rather than reactively.


Whatever their differences, both frameworks represent a significant step forward for the field of AI safety, which has often been overshadowed by the pursuit of AI capabilities. As AI models become more powerful and ubiquitous, collaboration and coordination on safety techniques between leading labs and stakeholders is now essential to ensure the beneficial and ethical use of AI for humanity.

