
OpenAI, DeepMind and Anthropic to give UK early access to foundational models for AI safety research

by WeeklyAINews

Following the UK government’s announcement last week that it plans to host a “global” AI safety summit this fall, prime minister Rishi Sunak has kicked off London Tech Week with another tidbit of news: telling conference goers that OpenAI, Google DeepMind and Anthropic have committed to provide “early or priority access” to their AI models to support research into evaluation and safety.

Sunak has undergone an accelerated conversion on the topic of AI safety in recent weeks, following a number of interventions by AI giants warning about the existential and even extinction-level risks the technology could pose if it is not properly regulated.

“We’re going to do cutting-edge [AI] safety research here in the UK,” pledged Sunak today. “With £100 million for our expert taskforce, we’re dedicating more funding to AI safety than any other government.”

This AI safety taskforce will be focused on AI foundation models, the government also said.

“We’re working with the frontier labs, Google DeepMind, OpenAI and Anthropic,” added Sunak. “And I’m pleased to announce they’ve committed to give early or priority access to models for research and safety purposes to help build better evaluations and help us better understand the opportunities and risks of these systems.”

The PM also reiterated his earlier announcement of the forthcoming AI safety summit, seeking to liken the effort to the COP climate conferences, which aim to achieve global buy-in on tackling climate change.


“Just as we unite through COP to tackle climate change, so the UK will host the first ever summit on global AI safety later this year,” he said, adding: “I want to make the UK not just the intellectual home but the geographical home of global AI safety regulation.”

Evangelizing AI safety is a marked change of gears for Sunak’s government.

As recently as March it was in full AI cheerleader mode, saying in a white paper that it favored “a pro-innovation approach to AI regulation”. The approach set out in that paper downplayed safety concerns by eschewing the need for bespoke laws for artificial intelligence (or a dedicated AI watchdog) in favor of setting out a handful of “flexible principles”. The government also suggested oversight of AI apps should be carried out by existing regulatory bodies, such as the antitrust watchdog and the data protection authority.

Fast forward a few months and Sunak is now talking in terms of wanting the UK to host a global AI safety watchdog. Or, at the least, the government wants the UK to own the AI safety conversation by dominating research into how to evaluate the outputs of learning algorithms.

Rapid developments in generative AI, combined with public pronouncements from a range of tech giants and AI industry figures warning the tech could spiral out of control, appear to have prompted a swift strategy rethink in Downing Street.

It’s also notable that AI giants have been bending Sunak’s ear in person in recent weeks, with meetings taking place between the PM and the CEOs of OpenAI, DeepMind and Anthropic shortly before the government’s mood music on AI changed.


If this trio of AI giants sticks to its commitments to provide the UK with advanced access to their models, there’s a chance for the country to lead on research into developing effective evaluation and audit techniques, including before any legislative oversight regimes mandating algorithmic transparency have spun up elsewhere. (The European Union’s draft AI Act isn’t expected to be in legal force until 2026, for example, although the EU’s Digital Services Act is already in force and already requires some algorithmic transparency from tech giants.)

At the same time, there’s a risk the UK is making itself vulnerable to industry capture of its nascent AI safety efforts. And if AI giants get to dominate the conversation around AI safety research by providing selective access to their systems, they could be well positioned to shape any future UK AI rules that would apply to their businesses.

Close involvement of AI tech giants in publicly funded research into the safety of their commercial technologies, ahead of any legally binding AI safety framework being applied to them, suggests they will at the very least have scope to frame how AI safety is looked at and which elements, topics and themes get prioritized (and which, therefore, get downplayed). They may also influence what kind of research happens at all, since it could be predicated on how much access they provide.

Meanwhile, AI ethicists have long warned that headline-grabbing fears about the risks “superintelligent” AIs might pose to humans are drowning out discussion of the real-world harms existing AI technologies are already generating, including bias and discrimination, privacy abuse, copyright infringement and environmental resource exploitation.


So while the UK government may view the AI giants’ buy-in as a PR coup, if the AI summit and wider AI safety efforts are to produce robust and credible outcomes it must ensure the involvement of independent researchers, civil society organizations and groups disproportionately at risk of harm from automation, not just trumpet its plan for a partnership between “good AI firms” and domestic academics, given that academic research is already often dependent on funding from tech giants.
