
Former White House advisors and tech researchers co-sign new statement against AI harms

by WeeklyAINews



Two former White House AI policy advisors, along with over 150 AI academics, researchers and policy practitioners, have signed a new “Statement on AI Harms and Policy” published by ACM FAccT (the Conference on Fairness, Accountability and Transparency), which is currently holding its annual conference in Chicago.

Alondra Nelson, former deputy assistant to President Joe Biden and acting director of the White House Office of Science and Technology Policy, and Suresh Venkatasubramanian, a former White House advisor on the “Blueprint for an AI Bill of Rights,” both signed the statement. It comes just a few weeks after a widely shared Statement on AI Risk, signed by top AI researchers and CEOs, cited concern about human “extinction” from AI, and three months after an open letter called for a six-month “pause” on large-scale AI development beyond OpenAI’s GPT-4.

Unlike the earlier petitions, the ACM FAccT statement focuses on the current harmful impacts of AI systems and calls for policy grounded in existing research and tools. It says: “We, the undersigned scholars and practitioners of the Conference on Fairness, Accountability, and Transparency, welcome the growing calls to develop and deploy AI in a manner that protects public interests and fundamental rights. From the dangers of inaccurate or biased algorithms that deny life-saving healthcare to language models exacerbating manipulation and misinformation, our research has long anticipated harmful impacts of AI systems of all levels of complexity and capability. This body of work also shows how to design, audit, or resist AI systems to protect democracy, social justice, and human rights. This moment calls for sound policy based on the years of research that has focused on this topic. We already have tools to help build a safer technological future, and we call on policymakers to fully deploy them.”


After sharing the statement on Twitter, Nelson cited the opinion of the AI Policy and Governance Working Group at the Institute for Advanced Study, where she currently serves as a professor after stepping down from the Biden Administration in February.

“The AI Policy and Governance Working Group, representing different sectors, disciplines, perspectives, and approaches, agree that it is essential and possible to address the multitude of concerns raised by the expanding use of AI systems and tools and their growing power,” she wrote on Twitter. “We also agree that both present-day harms and risks that have gone unattended and uncertain hazards and risks on the horizon warrant urgent attention and the public’s expectation of safety.”

Other AI researchers who signed the statement include Timnit Gebru, founder of the Distributed AI Research Institute (DAIR), as well as researchers from Google DeepMind, Microsoft, Stanford University, and UC Berkeley.



