Zach Stein-Perlman, 16 February 2023
A high-quality American public survey on AI, Artificial Intelligence Use Prompts Concerns, was released yesterday by Monmouth. Some notable results:
- 9% say AI would do more good than harm vs. 41% more harm than good (similar to responses to a similar survey in 2015)
- 55% say AI could eventually pose an existential threat (up from 44% in 2015)
- 55% favor “having a federal agency regulate the use of artificial intelligence similar to how the FDA regulates the approval of drugs and medical devices”
- 60% say they’ve “heard about A.I. products – such as ChatGPT – that can have conversations with you and write entire essays based on just a few prompts from humans”
Worries about safety and support for regulation echo other surveys:
- 71% of Americans agree that there should be national regulations on AI (Morning Consult 2017)
- The public is concerned about some AI policy issues, especially privacy, surveillance, and cyberattacks (GovAI 2019)
- The public is concerned about various negative consequences of AI, including loss of privacy, misuse, and loss of jobs (Stevens / Morning Consult 2021)
Surveys match the anecdotal evidence from talking to Uber drivers: Americans are worried about AI safety and would support regulation of AI. Perhaps there is an opportunity to improve the public’s beliefs, attitudes, and memes and frames for making sense of AI; perhaps better public opinion would enable better policy responses to AI, or better actions from AI labs or researchers.
Public desire for safety and regulation is far from sufficient for a good government response to AI. But it does mean that the main challenge for improving the government response is helping relevant actors believe what is true, developing good affordances for them, and helping them take good actions, not making people care enough about AI to act at all.