Editor’s Note: The following is a brief letter from Ray Kurzweil, a director of engineering at Google and cofounder and member of the board at Singularity Group, Singularity Hub’s parent company, in response to the Future of Life Institute’s recent letter, “Pause Giant AI Experiments: An Open Letter.”
The FLI letter addresses the risks of accelerating progress in AI and the ensuing race to commercialize the technology, and it calls for a pause in the development of algorithms more powerful than OpenAI’s GPT-4, the large language model behind the company’s ChatGPT Plus and Microsoft’s Bing chatbot. The FLI letter has thousands of signatories, including deep learning pioneer Yoshua Bengio, University of California, Berkeley professor of computer science Stuart Russell, Stability AI CEO Emad Mostaque, Elon Musk, and many others, and it has stirred vigorous debate in the AI community.
Regarding the open letter to “pause” research on AI “more powerful than GPT-4,” this criterion is too vague to be practical. And the proposal faces a serious coordination problem: those who agree to a pause may fall far behind corporations or nations that do not. There are enormous benefits to advancing AI in critical fields such as medicine and health, education, the pursuit of renewable energy sources to replace fossil fuels, and scores of other fields. I did not sign, because I believe we can address the signers’ safety concerns in a more tailored way that doesn’t compromise these vital lines of research.
I participated in the Asilomar AI Principles Conference in 2017 and was actively involved in the creation of guidelines for developing artificial intelligence in an ethical manner. So I know that safety is a critical issue. But more nuance is needed if we wish to unlock AI’s profound benefits to health and productivity while avoiding the real perils.
— Ray Kurzweil
Inventor, best-selling author, and futurist