
Anthropic’s Dario Amodei on AI’s limits: ‘I’m not sure there are any’

by WeeklyAINews

As Anthropic takes on OpenAI and other challengers in the growing artificial intelligence industry, there’s also an existential question looming: Can large language models and the systems they enable keep growing in size and capability? CEO and co-founder Dario Amodei has a simple answer: yes.

Speaking onstage at TechCrunch Disrupt, Amodei explained that he doesn’t see any limitations on the horizon for his company’s core technology.

“The last 10 years, there’s been this remarkable increase in the scale that we’ve used to train neural nets, and we keep scaling them up, and they keep working better and better,” he said. “That’s the basis of my feeling that what we’re going to see in the next 2, 3, 4 years… what we see today is going to pale in comparison to that.”

Asked whether he thought we’d see a quadrillion-parameter model next year (rumor has it we’ll see hundred-trillion-parameter models this year), he said that’s outside the expected scaling laws, which he described as roughly the square of compute. But certainly, he said, we can expect models to keep growing.
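To make that "square of compute" remark concrete, here is a minimal sketch of one common reading of compute-optimal scaling, under which parameter count grows roughly as the square root of training compute (equivalently, compute grows as the square of parameters). The reference figures below are illustrative assumptions, not numbers from the interview.

```python
def params_for_compute(compute: float, reference_compute: float = 1.0,
                       reference_params: float = 1e11) -> float:
    """Parameter count supported by a compute budget, assuming params ∝ sqrt(compute)."""
    return reference_params * (compute / reference_compute) ** 0.5

# A 100x larger compute budget only buys ~10x more parameters under this rule:
print(params_for_compute(100.0))   # ~1e12, i.e. ~10x the 1e11 reference

# Going from a 1e11-parameter reference to 1e15 (a quadrillion) parameters
# would therefore take roughly (1e15 / 1e11)**2 = 1e8 times more compute.
print((1e15 / 1e11) ** 2)          # 100000000.0
```

On that reading, a quadrillion-parameter model next year would demand a jump in compute far beyond what current trends supply, which is why Amodei treats it as outside the expected scaling curve.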

Some researchers have suggested, however, that no matter how large these transformer-based models get, they may still find certain tasks difficult, if not impossible. Yejin Choi pointed out that some LLMs have a lot of trouble multiplying two three-digit numbers, which suggests a certain incapacity deep in the heart of these otherwise highly capable models.
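Choi’s example is easy to turn into a concrete probe. The sketch below scores a model on random three-digit products; the `ask_model` function is a hypothetical stand-in for whatever LLM API you use, and the test is an illustration of the kind of check she describes, not her actual benchmark.

```python
import random

def ask_model(prompt: str) -> str:
    """Hypothetical stand-in for an LLM call; wire up a real API here."""
    raise NotImplementedError

def three_digit_multiplication_accuracy(trials: int = 100) -> float:
    """Fraction of exactly correct answers on random three-digit multiplications."""
    correct = 0
    for _ in range(trials):
        a, b = random.randint(100, 999), random.randint(100, 999)
        reply = ask_model(f"What is {a} * {b}? Answer with only the number.")
        try:
            correct += int(reply.strip().replace(",", "")) == a * b
        except ValueError:
            pass  # a non-numeric reply counts as incorrect
    return correct / trials
```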

“Do you think that we should be trying to identify these kinds of fundamental limits?” asked the moderator (myself).


“Yeah, so I’m not sure there are any,” Amodei responded.

“And also, to the extent that there are, I’m not sure that there’s a good way to measure them,” he continued. “I think these long years of scaling experience have taught me to be very skeptical, but also skeptical of the claim that an LLM can’t do something. Or that if it wasn’t prompted or fine-tuned or trained in a slightly different way, that it wouldn’t be able to do something. That’s not a claim that LLMs can do anything now, or that they’ll be able to do absolutely anything at some point in the future. I’m just skeptical of these hard lists; I’m skeptical of the skeptics.”

At the very least, Amodei suggested, we won’t see diminishing returns for the next three or four years; beyond that point, you’d need more than a quadrillion-parameter AI to predict.

