
Good old-fashioned AI remains viable in spite of the rise of LLMs


Remember a year ago, all the way back to last November before we knew about ChatGPT, when machine learning was all about building models to solve a single task like loan approvals or fraud protection? That approach seemed to go out the window with the rise of generalized LLMs, but the fact is generalized models aren't well suited to every problem, and task-based models are still alive and well in the enterprise.

These task-based fashions have, up till the rise of LLMs, been the premise for many AI within the enterprise, and so they aren’t going away. It’s what Amazon CTO Werner Vogels known as “good old style AI” in his keynote this week, and in his view, is the form of AI that’s nonetheless fixing lots of real-world issues.

Atul Deo, general manager of Amazon Bedrock, the product launched earlier this year as a way to plug into a variety of large language models via APIs, also believes that task models aren't going to simply disappear. Instead, they have become another AI tool in the arsenal.
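To make the API angle concrete, here is a minimal sketch of what calling a hosted model through Bedrock's runtime API can look like, assuming boto3 credentials are already configured. The model ID and request body follow Anthropic's Claude text format and are illustrative; other Bedrock-hosted models use different request schemas.

```python
import json
import boto3

# Bedrock exposes a variety of hosted foundation models behind one runtime API.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Request bodies are model-specific; this one follows the Claude text format.
body = json.dumps({
    "prompt": "\n\nHuman: Summarize why a loan application might be flagged for review.\n\nAssistant:",
    "max_tokens_to_sample": 200,
})

response = bedrock.invoke_model(
    modelId="anthropic.claude-v2",  # example model ID; swap in any Bedrock-hosted model
    contentType="application/json",
    accept="application/json",
    body=body,
)

print(json.loads(response["body"].read())["completion"])
```

The same client and call shape work across providers on Bedrock, which is the "plug into a variety of large language models" point Deo is making.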

"Before the advent of large language models, we were mostly in a task-specific world. And the idea there was you would train a model from scratch for a particular task," Deo told TechCrunch. He says the main difference between the task model and the LLM is that one is trained for that specific task, while the other can handle problems outside the boundaries of the model.
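For contrast, a task-specific model in the pre-LLM sense is trained on labeled data for exactly one job. Below is a minimal sketch using scikit-learn with synthetic, made-up features standing in for transaction data; it is an illustration of the idea, not any particular production system.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for labeled transactions: amount, hour of day, merchant risk score.
rng = np.random.default_rng(0)
X = rng.random((1_000, 3))
y = (X[:, 0] + X[:, 2] > 1.2).astype(int)  # toy "fraud" label

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Trained for this one task and useless outside it,
# but small, fast, and cheap to run.
clf = LogisticRegression().fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```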

Jon Turow, a partner at investment firm Madrona, who formerly spent almost a decade at AWS, says the industry has been talking about emerging capabilities in large language models like reasoning and out-of-domain robustness. "These allow you to be able to stretch beyond a narrow definition of what the model was initially expected to do," he said. But, he added, it's still very much up for debate how far these capabilities can go.


Like Deo, Turow says task models aren't simply going to suddenly go away. "There is clearly still a role for task-specific models because they can be smaller, they can be faster, they can be cheaper and they can in some cases even be more performant because they're designed for a specific task," he said.

But the lure of an all-purpose model is hard to ignore. "When you look at it at an aggregate level in a company, when there are hundreds of machine learning models being trained separately, that doesn't make any sense," Deo said. "Whereas if you went with a more capable large language model, you get the reusability benefit right away, while allowing you to use a single model to tackle a bunch of different use cases."

For Amazon, SageMaker, the company's machine learning operations platform, remains a key product, one that is aimed at data scientists rather than developers, as Bedrock is. The company reports tens of thousands of customers building millions of models on it. It would be foolhardy to give that up, and frankly, just because LLMs are the flavor of the moment doesn't mean the technology that came before won't remain relevant for some time to come.
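As a rough illustration of the kind of SageMaker workflow those customers have invested in, here is a sketch using the SageMaker Python SDK to train and deploy a task-specific model as a managed job. The entry-point script, IAM role ARN, and S3 path are hypothetical placeholders.

```python
from sagemaker.sklearn.estimator import SKLearn

# Launch a managed training job for a task-specific model.
# The script name, role ARN, and S3 path are placeholders for illustration.
estimator = SKLearn(
    entry_point="train_fraud_model.py",
    role="arn:aws:iam::123456789012:role/SageMakerExecutionRole",
    instance_type="ml.m5.large",
    instance_count=1,
    framework_version="1.2-1",
)

estimator.fit({"train": "s3://example-bucket/fraud/train.csv"})

# Deploy the trained model behind a managed endpoint for real-time inference.
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.large")
```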

Enterprise software in particular doesn't work that way. Nobody is simply tossing aside a significant investment because a new thing came along, even one as powerful as the current crop of large language models. It's worth noting that Amazon did announce upgrades to SageMaker this week, aimed squarely at managing large language models.


Prior to these more capable large language models, the task model was really the only option, and that's how companies approached it: by building a team of data scientists to develop those models. So what is the role of the data scientist in the age of large language models, where tools are aimed at developers? Turow thinks they still have a key job to do, even in companies concentrating on LLMs.

"They're going to think critically about data, and that's actually a role that's growing, not shrinking," he said. Regardless of the model, Turow believes data scientists will help people understand the relationship between AI and data inside large companies.

"I think every single one of us needs to really think critically about what AI is and isn't capable of and what data does and doesn't mean," he said. And that's true regardless of whether you're building a more generalized large language model or a task model.

That's why these two approaches will continue to work side by side for some time to come, because sometimes bigger is better, and sometimes it's not.

Read more about AWS re:Invent 2023 on TechCrunch
