
Facebook parent Meta unveils LLaMA 2 open-source AI model for commercial use 

by WeeklyAINews



In a blockbuster announcement today, timed to coincide with the Microsoft Inspire conference, Meta introduced its new AI model, LLaMA 2 (Large Language Model Meta AI). Not only is this new large language model (LLM) now available, it is also open source and freely available for commercial use, unlike the first LLaMA, which was licensed only for research purposes.

The news, coupled with Microsoft’s outspoken support for LLaMA 2, means the fast-moving world of generative AI has just shifted yet again. The many enterprises rushing to embrace AI, albeit cautiously, now have another option to choose from, and this one is completely free, unlike leader and rival OpenAI’s ChatGPT Plus or challengers like Cohere.

Rumors surrounding the new release of LLaMA have been swirling in the industry for at least a month, as U.S. senators have questioned Meta about the availability of the AI model.

The first iteration of LLaMA was available to academics and researchers under a research license. The model weights underlying LLaMA were, however, leaked, causing some controversy and leading to the government inquiry. With LLaMA 2, Meta is brushing aside the prior controversy and moving forward with a more powerful model that will be more broadly usable than its predecessor and could potentially shake up the entire LLM landscape.

Microsoft hedges its AI bets

The LLaMA 2 model is being made available on Microsoft Azure. That’s noteworthy in that Azure is also the primary home for OpenAI and its GPT-3/GPT-4 family of LLMs. Microsoft is an investor in OpenAI and was also an early investor in Facebook, the company now known as Meta.


Meta founder and CEO Mark Zuckerberg is particularly keen on LLaMA being open source. In a statement, Zuckerberg noted that Meta has a long history with open source and has made many notable contributions, particularly in AI with the PyTorch machine learning framework.

“Open source drives innovation because it enables many more developers to build with new technology,” Zuckerberg said. “It also improves safety and security because when software is open, more people can scrutinize it to identify and fix potential issues. I believe it would unlock more progress if the ecosystem were more open, which is why we’re open sourcing Llama 2.”

In a Twitter post, Yann LeCun, VP and chief AI scientist at Meta, also heralded the open-source release.

“This is huge: [LLaMA 2] is open source, with a license that authorizes commercial use!” LeCun wrote. “This is going to change the landscape of the LLM market. [LLaMA 2] is available on Microsoft Azure and will be available on AWS, Hugging Face and other providers.”

What’s inside LLaMA?

LLaMA is a transformer-based autoregressive language model. The first iteration of LLaMA was publicly detailed by Meta in February as a 65 billion-parameter model capable of a wide array of common generative AI tasks.

In contrast, LLaMA 2 comes in several model sizes, with 7 billion, 13 billion and 70 billion parameters. Meta claims the pretrained models were trained on 2 trillion tokens of data, a dataset 40% larger than the one used for LLaMA 1. The context length has also been doubled relative to LLaMA 1, to 4,096 tokens.


Not only has LLaMA 2 been trained on more data, with more parameters; the model also performs better than its predecessor, according to benchmarks provided by Meta.
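For developers who want to experiment once the weights appear on Hugging Face, as LeCun indicated they will, a minimal sketch of loading one of the smaller checkpoints with the transformers library might look like the following. The repository name meta-llama/Llama-2-7b-hf and the need for gated access approval are assumptions, not details from Meta's announcement.

```python
# Minimal sketch: loading a LLaMA 2 checkpoint via Hugging Face transformers.
# Assumes the transformers and torch packages are installed and that access
# to the (assumed) gated "meta-llama/Llama-2-7b-hf" repository has been granted.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-hf"  # assumed repository name for the 7B model

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision so the 7B model fits on one GPU
    device_map="auto",          # let accelerate place layers on available devices
)

prompt = "Open-source language models are useful because"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```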

Safety measures touted

LLaMA 2 isn’t all about power; it’s also about safety. LLaMA 2 is first pretrained with publicly available data. The model then goes through a series of supervised fine-tuning (SFT) stages. As an additional layer, LLaMA 2 then benefits from a cycle of reinforcement learning from human feedback (RLHF) to help provide an extra degree of safety and responsibility.
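A highly simplified sketch of that three-stage flow appears below; the function names are hypothetical placeholders meant only to illustrate the order of the stages, not Meta's actual training code.

```python
# Conceptual sketch of the pipeline described above: pretraining, then
# supervised fine-tuning (SFT), then reinforcement learning from human
# feedback (RLHF). All names are hypothetical placeholders for illustration.

def pretrain(corpus):
    """Stage 1: next-token prediction on publicly available text."""
    return {"stage": "pretrained", "documents": len(corpus)}

def supervised_finetune(model, instruction_pairs):
    """Stage 2: SFT on curated prompt/response pairs."""
    return {**model, "stage": "sft", "examples": len(instruction_pairs)}

def rlhf_align(model, preference_data):
    """Stage 3: RLHF, using human preference comparisons to reward safer,
    more helpful responses."""
    return {**model, "stage": "rlhf", "preferences": len(preference_data)}

base = pretrain(corpus=["publicly available web text"])
chat = supervised_finetune(base, instruction_pairs=[("prompt", "response")])
aligned = rlhf_align(chat, preference_data=[("better answer", "worse answer")])
print(aligned["stage"])  # -> "rlhf"
```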

Meta’s research paper on LLaMA 2 provides exhaustive details on the comprehensive steps taken to help provide safety and limit potential bias as well.

“It is important to understand what is in the pretraining data both to increase transparency and to shed light on root causes of potential downstream issues, such as potential biases,” the paper states. “This can inform what, if any, downstream mitigations to consider, and help guide appropriate model use.”



