Meta quietly releases Llama 2 Long AI model

by WeeklyAINews

Meta Platforms showed off a bevy of new AI features for its consumer-facing services Facebook, Instagram and WhatsApp at its annual Meta Connect conference at its headquarters in Menlo Park, California, this week.

But the biggest news from Mark Zuckerberg's company may have actually come in the form of a computer science paper published without fanfare by Meta researchers on the open-access, non-peer-reviewed site arXiv.org.

The paper introduces Llama 2 Long, a new AI model based on Meta's open-source Llama 2 released in the summer, but one that has undergone "continual pretraining from Llama 2 with longer training sequences and on a dataset where long texts are upsampled," according to the researcher-authors of the paper.

As a result, Meta's newly elongated AI model outperforms some of the leading competition in generating responses to long (higher token count) user prompts, including OpenAI's GPT-3.5 Turbo with its 16,000-token context window, as well as Claude 2 with its 100,000-token context window.

How Llama 2 Long came to be

Meta researchers took the original Llama 2, available in its different training parameter sizes — the values of data and information the algorithm can change on its own as it learns, which in the case of Llama 2 come in 7 billion, 13 billion, 34 billion, and 70 billion variants — and included more long-text data sources than the original Llama 2 training dataset contained. Another 400 billion tokens' worth, to be exact. The "upsampling" of long texts can be illustrated with a short sketch, shown below.
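
The paper's exact sampling recipe isn't described in this passage, but the basic idea of upsampling long documents in a pretraining mix looks roughly like the following minimal sketch (the 8,192-token threshold and 3x weight are illustrative assumptions, not figures from the paper):

    import random

    def upsample_long_texts(docs, tokenizer, long_threshold=8192, weight=3):
        """Build a pretraining mix in which documents longer than
        `long_threshold` tokens appear `weight` times as often."""
        mix = []
        for doc in docs:
            n_tokens = len(tokenizer.encode(doc))
            copies = weight if n_tokens >= long_threshold else 1
            mix.extend([doc] * copies)
        random.shuffle(mix)  # avoid clumping the duplicated long documents
        return mix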


Then, the researchers kept the original Llama 2's architecture the same, and only made a "necessary modification to the positional encoding that is crucial for the model to attend longer."

That modification was to the Rotary Positional Embedding (RoPE) encoding, a method of programming the transformer model underlying LLMs such as Llama 2 (and Llama 2 Long), which essentially maps their token embeddings (the numbers used to represent words, concepts, and ideas) onto a 3D graph that shows their positions relative to other tokens, even when rotated. This allows a model to produce accurate and helpful responses with less information (and thus, less computing storage) than other approaches.
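
As a rough sketch of the mechanism — a minimal NumPy implementation of the standard RoPE formulation, not Meta's code — each pair of embedding channels is rotated by an angle proportional to the token's position, so the dot product between two tokens ends up depending on their relative distance rather than their absolute positions:

    import numpy as np

    def rope(x, base=10_000.0):
        """Apply rotary positional embeddings to x, shaped (seq_len, dim).

        Channel pair i is rotated by angle position * base**(-2i/dim)."""
        seq_len, dim = x.shape
        half = dim // 2
        freqs = base ** (-np.arange(half) * 2.0 / dim)  # one frequency per pair
        angles = np.outer(np.arange(seq_len), freqs)    # (seq_len, half)
        cos, sin = np.cos(angles), np.sin(angles)
        x1, x2 = x[:, :half], x[:, half:]
        # 2D rotation applied to each (x1, x2) channel pair
        return np.concatenate([x1 * cos - x2 * sin,
                               x1 * sin + x2 * cos], axis=-1)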

The Meta researchers "decreased the rotation angle" of its RoPE encoding from Llama 2 to Llama 2 Long, which enabled them to ensure that more "distant tokens," those occurring more rarely or with fewer relationships to other pieces of information, were still included in the model's knowledge base.
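
In concrete terms, decreasing the rotation angle corresponds to raising RoPE's base frequency hyperparameter — the paper reports increasing it from Llama 2's 10,000 to 500,000 — so angles accumulate more slowly with position and distant token pairs are rotated apart less. A toy calculation (the dimension and channel index are chosen purely for illustration):

    # Angle for channel pair i at a given position: position * base**(-2i/dim).
    # A larger base shrinks the angle, keeping far-apart tokens better aligned.
    def rotation_angle(position, pair, dim, base):
        return position * base ** (-2.0 * pair / dim)

    for base in (10_000.0, 500_000.0):
        print(f"base={base:>9.0f}  angle={rotation_angle(4096, 32, 128, base):.2f} rad")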

Using reinforcement learning from human feedback (RLHF), a common AI model training method in which the AI is rewarded for correct answers under human oversight, along with synthetic data generated by Llama 2 chat itself, the researchers were able to improve its performance on common LLM tasks, including coding, math, language understanding, common-sense reasoning, and answering a human user's prompted questions.
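
One common way to combine a reward signal with model-generated data — a hypothetical sketch of the general idea, not the paper's exact procedure — is best-of-n sampling: draw several candidate answers from the chat model and keep the one a reward model scores highest for further fine-tuning:

    def best_of_n(prompt, generate, reward, n=4):
        """Sample n candidate answers and keep the highest-scoring one.

        `generate` and `reward` are stand-ins for a chat model and a
        human-feedback-trained reward model, respectively."""
        candidates = [generate(prompt) for _ in range(n)]
        return max(candidates, key=lambda answer: reward(prompt, answer))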

Little wonder the open-source AI community is excited

With such impressive results relative to regular Llama 2, Anthropic's Claude 2, and OpenAI's GPT-3.5 Turbo, it's little wonder the open-source AI community on Reddit, Twitter, and Hacker News has been expressing its admiration and excitement about Llama 2 Long since the paper's release earlier this week — it's a big validation of Meta's "open source" approach toward generative AI, and indicates that open source can compete with the closed-source, "pay to play" models offered by well-funded startups.
