
MosaicML launches MPT-7B-8K, a 7B-parameter open-source LLM

by WeeklyAINews



MosaicML has unveiled MPT-7B-8K, an open-source large language model (LLM) with 7 billion parameters and an 8k context length.

According to the company, the model was trained on the MosaicML platform, with pretraining starting from the MPT-7B checkpoint. Training was carried out on Nvidia H100s, with an additional three days on 256 H100s incorporating 500 billion tokens of data.

Previously, MosaicML made waves in the AI community with the release of MPT-30B, an open-source, commercially licensed decoder-based LLM. The company claimed it to be more powerful than GPT-3-175B while using only 17% of GPT-3's parameter count, i.e. 30 billion parameters.

MPT-30B surpassed GPT-3's performance across numerous tasks and proved more efficient to train than models of comparable size. For instance, LLaMA-30B required roughly 1.44 times the FLOPs budget of MPT-30B, while Falcon-40B's FLOPs budget was 1.27 times larger than MPT-30B's.

MosaicML claims that the new MPT-7B-8K model shows exceptional proficiency in document summarization and question-answering tasks compared to all previously released models.

The company said the model is specifically optimized for accelerated training and inference, delivering faster results. Moreover, it allows fine-tuning on domain-specific data within the MosaicML platform.

The company has also announced the availability of commercial-use licensing for MPT-7B-8k, highlighting its training on an extensive dataset of 1.5 trillion tokens, surpassing similar models like XGen, LLaMA, Pythia, OpenLLaMA and StableLM.


MosaicML claims that through the use of FlashAttention and FasterTransformer, the model excels at fast training and inference, while also benefiting from the open-source training code available through the llm-foundry repository.
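For readers who want to experiment with the model, here is a minimal sketch of loading MPT-7B-8k through Hugging Face Transformers. It assumes the checkpoint is published as mosaicml/mpt-7b-8k, that MosaicML's custom model code is pulled in via trust_remote_code=True, and that the configuration exposes max_seq_len and attn_config fields as earlier MPT releases did; treat it as an illustration rather than official usage instructions.

```python
# Hedged sketch: load MPT-7B-8k via Hugging Face Transformers.
# Assumptions (not confirmed by the article): the checkpoint name
# "mosaicml/mpt-7b-8k", the max_seq_len / attn_config config fields,
# and the bundled tokenizer files.
import torch
import transformers

name = "mosaicml/mpt-7b-8k"

# MPT uses custom modeling code, so trust_remote_code is required.
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.max_seq_len = 8192  # expose the full 8k context window
# config.attn_config["attn_impl"] = "triton"  # optional faster attention kernel on GPU

model = transformers.AutoModelForCausalLM.from_pretrained(
    name,
    config=config,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
)

# The MPT repos bundle a GPT-NeoX-style tokenizer; fall back to
# "EleutherAI/gpt-neox-20b" if the checkpoint does not ship one.
tokenizer = transformers.AutoTokenizer.from_pretrained(name)
```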

The company has released the model in three versions:

  • MPT-7B-8k-Base: This decoder-style transformer is pretrained starting from MPT-7B and further optimized with an extended sequence length of 8k. It undergoes additional training on 500 billion tokens, resulting in a substantial corpus of 1.5 trillion tokens of text and code.
  • MPT-7B-8k-Instruct: This model is designed for long-form instruction tasks, including summarization and question-answering. It is built by fine-tuning MPT-7B-8k on carefully curated datasets (see the usage sketch after this list).
  • MPT-7B-8k-Chat: This variant functions as a chatbot-like model, focusing on dialogue generation. It is created by fine-tuning MPT-7B-8k on roughly 1.5 billion tokens of chat data.
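As a rough illustration of how the Instruct variant might be prompted for long-document summarization, the sketch below assumes the checkpoint is available as mosaicml/mpt-7b-8k-instruct and follows the "### Instruction / ### Response" prompt format used by earlier MPT instruct models; the article does not specify the prompt format, so verify it against the model card.

```python
# Hedged sketch: summarize a long document with the Instruct variant.
# Assumptions: checkpoint name "mosaicml/mpt-7b-8k-instruct" and a
# dolly-style instruction prompt format (not confirmed by the article).
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="mosaicml/mpt-7b-8k-instruct",
    trust_remote_code=True,  # MPT ships custom modeling code
)

document = "..."  # up to roughly 8k tokens of source text

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n"
    "### Instruction:\nSummarize the following document:\n"
    f"{document}\n"
    "### Response:\n"
)

result = generator(prompt, max_new_tokens=256, do_sample=False)
print(result[0]["generated_text"])
```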

MosaicML asserts that the MPT-7B-8k models show comparable or superior performance to other currently available open-source models with an 8k context length, as measured by the company's in-context learning evaluation harness.

The announcement coincides with Meta's unveiling of the LLaMA 2 model, now available on Microsoft Azure. Unlike LLaMA 1, LLaMA 2 comes in several model sizes, with 7, 13 and 70 billion parameters.

Meta asserts that these pretrained models were trained on 2 trillion tokens, a dataset 40% larger than LLaMA 1's, with a context length twice that of LLaMA 1. According to Meta's benchmarks, LLaMA 2 outperforms its predecessor.


