Dataset Viewer
Auto-converted to Parquet

Columns:
- id: int64 (values 0 to 27)
- text: string (lengths 635 to 1.86k)
- source: sequence (length 3)
Row 0: Introducing MPT-7B, the first entry in our MosaicML Foundation Series. MPT-7B is a transformer trained from scratch on 1T tokens of text and code. It is open source, available for commercial use, and matches the quality of LLaMA-7B. MPT-7B was trained on the MosaicML platform in 9.5 days with zero human intervention at...
source (identical for every row): [ "https://www.mosaicml.com/blog/mpt-7b", "https://stability.ai/blog/stability-ai-launches-the-first-of-its-stablelm-suite-of-language-models", "https://lmsys.org/blog/2023-03-30-vicuna/" ]
Row 1: Our MPT model series is: Licensed for commercial use (unlike LLaMA). Trained on a large amount of data (1T tokens like LLaMA vs. 300B for Pythia, 300B for OpenLLaMA, and 800B for StableLM). Prepared to handle extremely long inputs thanks to ALiBi (we trained on up to 65k inputs and can handle up to 84k vs. 2k-4k for...
Row 2: MPT-7B-StoryWriter-65k+ MPT-7B-StoryWriter-65k+ is a model designed to read and write stories with super long context lengths. It was built by finetuning MPT-7B with a context length of 65k tokens on a filtered fiction subset of the books3 dataset. At inference time, thanks to ALiBi, MPT-7B-StoryWriter-65k+ can extrap...
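The extrapolation claim above rests on how ALiBi works: instead of learned positional embeddings, a fixed linear penalty proportional to query-key distance is added to each attention logit, with one slope per head. A minimal sketch of the standard ALiBi bias (illustrative, not MosaicML's actual implementation; assumes the head count is a power of two):

```python
def alibi_slopes(n_heads: int) -> list[float]:
    # Geometric sequence 2^(-8/n), 2^(-16/n), ... from the ALiBi paper
    # (assumes n_heads is a power of two for simplicity).
    ratio = 2 ** (-8.0 / n_heads)
    return [ratio ** (i + 1) for i in range(n_heads)]

def alibi_bias(n_heads: int, seq_len: int) -> list[list[list[float]]]:
    # bias[h][q][k] = -slope_h * (q - k): added to the attention logits so
    # distant keys are penalized linearly. The causal mask is applied
    # separately, so entries with k > q are never used.
    slopes = alibi_slopes(n_heads)
    return [[[-s * (q - k) for k in range(seq_len)]
             for q in range(seq_len)] for s in slopes]
```

Because the bias depends only on relative distance, the same slopes apply at any sequence length, which is why a model finetuned at 65k tokens can extrapolate beyond that at inference time.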
Row 3: We hope businesses and the open-source community will build on this effort: alongside the model checkpoints, we have open-sourced the entire codebase for pretraining, finetuning, and evaluating MPT via our new MosaicML LLM Foundry! This release is more than just a model checkpoint: it's an entire framework for buildin...
Row 4: MPT-7B (Base Model) MPT-7B matches the quality of LLaMA-7B and outperforms other open-source 7B-20B models on standard academic tasks. To evaluate model quality, we compiled 11 open-source benchmarks commonly used for in-context learning (ICL) and formatted and evaluated them in an industry-standard manner. We also ...
Row 5: To show off this capability and to get you thinking about what you could do with a 65k context window, we are releasing MPT-7B-StoryWriter-65k+. StoryWriter was finetuned from MPT-7B for 2500 steps on 65k-token excerpts of fiction books contained in the books3 corpus. Like pretraining, this finetuning process used a ne...
Row 6: MPT-7B-Instruct LLM pretraining teaches the model to continue generating text based on the input it was provided. But in practice, we expect LLMs to treat the input as instructions to follow. Instruction finetuning is the process of training LLMs to perform instruction-following in this way. By reducing the reliance o...
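The excerpt does not show the prompt format MPT-7B-Instruct actually uses, but instruction finetuning generally means training on (instruction, response) pairs rendered through a fixed template. A hypothetical Alpaca-style template, purely for illustration:

```python
# Hypothetical Alpaca-style template; the excerpt does not specify
# MPT-7B-Instruct's real prompt format.
TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

def format_example(instruction: str, response: str) -> str:
    # Concatenate prompt and target; during finetuning the loss would
    # typically be masked so only the response tokens contribute.
    return TEMPLATE.format(instruction=instruction) + response
```

Training on many such examples teaches the model to treat the text after "### Instruction:" as a request rather than a passage to continue.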
Row 7: MPT-7B-Chat [Figure caption: A multi-turn conversation with the chat model in which it suggests high-level approaches to solving a problem (using AI to protect endangered wildlife) and then proposes an implementation of one of them in Python using Keras.] We have also developed MPT-7B-Chat, a conversational version of MPT-7B. MPT-7B-C...
Row 8: Data We wanted MPT-7B to be a high-quality standalone model and a useful jumping-off point for diverse downstream uses. Accordingly, our pretraining data came from a MosaicML-curated mix of sources, which we summarize in Table 2 and describe in detail in the Appendix. Text was tokenized using the EleutherAI GPT-NeoX-2...
Row 9: Obviates the need to download the whole dataset before starting training. Allows instant resumption of training from any point in the dataset. A paused run can be resumed without fast-forwarding the dataloader from the start. Is fully deterministic. Samples are read in the same order regardless of the number of GPUs,...
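These guarantees can be modeled with a single fixed shuffle plus strided reads per rank: every GPU derives its samples from one global order, so determinism and instant resumption fall out of index arithmetic. A toy sketch of the idea, not MosaicML StreamingDataset's actual implementation:

```python
import random

def global_order(num_samples: int, seed: int) -> list[int]:
    # One fixed shuffle of all sample indices, shared by every worker.
    order = list(range(num_samples))
    random.Random(seed).shuffle(order)
    return order

def samples_for_rank(order, rank, world_size, start_step=0):
    # Rank r reads positions r, r+world, r+2*world, ... so the union over
    # ranks at each step is the same deterministic order regardless of GPU
    # count, and resuming is just changing start_step: no fast-forwarding
    # of the dataloader is needed.
    return order[start_step * world_size + rank::world_size]
```

With this layout, pausing a run only requires remembering the step counter; the seed reconstructs everything else.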
Row 10: As another example, to train a new model from scratch on a custom domain (e.g. on biomedical text or code), simply reserve short-term large blocks of compute with MosaicML's hero cluster offering. Just pick the desired model size and token budget, upload your data to an object store like S3, and launch an MCLI job. You...
Row 11: How did we do this? First, we addressed convergence stability with architecture and optimization improvements. Our MPT models use ALiBi rather than positional embeddings, which we found to improve resilience to loss spikes. We also train our MPT models with the Lion optimizer rather than AdamW, which provides stable up...
Row 12: Finally, for the best hosting experience, deploy your MPT models directly on MosaicML's Inference service. Start with our managed endpoints for models like MPT-7B-Instruct, and/or deploy your own custom model endpoints for optimal cost and data privacy. What's Next? This MPT-7B release is the culmination of two years...
Row 13: The most common character must be alphabetic. ≥ 92% of characters must be alphanumeric. If the document is > 500 words, the most common word cannot constitute > 7.5% of the total word count; if the document is ≤ 500 words, the most common word cannot constitute > 30% of the total word count. The document must be ≥ 2...
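The listed heuristics translate directly into a filter function. A sketch (the final length threshold is truncated in the excerpt, so it is omitted here; counting only non-whitespace characters toward the 92% rule is an assumption):

```python
from collections import Counter

def passes_filters(doc: str) -> bool:
    # Assumption: whitespace is excluded from the character-level checks.
    chars = [c for c in doc if not c.isspace()]
    if not chars:
        return False
    # The most common character must be alphabetic.
    if not Counter(chars).most_common(1)[0][0].isalpha():
        return False
    # At least 92% of characters must be alphanumeric.
    if sum(c.isalnum() for c in chars) / len(chars) < 0.92:
        return False
    # Most common word capped at 7.5% of the word count for documents
    # over 500 words, and at 30% for shorter documents.
    words = doc.lower().split()
    if not words:
        return False
    top_share = Counter(words).most_common(1)[0][1] / len(words)
    cap = 0.075 if len(words) > 500 else 0.30
    return top_share <= cap
```

Filters like these cheaply reject boilerplate-heavy or symbol-heavy documents before the more expensive pretraining pipeline sees them.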
Row 14: C++, Common Lisp, F-Sharp, Fortran, Go, Haskell, Java, OCaml, Perl, Python, Ruby, Rust, Scala, Scheme, Shell, TeX. We chose to have code constitute 10% of the pretraining tokens, as internal experiments showed that we could train on up to 20% code (and 80% natural language) with no negative impact on natural language e...
Row 15: PIQA: 1838 binary multiple-choice questions testing physical intuition, e.g. "Question: How can I easily carry clothes on hangers when I move?", "Answer: Take a couple of empty heavy duty clothes hangers, then hook several hangers of clothes on those hangers and carry them all at once." COPA: 100 sentences of the ...
Row 16: Arc-Challenge: 1172 challenging four-choice multiple-choice questions about science. Arc-Easy: 2376 easy four-choice multiple-choice science questions. HellaSwag: 10042 four-choice multiple-choice questions in which a real-life scenario is presented and the model must choose the most likely conclusion to the scenario. ...
Row 17: "A Stochastic Parrot, flat design, vector art" — Stable Diffusion XL. Today, Stability AI released a new open source language model, Stable LM. The Alpha version of the model is available in 3 billion and 7 billion parameters, with 15 billion to 65 billion parameter models to follow. Developers can freely inspect, use...
Row 18: We are also releasing a set of research models that are instruction fine-tuned. Initially, these fine-tuned models will use a combination of five recent open source datasets for conversational agents: Alpaca, GPT4All, Dolly, ShareGPT, and HH. These fine-tuned models are intended for research use only and are released ...
Row 19: Supportive. We build models to support our users, not replace them. We are focused on efficient, specialized, and practical AI performance, not a quest for god-like intelligence. We develop tools that help everyday people and everyday firms use AI to unlock creativity, boost their productivity, and open up new economi...
Row 20: by: The Vicuna Team, Mar 30, 2023. We introduce Vicuna-13B, an open-source chatbot trained by fine-tuning LLaMA on user-shared conversations collected from ShareGPT. Preliminary evaluation using GPT-4 as a judge shows Vicuna-13B achieves more than 90%* quality of OpenAI ChatGPT and Google Bard while outperforming other...
Row 21: Figure 1. Relative Response Quality Assessed by GPT-4* Online Demo Try the Vicuna-13B demo here! Overview The rapid advancement of large language models (LLMs) has revolutionized chatbot systems, resulting in unprecedented levels of intelligence as seen in OpenAI's ChatGPT. However, despite its impressive performan...
Row 22: Figure 2. Workflow Overview. Figure 2 provides an overview of our work. To begin, we collected around 70K conversations from ShareGPT.com, a website where users can share their ChatGPT conversations. Next, we enhanced the training scripts provided by Alpaca to better handle multi-turn conversations and long sequences. ...
Row 23: Training Vicuna is created by fine-tuning a LLaMA base model using approximately 70K user-shared conversations gathered from ShareGPT.com with public APIs. To ensure data quality, we convert the HTML back to markdown and filter out some inappropriate or low-quality samples. Additionally, we divide lengthy conversation...
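Dividing lengthy conversations into chunks that fit the model's maximum context can be done by greedily packing whole turns. A hypothetical helper (characters stand in for tokens for simplicity; the actual Vicuna preprocessing script is not shown in the excerpt):

```python
def split_conversation(turns: list[str], max_len: int) -> list[list[str]]:
    # Greedily pack whole turns into chunks whose total length stays
    # within max_len, so no turn is cut mid-message. A single turn longer
    # than max_len still gets its own chunk rather than being dropped.
    chunks, current, size = [], [], 0
    for turn in turns:
        if current and size + len(turn) > max_len:
            chunks.append(current)
            current, size = [], 0
        current.append(turn)
        size += len(turn)
    if current:
        chunks.append(current)
    return chunks
```

Splitting on turn boundaries keeps each training sequence a coherent (if partial) dialogue, which matters for multi-turn finetuning.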
Row 24: How To Evaluate a Chatbot? Evaluating AI chatbots is a challenging task, as it requires examining language understanding, reasoning, and context awareness. With AI chatbots becoming more advanced, current open benchmarks may no longer suffice. For instance, the evaluation dataset used in Stanford's Alpaca, self-instru...
Row 25: Figure 3. Response Comparison Assessed by GPT-4. Figure 3 displays the comparison results between all baselines and Vicuna. GPT-4 prefers Vicuna over state-of-the-art open-source models (LLaMA, Alpaca) in more than 90% of the questions, and it achieves competitive performance against proprietary models (ChatGPT, Bard)....
Row 26: Limitations We have noticed that, similar to other large language models, Vicuna has certain limitations. For instance, it is not good at tasks involving reasoning or mathematics, and it may have limitations in accurately identifying itself or ensuring the factual accuracy of its outputs. Additionally, it has not been...
Row 27: Students (alphabetical order): Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang (✉), Lianmin Zheng (✉), Siyuan Zhuang, Yonghao Zhuang. Advisors (alphabetical order): Joseph E. Gonzalez, Ion Stoica, Eric P. Xing. ✉ Correspondence to: Lianmin Zheng (lianminzheng@gmail.com), Hao Zhang (sjtu.haozhang@...

No dataset card yet

Downloads last month: 6