| Column | Type | Range / cardinality |
|---|---|---|
| doi | string | lengths 10–10 |
| chunk-id | int64 | 0–916 |
| chunk | string | lengths 384–2.02k |
| id | string | lengths 12–14 |
| title | string | lengths 8–139 |
| summary | string | lengths 236–1.92k |
| source | string | lengths 31–31 |
| authors | string | 998 classes |
| categories | string | 269 classes |
| comment | string | 577 classes |
| journal_ref | string | 36 classes |
| primary_category | string | 31 classes |
| published | string | 748 classes |
| updated | string | 752 classes |
| references | list | lengths 0–269 |
| metadata | dict | |
| embeddings | sequence | lengths 1.54k–1.54k |
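The schema above pairs each text chunk with a fixed-length embedding vector. A minimal sketch of what one row looks like as a plain Python structure, and how an embedding might be compared against another vector by cosine similarity (field values are illustrative; the vector is truncated to 4 dimensions instead of the ~1.54k the schema reports):

```python
import math

# One row following the schema above. Values are illustrative,
# not taken from the real dataset.
row = {
    "doi": "2407.21783",
    "chunk-id": 0,
    "chunk": "Modern artificial intelligence (AI) systems are powered by foundation models.",
    "id": "2407.21783#0",
    "title": "The Llama 3 Herd of Models",
    "source": "http://arxiv.org/pdf/2407.21783",
    "primary_category": "cs.AI",
    "references": [{"id": "2201.11903"}],
    "embeddings": [0.0198, -0.0344, 0.0298, -0.0357],  # truncated for the sketch
}

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# A vector is maximally similar to itself.
sim = cosine_similarity(row["embeddings"], row["embeddings"])
print(round(sim, 6))  # → 1.0
```

In the real dataset the `embeddings` field holds the full 1.54k-length vector, so the same function applies unchanged.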
**Row 0** — `id: 2407.21783#0`, `chunk-id: 0`
- **doi**: 2407.21783
- **chunk** (excerpt): "The Llama 3 Herd of Models / Llama Team, AI @ Meta¹ / ¹A detailed contributor list can be found in the appendix of this paper. / Modern artificial intelligence (AI) systems are powered by foundation models. This paper presents a new set of foundation models, called Llama 3. It is a herd of language models that natively suppo..."
- **title**: The Llama 3 Herd of Models
- **summary** (excerpt): "Modern artificial intelligence (AI) systems are powered by foundation models. This paper presents a new set of foundation models, called Llama 3. It is a herd of language models that natively support multilinguality, coding, reasoning, and tool usage. Our largest model is a dense Transformer with 405B parameters and a ..."
- **source**: http://arxiv.org/pdf/2407.21783
- **authors** (truncated): Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, Anirudh Goyal, Anthony Hartshorn, Aobo Yang, Archi Mitra, Archie Sravankumar, Artem Korenev, Arthur Hinsvark, Arun Rao, Aston Zhang, Aurelien Rodriguez, Austen Gregerson, Ava Spataru, Baptiste R...
- **categories**: cs.AI, cs.CL, cs.CV
- **comment**: null; **journal_ref**: null
- **primary_category**: cs.AI
- **published**: 20240731; **updated**: 20240815
- **references**: `[{"id": "2201.11903"}]`
- **metadata**: `{"authors": "Abhimanyu Dubey, ..."}` (repeats the authors field, truncated)
- **embeddings** (first values of the 1.54k-length vector): 0.019791821, -0.03444159, 0.029808648, -0.035688918, -0.0043083807, -0.01433157, -0.004995685, 0.05595167, ...
**Row 1** — `id: 2407.21783#1`, `chunk-id: 1`
- **chunk** (excerpt): "resulting models are not yet being broadly released as they are still under development. / Date: July 23, 2024 / Website: https://llama.meta.com/ / 1 Introduction / Foundation models are general models of language, vision, speech, and/or other modalities that are designed to support a large variety of AI tasks. They form the ba..."
- **embeddings** (first values of the 1.54k-length vector): -0.0043875957, -0.030706856, 0.050908733, -0.021098336, -0.014974643, ...
- All remaining fields (doi, title, summary, source, authors, categories, primary_category, published/updated, references, metadata) match row 0.
**Row 2** — `id: 2407.21783#2`, `chunk-id: 2`
- **chunk** (excerpt): "which we will refer to as Llama 3 throughout for brevity. / We believe there are three key levers in the development of high-quality foundation models: data, scale, and managing complexity. We seek to optimize for these three levers in our development process: / • Data. Compared to prior versions of Llama (Touvron et al., 20..."
- **embeddings** (first values of the 1.54k-length vector): -0.007840018, 0.001403109, 0.05988382, -0.053595755, 0.020202095, ...
- All remaining fields match row 0.
**Row 3** — `id: 2407.21783#3`, `chunk-id: 3`
- **chunk** (excerpt, with the flattened model table restored): "arXiv:2407.21783v2 [cs.AI] 15 Aug 2024" followed by:

| Model | Finetuned | Multilingual | Long context | Tool use | Release |
|---|---|---|---|---|---|
| Llama 3 8B | ✗ | ✗¹ | ✗ | ✗ | April 2024 |
| Llama 3 8B Instruct | ✓ | ✗ | ✗ | ✗ | April 2024 |
| Llama 3 70B | ✗ | ✗¹ | ✗ | ✗ | April 2024 |
| Llama 3 70B Instruct | ✓ | ✗ | ✗ | ✗ | April 2024 |
| Llama 3.1 8B | ✗ | ✓ | ✓ | ✗ | July 2024 |
| Llama 3.1 8B Instruct | ✓ | ✓ | ✓ | ✓ | July 2024 |
| Llama 3.1 70B ... | | | | | |

- **embeddings** (first values of the 1.54k-length vector): 0.019919509, -0.027884431, 0.06642717, -0.04286367, -0.007280775, ...
- All remaining fields match row 0.
**Row 4** — `id: 2407.21783#4`, `chunk-id: 4`
- **chunk** (excerpt): "size for our training budget, we also train our smaller models for much longer than is compute-opti..."
- **embeddings** (first values of the 1.54k-length vector): 0.022203146, -0.029076157, 0.05764131, -0.038989644, ...
- All remaining fields match row 0.
**Row 5** — `id: 2407.21783#5`, `chunk-id: 5`
- **chunk** (excerpt): "parameters. We evaluate the performance of Llama 3 on a plethora of benchmark datasets that span a ..."
- **embeddings** (first values of the 1.54k-length vector): 0.026583716, -0.005369328, 0.060263183, -0.054586582, ...
- All remaining fields match row 0.
**Row 6** — `id: 2407.21783#6`, `chunk-id: 6`
- **chunk** (excerpt): "language model and a new version of our Llama Guard model (Inan et al., 2023) for input and output ..."
- **embeddings** (first values of the 1.54k-length vector): 0.034450162, -0.0018265087, 0.057512272, -0.047372226, ...
- All remaining fields match row 0.
**Row 7** — `id: 2407.21783#7`, `chunk-id: 7`
- **chunk** (excerpt, a flattened benchmark-table fragment; the model column headers are not included in this chunk): "General: MMLU (5-shot) 69.4 72.3 61.1 83.6 76.9 70.7 87.3 82.6 85.1 89.1 89.9 / MMLU (0-shot, CoT) 73.0 72.3..."
- **embeddings** (first values of the 1.54k-length vector): 0.017833233, 0.010421157, 0.03100534, -0.0016271535, ...
- All remaining fields match row 0.
**Row 8** — `id: 2407.21783#8`, `chunk-id: 8`
- **chunk** (excerpt, a flattened benchmark-table fragment): "MATH (0-shot, CoT) 51.9 44.3 13.0 68.0 54.1 43.1 73.8 41.1 64.5 76.6 71.1 / Reasoning: ARC Challenge (0-sho..."
- **embeddings** (first values of the 1.54k-length vector): -0.0022544828, -0.009589273, 0.03295251, -0.0022872963, ...
- All remaining fields match row 0.
**Row 9** — `id: 2407.21783#9`, `chunk-id: 9`
- **chunk** (excerpt, a flattened benchmark-table fragment): "NIH/Multi-needle 98.8 – – 97.5 – – 98.1 – 100.0 100.0 90.8 / Multilingual: MGSM (0-shot, CoT) 68..."
- **embeddings** (first values of the 1.54k-length vector): 0.014880117, -0.012521074, 0.047880795, -0.050058372, ...
- All remaining fields match row 0.