Modalities: Text
Formats: parquet
Libraries: Datasets, Dask
Dataset columns (all of type string):

- source
- id
- language
- date
- author
- url
- title
- extra
- quality_signals
- text
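Note that the `extra` and `quality_signals` columns hold JSON-encoded strings rather than nested structures, so they need to be decoded after loading. A minimal sketch using the standard library, with a `quality_signals` value taken verbatim from one of the TheStack preview rows:

```python
import json

# quality_signals is stored as a JSON string, not a nested object;
# this example value comes from a TheStack preview row of the dataset.
quality_signals = (
    '{"max_stars_count": 8, "max_issues_count": 1, "max_forks_count": 3, '
    '"avg_line_length": 13.55, "max_line_length": 45, '
    '"alphanum_fraction": 0.7564575646}'
)

# Decode once per row to get numeric fields usable for filtering or stats.
signals = json.loads(quality_signals)
print(signals["max_stars_count"])  # → 8
print(signals["avg_line_length"])  # → 13.55
```

The same decoding step applies to the `extra` column, which carries per-file repository metadata in the code subsets.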
Example preview row (TheStack; long fields truncated by the viewer):

source:          TheStack
id:              3a7ff00f2df847ac9c37afa81dea24afa8736a61
language:        Assemblycode:Assembly
extra:           {"size": 271, "ext": "asm", "max_stars_repo_path": "libsrc/strings/rindex.asm", "max_stars_repo_name": "grancier/z180", "max_stars_repo_stars_event_min_datetime": "2017-01-18T12:02:17.000Z", "max_stars_repo_stars_event_max_datetime": "2021-06-12T09:40:28.000Z", "max_issues_repo_path": "libsrc/strings/rindex.asm", "max_...
quality_signals: {"max_stars_count": 8, "max_issues_count": 1, "max_forks_count": 3, "avg_line_length": 13.55, "max_line_length": 45, "alphanum_fraction": 0.7564575646}
text:            ; CALLER linkage for function pointers SECTION code_clib PUBLIC rindex PUBLIC _rindex EXTERN strrchr_callee EXTERN ASMDISP_STRRCHR_CALLEE .rindex ._rindex pop hl pop bc pop de push de push bc push hl jp strrchr_callee + ASMDISP_STRRCHR_CALLEE

The remaining preview rows are further TheStack assembly samples with the same structure.

End of preview.

Lucie Training Dataset Card

The Lucie Training Dataset is a curated collection of text data in English, French, German, Spanish and Italian culled from a variety of sources including: web data, video subtitles, academic papers, digital books, newspapers, and magazines, some of which were processed by Optical Character Recognition (OCR). It also contains samples of diverse programming languages.

The Lucie Training Dataset was used to pretrain Lucie-7B, a foundation LLM with strong capabilities in French and English. Code for data preparation can be found in the training repository for Lucie-7B. Due to the licenses of a few subcorpora, the Lucie Training Dataset is released under a CC BY-NC-SA 4.0 license. A subset available for commercial use will be released soon.

We note that one subcorpus used for training could not be released with the Lucie Training Dataset due to copyright conflicts discovered after training had begun. This data came from the Persée platform. The full list of URLs used to create the dataset can be recreated from the file persee_metadata_documents.csv, where the corresponding URL is https://www.persee.fr/doc/{ID} for each ID in the column file_id. The file persee_metadata_collections.csv gives statistics on document, word and character counts for the data grouped by collection. In all, this subcorpus contains a total of 3.25 billion words and 5.75 billion tokens, making up around 0.25% of the raw corpus and 0.37% of the tokens seen during training.


Dataset Description

This dataset is intended to provide extensive and diverse multilingual data for training Large Language Models (LLMs). Here are some of the principal features of the corpus:

  • Data mix:
    • The dataset contains more French than English data -- it is in fact one of the biggest collections of French text data that has been preprocessed for LLM training -- with the aim of minimizing anglo-centric cultural biases.
    • German, Spanish and Italian are also represented in small amounts.
    • Code is included to boost the reasoning capabilities of LLMs.
  • Data filtering and deduplication:
    • The dataset has been cleaned in an effort to remove very low-quality data.
    • Duplicate data samples have been removed to some extent, following best practices.
    • Web data has been filtered to minimize potentially toxic content and personally identifying information.
  • Ethics:
    • Special care has been taken to respect copyright laws and individual privacy. All newspapers, monographs, magazines and legislative documents, as well as most books, are in the public domain (public domain status depends on the author's date of death and the country of publication). Other data are published with permissive licenses (e.g., CC BY or CC BY-SA) or, in very rare cases, CC BY-NC-SA.
    • All web data in the dataset come from sites with robots.txt files that do not forbid crawling.

Sample Metadata

In addition to the text field, which provides the content of the sample, each training sample in the corpus contains the following metadata when available:

  • language: the language of the text sample (note that this information is taken from the original data source and may be incorrect).
    Possible values:
    • the ISO 639-1 code for a given natural language ("en", "fr", "de", "es", or "it"),
    • the name of a programming language prefixed by "code:" ("code:python", "code:c++", …), or
    • a list of ISO 639-1 codes separated by commas for data containing parallel translations ("fr,en", "de,fr", "es,en", "it,en", or one of those pairs in the opposite order if the languages appear in the opposite order in the text).
  • source: an identifier for the source(s) of the text sample (Wikipedia, RedPajama, Gutenberg, …). All sources are described in detail below.
  • id: an identifier that is unique among documents from the same source.
  • url (optional): the URL of the original text sample on the web, if available.
  • title (optional): the title of the original text sample, if available.
  • author (optional): the author of the original text sample, if available.
    Note: The author name is given in plain text, except in the case of Gutenberg books, where it is the JSON serialized object of the author metadata.
  • date (optional): the publication date of the original text sample, if available.
    Note: The text format of the date depends on the source.
  • quality_signals (optional): a list of quality signals for the text sample in JSON format (which could be used for further filtering or sample weighting).
    Note: It can include indicators computed by `fasttext` and `CCNet`, statistics about occurrences of characters, words, special characters, etc.
  • extra (optional): extra information about the text sample, in JSON format. This can include metadata about the source subset, the rights, etc.

The list of metadata available for each source is provided (without the text field) in metadata_examples.json.
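The conventions for the language field can be checked mechanically. A minimal sketch (the function name is ours, not part of the dataset tooling):

```python
# ISO 639-1 codes of the natural languages present in the corpus
NATURAL_LANGUAGES = {"en", "fr", "de", "es", "it"}

def classify_language_field(value):
    """Classify a `language` metadata value per the conventions above:
    a programming language, a parallel-translation pair, or a single
    natural language."""
    if value.startswith("code:"):
        return "programming"
    parts = value.split(",")
    if len(parts) == 2 and all(p in NATURAL_LANGUAGES for p in parts):
        return "parallel"
    if value in NATURAL_LANGUAGES:
        return "natural"
    return "unknown"
```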

Dataset Composition

The following figure shows the distribution of the dataset by language (colors) and category (hatch patterns).

Dataset composition

The following table provides an overview of the dataset composition, broken down by source and language. Sources are grouped by category. The table provides the numbers of documents, words, tokens, and characters for each subset. All numbers in this table are available in the CSV file dataset_composition.csv. Token counts are computed using the tokenizer for Lucie-7B.

Subset Language M docs B words B tokens B chars
TOTAL 2186.562 1356.021 2314.862 8842.200
French (fr) 653.812 583.687 928.618 3619.672 composition details
English (en) 554.289 412.202 611.894 2553.541 composition details
code 125.769 51.306 228.954 630.749 composition details
German (de) 165.915 105.609 206.610 764.779 composition details
Spanish (es) 171.651 123.857 200.825 759.457 composition details
Italian (it) 99.440 62.051 112.031 404.454 composition details
fr-en 410.032 17.016 25.494 107.658 composition details
it-en 1.901 0.100 0.151 0.638
es-en 1.961 0.103 0.143 0.631
de-fr 1.792 0.0908 0.141 0.621

Category: Web

RedPajama French (fr) 640.770 477.758 741.023 2974.596 composition details
German (de) 162.779 103.078 201.371 747.631 composition details
Spanish (es) 169.447 121.751 197.125 746.984 composition details
Italian (it) 97.324 60.194 108.416 393.012 composition details
FineWebEdu English (en) 421.209 327.453 467.837 2018.215 composition details

Category: Newspaper

GallicaPress French (fr) 3.205 67.496 121.606 408.882
AmericanStories English (en) 59.420 8.902 14.313 50.844 composition details

Category: Technical

PeS2o English (en) 38.972 42.296 65.365 268.963
HAL French (fr) 0.349 9.356 16.224 58.308
Theses French (fr) 0.102 7.547 14.060 47.758
Pile (USPTO_Backgrounds) English (en) 5.139 3.492 5.105 22.309
OpenEdition French (fr) 0.939 2.225 3.604 14.459
Pile (PhilPapers) English (en) 0.0308 0.363 0.618 2.304
Pile (NIH_ExPorter) English (en) 0.914 0.288 0.431 1.979

Category: Book

GallicaMonographies French (fr) 0.278 15.106 25.169 90.456
Gutenberg English (en) 0.0563 3.544 5.516 20.579
French (fr) 0.00345 0.227 0.383 1.392
German (de) 0.00188 0.0987 0.193 0.654
Italian (it) 0.000958 0.0657 0.129 0.414
Spanish (es) 0.000735 0.0512 0.0920 0.303

Category: Legislative Texts

Pile (FreeLaw) English (en) 3.415 8.204 14.011 52.580
Eurovoc English (en) 0.272 1.523 2.571 9.468
Italian (it) 0.245 0.731 1.527 4.867
German (de) 0.247 0.678 1.497 4.915
Spanish (es) 0.246 0.757 1.411 4.684
OpenData French (fr) 1.169 0.755 1.209 4.638
QuestionsEcritesParlement French (fr) 0.189 0.108 0.156 0.705
LEGI French (fr) 0.621 0.0878 0.145 0.563
AmendementsParlement French (fr) 0.673 0.0452 0.0738 0.274

Category: Legislative Transcripts

Europarl German (de) 0.0102 0.0451 0.0734 0.327
Spanish (es) 0.0103 0.0524 0.0733 0.325
French (fr) 0.0103 0.0528 0.0717 0.339
English (en) 0.0111 0.0563 0.0690 0.339
DiscoursPublics French (fr) 0.110 0.163 0.238 1.025
InterventionsParlement French (fr) 1.832 0.104 0.157 0.654

Category: Wiki

Wikipedia English (en) 6.893 4.708 7.898 26.616
German (de) 2.877 1.709 3.476 11.252
French (fr) 2.648 1.726 2.940 9.879
Spanish (es) 1.947 1.245 2.124 7.161
Italian (it) 1.870 1.060 1.959 6.161
wikisource French (fr) 0.186 0.523 0.795 3.080
wiktionary French (fr) 0.650 0.0531 0.117 0.347

Category: Math

MathPile English (en) 0.737 3.408 9.637 27.290
Pile (DM_Mathematics) English (en) 0.992 1.746 4.928 8.127

Category: Forum

Pile (StackExchange) English (en) 15.269 4.534 10.275 33.609
Pile (Ubuntu_IRC) English (en) 0.0104 0.867 2.159 5.610

Category: Dialogue

Claire English (en) 0.949 0.818 1.161 4.709 composition details
French (fr) 0.0393 0.210 0.311 1.314 composition details
YouTube French (fr) 0.0375 0.145 0.336 1.003
STAC English (en) 0.0000450 0.0000529 0.000121 0.000327

Category: Multilingual Parallel Corpora

CroissantAligned fr-en 408.029 16.911 25.351 107.003
EuroparlAligned it-en 1.901 0.100 0.151 0.638
fr-en 2.003 0.105 0.143 0.655
es-en 1.961 0.103 0.143 0.631
de-fr 1.792 0.0908 0.141 0.621

Category: Programming

TheStack JAVASCRIPT 21.109 8.526 58.609 141.647
JAVA 20.152 7.421 27.680 89.297
C 8.626 5.916 24.092 57.428
PHP 15.905 4.865 22.883 66.844
PYTHON 12.962 5.434 21.683 64.304
C++ 6.378 4.584 18.835 50.892
C# 10.839 3.574 13.381 46.286
GO 4.730 2.735 10.262 25.738
TYPESCRIPT 10.637 2.617 9.836 28.815
RUST 1.387 0.872 3.241 9.529
RUBY 3.405 0.646 2.392 7.139
SWIFT 1.756 0.553 1.876 6.134
KOTLIN 2.243 0.454 1.758 5.769
SCALA 1.362 0.457 1.587 4.862
TEX 0.398 0.394 1.507 3.805
LUA 0.559 0.318 1.367 3.279
DART 0.933 0.308 1.242 3.864
PERL 0.392 0.297 1.149 2.634
MATHEMATICA 0.0269 0.120 1.117 1.720
ASSEMBLY 0.248 0.209 0.867 1.575
HASKELL 0.545 0.307 0.807 2.364
FORTRAN 0.165 0.192 0.780 1.843
JULIA 0.299 0.152 0.660 1.539
OCAML 0.160 0.130 0.430 1.107
ERLANG 0.0994 0.0657 0.260 0.726
ELIXIR 0.282 0.0731 0.258 0.737
CLOJURE 0.126 0.0448 0.179 0.492
R 0.0392 0.0278 0.158 0.305
MATLAB 0.000967 0.00865 0.0427 0.0372
RACKET 0.00420 0.00479 0.0153 0.0378

Configurable Subsets and Versions

As the Lucie Training Dataset is a collection of multilingual corpora from different sources, it can be divided into subsets based on the source and language of its constituent corpora.
The list of possible configurations is available in the YAML header of this README file. Each configuration corresponds to a pathname pattern in the data subdirectory.

The dataset is also available in the following versions:

  • v1.1 / main (default): The data used for the first (main) pretraining phase of Lucie-7B, which contains approximately 2.3T tokens. The statistics above apply to this version.
  • v1.2: An improved version of the main dataset, where
    • GallicaMonographies and GallicaPress have been filtered aggressively to remove documents with low OCR quality. After filtering, GallicaMonographies contains around 220,000 documents and 20.131 billion tokens. For GallicaPress, we first selected a subset of the original corpus that contained only HTML documents (as opposed to documents in .txt format). This subset contained 1,747,600 documents and 74 billion tokens. After filtering, it contains roughly 989,100 documents and 45.7 billion tokens.
    • The Ubuntu_IRC and PhilPapers subsets of the Pile have been refined by fixing encoding issues and removing documents in languages other than English, French, Spanish, German and Italian. After filtering, Ubuntu_IRC contains about 9,000 documents and 1.745 billion tokens. PhilPapers contains around 28,000 documents and 502 million tokens.
  • v1.2-recent-web : The data used for the second pretraining phase (context extension) of Lucie-7B. This version is identical to v1.2 with the exception that older snapshots of web data (before 2023 for RedPajama and before 2024 for FineWebEdu) have been excluded. All data from v1.1 that were not filtered out remain unchanged in v1.2 and v1.2-recent-web.

Apart from v1.1, which is a git tag, all versions are git branches in the dataset repository (e.g. v1.2).

The Example use in Python section contains example Python code for loading and iterating over the dataset with different configurations, including source, language and version.
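As a sketch of how a configuration and version translate into arguments for the Hugging Face datasets library (the repository id OpenLLM-France/Lucie-Training-Dataset and the use of `revision` for version selection are assumptions to be checked against the Example use in Python section):

```python
def lucie_load_kwargs(config, version=None):
    """Build keyword arguments for datasets.load_dataset().

    config:  a configuration name from the YAML header of this README
             (a source, a language, or a source/language combination).
    version: a git tag or branch of the dataset repository
             ("v1.1", "v1.2", "v1.2-recent-web"); None uses the
             default branch.
    """
    kwargs = {
        "path": "OpenLLM-France/Lucie-Training-Dataset",  # assumed repo id
        "name": config,
        "streaming": True,  # iterate without downloading the full subset
    }
    if version is not None:
        kwargs["revision"] = version
    return kwargs

# Actual loading (requires network access):
# from datasets import load_dataset
# dataset = load_dataset(**lucie_load_kwargs("Wikipedia", "v1.2"))
```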

Details on Data Sources

AmendementsParlement

  • Source: Corpus contributed by OpenLLM partners.
  • Extracted from: Regards citoyens. License: CC BY-SA.
  • Description: A collection of proposed amendments by the French parliament. Documents contain the text of the proposed amendment, the name of the associated law as well as information on who voted on the amendment and what was decided.

AmericanStories

  • Source: dell-research-harvard/AmericanStories. License: CC BY 4.0.
  • Extracted from: Chronicling America. License: Open.
  • Description: "The American Stories dataset is a collection of full article texts extracted from historical U.S. newspaper images. It includes nearly 20 million scans from the public domain Chronicling America collection maintained by the Library of Congress. The dataset is designed to address the challenges posed by complex layouts and low OCR quality in existing newspaper datasets" (from the dataset card). See the dataset composition details for statistics on documents by year. Dataset containing text retrieved through OCR.
  • Pre-processing:
    • Filtering: To filter out documents with excessive OCR errors, the dataset was refined by discarding texts with a perplexity higher than 2310, measured using a CCNET model in English (see code details). The code to compute CCNET perplexity, parallelizing on parquet files, is available here.
  • Citation: Melissa Dell, Jacob Carlson, Tom Bryan, Emily Silcock, Abhishek Arora, Zejiang Shen, Luca D'Amico-Wong, Quan Le, Pablo Querubin and Leander Heldring (2023). "American Stories: A Large-Scale Structured Text Dataset of Historical U.S. Newspapers," arxiv:2308.12477.
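The perplexity filter described above can be sketched generically; the CCNET model itself is not reproduced here, so `scorer` is a stand-in callable:

```python
def filter_by_perplexity(docs, scorer, threshold=2310.0):
    """Keep documents whose perplexity under a language model is at most
    `threshold` (2310 was the cutoff used for AmericanStories).
    `scorer` stands in for the CCNET English model."""
    return [doc for doc in docs if scorer(doc) <= threshold]
```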

Claire (French and English)

  • Sources:
  • Extracted from: see the datacards for the French and English datasets.
  • Description: The Claire datasets are composed of transcripts of spoken conversations -- including parliamentary proceedings, interviews, debates, meetings, and free conversations -- as well as some written conversations from theater plays and written chats. The dataset is designed to help downstream performance of models fine-tuned for tasks requiring the comprehension of spontaneous spoken conversation, such as meeting summarization. Each dialogue is split into speech turns, and each speech turn is labeled with the name of the speaker or a unique identifier. See the composition details for the French dataset and the English dataset for a high-level view of the distribution of different types of documents in each dataset.
  • Citation: Julie Hunter, Jérôme Louradour, Virgile Rennard, Ismaïl Harrando, Guokan Shang, Jean-Pierre Lorré (2023). The Claire French Dialogue Dataset. arXiv:2311.16840.

CroissantAligned

  • Source: croissantllm/croissant_dataset_no_web_data (subset: aligned_36b). License: not specified.
  • Extracted from:
    • Translation pairs: OPUS (99.6% of the data in CroissantAligned). Pairs extracted from OPUS are labeled as "UnbabelFrEn".
    • Thesis abstracts: French thesis abstract pairs. License: ETALAB-Licence-Ouverte-v2.0.
    • Song lyrics: lacoccinelle.
  • Description: CroissantAligned contains samples of parallel French/English (or English/French) data. Data extracted from OPUS takes the form of sentences pairs, where one sentence is in French and the other is in English. OPUS pairs were passed through a custom pipeline designed to select the highest quality translation examples. Selected pairs are labeled "UnbabelFrEn" in the CroissantAligned dataset. The thesis abstract subset contains thesis abstracts paired with translations written by the thesis authors. The song lyrics are translated by contributors to www.lacoccinelle.net. Parallel data are used to boost the multilingual capabilities of models trained on them (Faysse et al.,2024).
  • Pre-processing:
    • Language separation and tagging: The original text field of the Croissant dataset contains a sentence or passage in French or English immediately followed by its translation without any indication of which passage is in which language. The first step was thus to split each text into separate, monolingual passages and tag each passage with the appropriate language code, identified automatically using the langid library (see code details). In the Lucie Training Dataset, the extra metadata field for CroissantAligned contains separate keys, text_fr for French and text_en for English, that store the texts separately.
    • Random combination of texts prefixed by language: To create the text values, each monolingual text was re-paired with its translation, with random separators and various methods of prefixing the text with the language (name or code). This was done as a precaution to prevent models trained on this data from switching languages when generating text, and can be seen as a very basic instruction to translate the source (first) text into the target (second) text (see code details).
  • Citation: Manuel Faysse, Patrick Fernandes, Nuno M. Guerreiro, António Loison, Duarte M. Alves, Caio Corro, Nicolas Boizard, João Alves, Ricardo Rei, Pedro H. Martins, Antoni Bigata Casademunt, François Yvon, André F.T. Martins, Gautier Viaud, Céline Hudelot, Pierre Colombo (2024). "CroissantLLM: A Truly Bilingual French-English Language Model," arXiv:2402.00786.
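The re-pairing step can be illustrated with a minimal sketch; the concrete separator and prefix choices below are illustrative assumptions, not the ones in the actual preparation code:

```python
import random

LANG_NAMES = {"fr": "French", "en": "English"}

def combine_pair(text_1, lang_1, text_2, lang_2, rng):
    """Recombine two aligned monolingual passages into a single training
    text, prefixing each passage with its language (code or name) and
    joining them with a randomly chosen separator."""
    use_code = rng.random() < 0.5
    sep = rng.choice(["\n", "\n\n", " "])
    def tag(lang):
        return lang if use_code else LANG_NAMES.get(lang, lang)
    return f"{tag(lang_1)}: {text_1}{sep}{tag(lang_2)}: {text_2}"
```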

DiscoursPublics

  • Source: Corpus contributed by OpenLLM partners.
  • Extracted from: Vie Publique. License: ETALAB-Licence-Ouverte-v2.0.
  • Description: A collection of public speeches from the principal public actors in France including speeches from the French President starting from 1974 and from the Prime Minister and members of the government starting from 1980.
  • Pre-processing:
    • Text cleaning: the mention of the source url and the number of views were removed from the text.

Europarl and EuroparlAligned

  • Sources:
  • Description: "The Europarl parallel corpus is extracted from the proceedings of the European Parliament. It includes versions in 21 European languages: Romanic (French, Italian, Spanish, Portuguese, Romanian), Germanic (English, Dutch, German, Danish, Swedish), Slavik (Bulgarian, Czech, Polish, Slovak, Slovene), Finni-Ugric (Finnish, Hungarian, Estonian), Baltic (Latvian, Lithuanian), and Greek. The goal of the extraction and processing was to generate sentence aligned text for statistical machine translation systems" (www.statmt.org).
  • Pre-processing:
    • Random combination of aligned texts prefixed by language: The same process as used for the CroissantAligned dataset was applied to the EuroparlAligned dataset (see code details). In the Lucie Training Dataset, the extra field in the metadata for EuroparlAligned provides texts in the two languages under the sub-fields text_1 and text_2, and the corresponding language codes under lang_1 and lang_2.
  • Citation: Philipp Koehn (2005). "Europarl: A Parallel Corpus for Statistical Machine Translation," MT Summit.

Eurovoc

  • Source: EuropeanParliament/Eurovoc. License: EUPL 1.1.
  • Extracted from: Cellar. License: CC BY-4.0.
  • Description: A collection of multilingual documents from the data repository of the Publications Office of the European Union annotated with Eurovoc labels. The corpus contains legal, policy-related, historical and organizational information about the EU. Dataset containing text retrieved through OCR.
  • Pre-processing:
    • Filtering: To filter out documents with excessive OCR errors, the dataset was refined by discarding texts with a perplexity higher than 1500, measured using a CCNET model on the target language (see code details). The code to compute CCNET perplexity, parallelizing on parquet files, is available here.
    • Text cleaning: Artifacts of the form (cid:146), unresolved character identifiers (CIDs) left in the raw texts by PDF extraction, were removed.
  • Citations:

FineWebEdu

  • Source: HuggingFaceFW/fineweb-edu. License: ODC-BY.
  • Extracted from: FineWeb. License: ODC-BY.
  • Description: A 1.3 trillion token selection from FineWeb, which contains 15 trillion tokens of curated data from 96 Common Crawl dumps. Content in FineWebEdu has been selected by a custom-designed classifier for its high-quality, educational content. Most recent crawl: 2024-10 (see the composition details for information about the crawls included in this dataset).
  • Pre-processing:
    • Removing duplicate URLs: URLs were removed if their base domain overlapped with a dataset already in the Lucie Training Dataset (e.g., "philpapers.org"), in order to increase the diversity of content (see code details).
    • Filtering by robots.txt files: robots.txt files were collected as of July 2024, and all documents from sites that disallow CCBot, or for which no robots.txt information could be collected, were removed, in an effort to select data free from opt-out evidence under Article 4 of the European Copyright Directive (2019).
  • Citation: Guilherme Penedo, Hynek Kydlíček, Loubna Ben allal, Anton Lozhkov, Margaret Mitchell, Colin Raffel, Leandro Von Werra, Thomas Wolf (2024). "The FineWeb Datasets: Decanting the Web for the Finest Text Data at Scale," arXiv:2406.17557.
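The base-domain deduplication can be sketched as follows. The naive two-label suffix rule is an assumption (a production pipeline would use a public-suffix list), and the exclusion set shown is just the example from the text:

```python
from urllib.parse import urlparse

# Example from the text; the real exclusion list covers every source
# already present in the Lucie Training Dataset.
EXCLUDED_BASE_DOMAINS = {"philpapers.org"}

def base_domain(url):
    """Naive base-domain extraction: keep the last two host labels."""
    host = urlparse(url).netloc.lower().split(":")[0]
    labels = host.split(".")
    return ".".join(labels[-2:]) if len(labels) >= 2 else host

def keep_url(url):
    """Drop URLs whose base domain is already covered elsewhere."""
    return base_domain(url) not in EXCLUDED_BASE_DOMAINS
```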

GallicaMonographies

  • Source: Corpus contributed by OpenLLM partners. A version is also published here: PleIAs/French-PD-Books. License: Public domain.
  • Extracted from: Gallicagram.
  • Description: A large collection of French monographies in the public domain made available through the French National Library (Gallica). Dataset containing text retrieved through OCR.
  • Pre-processing:
    • Text cleaning for v1.1: To filter out documents with excessive OCR errors, the dataset was split into chunks and chunks were kept if the source language was detected as French by FastText with a confidence score of 0.65 or above, and the perplexity score, as measured using a CCNET model in French, was between 10 and 1000. The code to compute CCNET perplexity, parallelizing on parquet files, is available here.
    • Filtering for v1.2: Using OCR scores provided in the metadata of the source corpus, documents with an OCR score of less than 90 out of 100 were filtered out.
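The v1.1 chunk-level filter can be sketched as below; the fastText and CCNET models are passed in as callables because they are not reproduced here:

```python
def keep_chunk(chunk, lang_id, perplexity,
               min_confidence=0.65, ppl_range=(10.0, 1000.0)):
    """v1.1 filter for Gallica chunks: keep a chunk only if it is
    identified as French with sufficient confidence and its French
    CCNET perplexity falls within the accepted range."""
    lang, confidence = lang_id(chunk)
    if lang != "fr" or confidence < min_confidence:
        return False
    low, high = ppl_range
    return low <= perplexity(chunk) <= high
```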

GallicaPress

  • Source: Corpus contributed by OpenLLM partners. A version is also published here: PleIAs/French-PD-Newspapers. License: Public domain.
  • Extracted from: Gallicagram.
  • Description: A large collection of French newspapers and periodicals in the public domain made available through the French National Library (Gallica). Dataset containing text retrieved through OCR.
  • Pre-processing:
    • Text cleaning for v1.1: To filter out documents with excessive OCR errors, the dataset was split into chunks and chunks were kept if the source language was detected as French by FastText with a confidence score of 0.65 or above, and the perplexity score, as measured using a CCNET model in French, was between 10 and 1000 (see code details). The code to compute CCNET perplexity, parallelizing on parquet files, is available here.
    • Filtering for v1.2: Using OCR scores provided in the metadata of the source corpus, documents with an OCR score of less than 90 out of 100 were filtered out.

Gutenberg

  • Source: Corpus compiled by OpenLLM partners.
  • Extracted from:
  • Description: A collection of free eBooks, manually prepared by human annotators.
  • Pre-processing:
    • Filtering: The dataset was filtered based on the author date of death, so that only texts from authors who died more than 70 years ago are included (80 years for French authors). See code details here. This filtering was done to ensure that the texts are in the public domain.
    • Text cleaning: Headers and footers containing information about Project Gutenberg were removed (see code details).
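The death-date rule can be written down directly; the treatment of missing dates and the reference year here are assumptions:

```python
def in_public_domain(death_year, is_french_author, reference_year=2024):
    """Keep only authors dead for more than 70 years
    (80 years for French authors), per the rule described above."""
    if death_year is None:
        return False  # unknown death date: exclude to be safe
    required_delay = 80 if is_french_author else 70
    return reference_year - death_year > required_delay
```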

HAL

  • Source: bigscience-data/roots_fr_hal_archives_ouvertes. License: Roots dataset.
  • Extracted from: HAL (Open access).
  • Description: A collection of scientific papers and manuscripts distributed through the open science platform HAL. Dataset containing text retrieved through OCR.
  • Pre-processing:
    • Filtering: To filter out documents with excessive OCR errors, the dataset was refined by discarding texts with a perplexity higher than 930, measured using a CCNET model in French (see code details). The code to compute CCNET perplexity, parallelizing on parquet files, is available here.
  • Citation: Hugo Laurençon, Lucile Saulnier, Thomas Wang, Christopher Akiki, Albert Villanova del Moral, Teven Le Scao, Leandro Von Werra, Chenghao Mou, Eduardo González Ponferrada, Huu Nguyen, Jörg Frohberg, Mario Šaško, Quentin Lhoest, Angelina McMillan-Major, Gerard Dupont, Stella Biderman, Anna Rogers, Loubna Ben allal, Francesco De Toni, Giada Pistilli, Olivier Nguyen, Somaieh Nikpoor, Maraim Masoud, Pierre Colombo, Javier de la Rosa, Paulo Villegas, Tristan Thrush, Shayne Longpre, Sebastian Nagel, Leon Weber, Manuel Muñoz, Jian Zhu, Daniel Van Strien, Zaid Alyafeai, Khalid Almubarak, Minh Chien Vu, Itziar Gonzalez-Dios, Aitor Soroa, Kyle Lo, Manan Dey, Pedro Ortiz Suarez, Aaron Gokaslan, Shamik Bose, David Adelani, Long Phan, Hieu Tran, Ian Yu, Suhas Pai, Jenny Chim, Violette Lepercq, Suzana Ilic, Margaret Mitchell, Sasha Alexandra Luccioni, Yacine Jernite (2022). "The BigScience ROOTS Corpus: A 1.6TB Composite Multilingual Dataset," Advances in Neural Information Processing Systems (NeurIPS), 35, 31809-31826.

InterventionsParlement

  • Source: Corpus contributed by OpenLLM partners.
  • Extracted from: Regards citoyens. License: CC BY-SA.
  • Description: Transcripts of remarks made during French parliamentary debates. Each text contains a continuous remark by a single speaker.

LEGI

  • Source: Corpus contributed by OpenLLM partners. A version is also published here: Nicolas-BZRD/DILA_OPENDATA_FR_2023.
  • Extracted from: OpenData (Data collection date: October, 2023).
  • Description: "The French Government Open Data (DILA) Dataset is a collection of text data extracted from various sources provided by the French government, specifically the Direction de l'information légale et administrative (DILA). This dataset contains a wide range of legal, administrative, and legislative documents. The data has been organized into several categories for easy access and analysis" (from the dataset card).

MathPile (Commercial)

  • Source: GAIR/MathPile_Commercial. License: CC BY-SA 4.0.
  • Extracted from: MathPile. License: CC BY-SA-NC 4.0.
  • Description: A preprocessed collection of documents focused on math, including Textbooks, arXiv, Wikipedia, ProofWiki, StackExchange, and web pages from Common Crawl. The content targets a range of levels, from kindergarten through postgraduate level. MathPile_Commercial was obtained by removing documents from MathPile that do not allow commercial use.
  • Pre-processing:
    • Formatting: Converted the content of StackExchange questions and answers to match the {"text": value} format, using the following formula:
    text = sample["question"]["Body"] + "\n\n".join([answer["Body"] for answer in sample["answers"]])
    
  • Citation: Zengzhi Wang, Rui Xia and Pengfei Liu (2023). "Generative AI for Math: Part I -- MathPile: A Billion-Token-Scale Pretraining Corpus for Math," arXiv:2312.17120.
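As a runnable rendering of the formula above (the field names are taken verbatim from the formula):

```python
def format_stackexchange(sample):
    """Flatten a StackExchange sample into a single text field by
    concatenating the question body with the answer bodies,
    joining answers with blank lines (as in the formula above)."""
    return sample["question"]["Body"] + "\n\n".join(
        answer["Body"] for answer in sample["answers"]
    )
```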

OpenData

  • Source: Nicolas-BZRD/DILA_OPENDATA_FR_2023 (balo, dole, inca, kali, and sarde subsets). License: ODC-BY.
  • Extracted from: OpenData (Data collection date: October, 2023).
  • Description: "The French Government Open Data (DILA) Dataset is a collection of text data extracted from various sources provided by the French government, specifically the Direction de l'information légale et administrative (DILA). This dataset contains a wide range of legal, administrative, and legislative documents. The data has been organized into several categories for easy access and analysis" (from the dataset card).

OpenEdition

  • Source: Corpus contributed by OpenLLM partners.
  • Extracted from: Open Edition. License: Open Edition Books.
  • Description: A collection of scientific books, journal articles, blog entries and event descriptions.

PeS2o (v2)

  • Source: allenai/peS2o version v2. License: ODC BY-v1.0.
  • Extracted from: S2ORC (see aclanthology). License: ODC BY-v1.0.
  • Description: A preprocessed collection of academic papers designed for pre-training of language models. PeS2o is composed of two subsets: one containing full papers and one containing only paper titles and abstracts. Dataset containing (some) text retrieved through OCR. Knowledge cutoff: 2023-01-03.
  • Citation: Luca Soldaini and Kyle Lo (2023). "peS2o (Pretraining Efficiently on S2ORC) Dataset," Allen Institute for AI. GitHub.

Pile (Uncopyrighted)

  • Source: monology/pile-uncopyrighted. License: Other.
  • Extracted from: FreeLaw, StackExchange, USPTO Backgrounds, DM Mathematics, Ubuntu IRC, PhilPapers, NIH ExPorter from The Pile. License: MIT.
  • Description (from the Datasheet):
    • FreeLaw: "The Free Law Project is US registered non-profit that provide access to millions of legal opinions and analytical tools for academic studies in the legal realm."
    • StackExchange: "The StackExchange dataset is a dump of anonymized user-contributed content on the Stack Exchange network, a popular collection of websites centered around user-contributed questions and answers."
    • USPTO Backgrounds: "The USPTO Backgrounds dataset is a set of background sections from patents granted by the United States Patent and Trademark Office, derived from its published bulk archives."
    • DM Mathematics: "The DeepMind Mathematics dataset consists of a collection of mathematical problems such as algebra, arithmetic, calculus, number theory, and probability, formatted as natural language prompts Saxton et al., 2019."
    • Ubuntu IRC: "The Ubuntu IRC dataset is derived from the publicly available chatlogs of all Ubuntu-related channels on the Freenode IRC chat server."
    • PhilPapers: a dataset of open access philosophy publications from an international database maintained by the Center for Digital Philosophy at the University of Western Ontario.
    • NIH ExPORTER: "The NIH Grant abstracts provides a bulk-data repository for awarded applications through the ExPORTER service covering the fiscal years 1985-present."
  • Pre-processing (v1.2 only):
    • Filtering of PhilPapers: Papers were removed if their language, detected using Stanza, was not classified as English, French, German, Spanish or Italian.
    • Filtering and text cleaning of Ubuntu IRC: Texts from some channels were excluded to avoid data from languages other than English, French, German, Spanish or Italian and certain encoding errors were fixed (see code details here).
  • Citations:
    • Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, Shawn Presser, Connor Leahy (2020). "The Pile: An 800GB Dataset of Diverse Text for Language Modeling," arXiv:2101.00027.
    • Stella Biderman, Kieran Bicheno, Leo Gao (2022). "Datasheet for the Pile," arXiv:2201.07311.

QuestionsEcritesParlement

  • Source: Corpus contributed by OpenLLM partners.
  • Extracted from: Regards citoyens. License: CC BY-SA.
  • Description: Collection of long written questions, read during a session at the French National Assembly. Questions are asked by a member of the French parliament and addressed to a minister (who is given two months to respond).

RedPajama (v2)

  • Source: togethercomputer/RedPajama-Data-V2. License: Apache 2.0 (data preparation code), Not specified (data) but see Common Crawl terms of use.

  • Extracted from: Common Crawl.

  • Description: "RedPajama-V2 is an open dataset for training large language models. The dataset includes over 100B text documents coming from 84 CommonCrawl snapshots and processed using the CCNet pipeline. Out of these, there are 30B documents in the corpus that additionally come with quality signals, and 20B documents that are deduplicated" (from GitHub). Most recent crawl for French data in the Lucie Training Dataset v1.1: 2023-14. (For more details on the time periods covered by crawls in this dataset see the composition details for French, German, Italian and Spanish.)

  • Pre-processing and deduplication:

    • URL filtering:
      • Removing duplicate URLs: URLs were removed if their base domain overlapped with a dataset already in the Lucie Training Dataset (e.g., "theses.fr") in order to increase diversity of content (see code details).
      • Filtering certain toxic content: URLs from a list of blacklisted content were removed (see code details).
      • Filtering by robots.txt files: robots.txt files were collected, and all documents were removed for which CCBot was disallowed or for which no robots.txt information could be collected as of July 2024, in an effort to select data free of opt-out evidence according to Article 4 of the European Copyright Directive (2019).
    • Filtering: A series of filters were applied using quality signals already available in the dataset. This includes (see code details):
      • CCNet perplexity below 10 or above 1000
      • C4 filtering (including removal of documents that contain toxic words)
      • Gopher filtering and repetition removal
      • Redpajama document deduplication
    • Removal of personally identifying information (PII): email addresses and IP addresses were replaced with random addresses (see code details).
    • MinHash deduplication was performed on each snapshot and language independently, as proposed in FineWeb. For the MinHash configuration, see code details.

    The Datatrove library was used to perform both filtering and deduplication stages.
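As an illustration, the kind of MinHash near-duplicate detection performed in the last step can be sketched in pure Python. This is a didactic sketch, not the Datatrove implementation; the shingle size and number of hash functions below are illustrative choices, not the production configuration:

```python
import hashlib


def shingles(text, n=5):
    """Feature set of a document: its word n-grams (5-grams here, illustrative)."""
    words = text.lower().split()
    if len(words) < n:
        return {" ".join(words)}
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}


def minhash_signature(text, num_perm=64):
    """One minimum hash value per salted hash function (simulating permutations)."""
    sig = []
    for seed in range(num_perm):
        sig.append(min(
            int.from_bytes(hashlib.sha1(f"{seed}:{s}".encode()).digest()[:8], "big")
            for s in shingles(text)
        ))
    return sig


def estimated_jaccard(sig_a, sig_b):
    """The fraction of matching signature slots estimates Jaccard similarity."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)
```

Documents whose estimated similarity exceeds a threshold are treated as near-duplicates and all but one copy is dropped; in the actual pipeline this is done per snapshot and per language.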

  • Citation: Together Computer (2023). "RedPajama-Data-v2: an Open Dataset with 30 Trillion Tokens for Training Large Language Models," GitHub.
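The robots.txt criterion described in the pre-processing steps (keep a document only if CCBot is not disallowed) can be approximated with Python's standard library. The function below is an illustrative sketch, not the project's code, and omits the handling of collection failures:

```python
from urllib.robotparser import RobotFileParser


def ccbot_allowed(robots_txt_lines, url):
    """Return True if the site's robots.txt lets CCBot fetch the given URL."""
    parser = RobotFileParser()
    parser.parse(robots_txt_lines)
    return parser.can_fetch("CCBot", url)


# A site that disallows CCBot entirely:
ccbot_allowed(["User-agent: CCBot", "Disallow: /"], "https://example.com/page")
```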

STAC

  • Source: STAC. License: CC BY-SA-NC 4.0.
  • Description: A collection of multiparty chats from an online version of the game Settlers of Catan. The full STAC corpus contains annotations for discourse structure. We use only the text of the chats.
  • Citation: Nicholas Asher, Julie Hunter, Mathieu Morey, Farah Benamara and Stergos Afantenos (2016). "Discourse structure and dialogue acts in multiparty dialogue: the STAC corpus," The Tenth International Conference on Language Resources and Evaluation (LREC 2016). European Language Resources Association, pp. 2721-2727.

TheStack (v1.2)

  • Source: bigcode/the-stack-dedup. License: Other (mixture of copyleft licenses).
  • Extracted from: GitHub via GHarchive. Mixed licenses for source.
  • Description: "The Stack contains over 6TB of permissively-licensed source code files covering 358 programming languages. The dataset was created as part of the BigCode Project, an open scientific collaboration working on the responsible development of Large Language Models for Code (Code LLMs). The Stack serves as a pre-training dataset for Code LLMs, i.e., code-generating AI systems which enable the synthesis of programs from natural language descriptions as well as from other code snippets. This is the near-deduplicated version with 3TB data" (from the dataset card).
  • Citation: Denis Kocetkov, Raymond Li, Loubna Ben Allal, Jia Li, Chenghao Mou, Carlos Muñoz Ferrandis, Yacine Jernite, Margaret Mitchell, Sean Hughes, Thomas Wolf, Dzmitry Bahdanau, Leandro von Werra and Harm de Vries (2022). "The Stack: 3 TB of permissively licensed source code," arxiv:2211.15533.

Theses

  • Source: Corpus contributed by OpenLLM partners.
  • Extracted from: theses.fr (License: Licence Ouverte / Open Licence version 2.0) and HAL (Open access).
  • Description: A collection of doctoral theses published in France. Dataset containing text retrieved through OCR.
  • Pre-processing:
    • Text cleaning:
      • Title pages about HAL, pages containing a significant fraction of control characters, and duplicate lines were removed (see code details).
      • Because the results of OCR on tables and graphics can give rise to garbage text, the text was cleaned by removing the most suspicious chunks. In particular, a chunk was removed if it was not detected as being written in French, English, Spanish, German or Italian, or if the perplexity of a CCNet Language Model on the chunk was higher than 2000 (see code details). The code to compute CCNet perplexity, parallelizing over parquet files, is available here.
    • Filtering: Texts with fewer than 1000 words or 10000 characters were removed (see code details).
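The chunk-level cleaning described above can be sketched as follows. Note that `detect_language` and `ccnet_perplexity` are hypothetical stand-ins for the actual detectors, not the project's API:

```python
ALLOWED_LANGUAGES = {"fr", "en", "es", "de", "it"}
MAX_PERPLEXITY = 2000.0  # chunks above this CCNet LM perplexity are treated as OCR garbage


def keep_chunk(chunk, detect_language, ccnet_perplexity):
    """Keep a chunk only if its language is recognized and its perplexity suggests real text."""
    return (
        detect_language(chunk) in ALLOWED_LANGUAGES
        and ccnet_perplexity(chunk) <= MAX_PERPLEXITY
    )


def clean_document(chunks, detect_language, ccnet_perplexity):
    """Filter a document, represented as a list of text chunks."""
    return [c for c in chunks if keep_chunk(c, detect_language, ccnet_perplexity)]
```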

Wikipedia, Wikisource, Wiktionary

YouTube

  • Source: Corpus contributed by LINAGORA Labs and LeVoiceLab.
  • Extracted from: YouTube.
  • Description: French subtitles from videos published with permissive licenses on YouTube.
  • Extraction pipeline description:
    • Searching for YouTube videos likely to be in French: searches were generated automatically from random sequences of words extracted from a corpus of French journalistic articles (initially obtained with a web-crawling tool applied to publicly accessible news and media sites such as Huffington Post, 20 Minutes, Le Parisien, Actu, Numerama, Slate, etc.).
      Selection of videos with subtitles labeled as "French," excluding those marked as "automatically generated."
      At this stage: 52,778 videos selected, corresponding to 10,654 hours of audio.
    • Selection of videos for which automatic language identification confirms that the subtitles are in French with a sufficient confidence score:
      At this stage: 51,934 videos selected, corresponding to 10,425 hours of audio.
    • Selection of videos whose subtitles contain uppercase, lowercase, and punctuation marks:
      This step filters out automatically generated subtitles created with speech recognition tools.
      At this stage: 45,488 videos selected, corresponding to 8,904 hours of audio.
    • Extraction of audio tracks from the selected videos.
    • Automatic formatting of transcripts obtained from subtitles: removal of emojis, of sound event annotations in brackets (like "[Music]"), and of extra text such as "subtitled by XXX" appearing in the final seconds of the video.
    • Selection of videos where an automatic speech recognition tool correctly transcribes the first 30 seconds with a minimum recall and precision rate:
      At this stage: 37,513 videos selected, corresponding to 7,541 hours of audio.
    • Realignment of the transcript: ensuring accurate timestamps in the transcriptions based on the subtitles, and excluding audio where alignment failed.
      At this stage: 36,618 videos selected, corresponding to 6,729 hours of audio.
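Two of the steps above, the heuristic for detecting human-made subtitles and the transcript cleanup, can be sketched as follows. This is a simplified illustration: the punctuation set, bracket pattern, and credit pattern are assumptions, not the production rules:

```python
import re


def looks_human_made(subtitles):
    """Auto-generated captions are typically lowercase-only with no punctuation,
    so require a mix of letter cases plus at least one punctuation mark."""
    return (
        any(c.isupper() for c in subtitles)
        and any(c.islower() for c in subtitles)
        and any(c in ".,!?;:" for c in subtitles)
    )


def clean_transcript(subtitles):
    """Drop bracketed sound events like "[Musique]" and subtitle-credit lines."""
    text = re.sub(r"\[[^\]]*\]", "", subtitles)          # sound-event annotations
    text = re.sub(r"(?im)^sous-titr[ée].*$", "", text)   # e.g. "Sous-titrage par XXX" (assumed pattern)
    return re.sub(r"[ \t]+", " ", text).strip()          # collapse leftover spacing
```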

Example use in Python

Load the dataset

Load and iterate over the full dataset using the datasets library:

from datasets import load_dataset

dataset = load_dataset("OpenLLM-France/Lucie-Training-Dataset", split="train", streaming=True)

for sample in dataset:
    text = sample["text"]
    # … do something with the text

Iterate over a subset

Several configurations are available to select a language, a source, or both, illustrated in the following examples.

The list of possible configurations can be obtained programmatically:

from datasets import load_dataset_builder

config_names = list(load_dataset_builder("OpenLLM-France/Lucie-Training-Dataset").builder_configs)

print(config_names)
['default', 'en', 'fr', 'de', 'es', 'it', 'de,fr', 'es,en', 'fr,en', 'it,en', 'natural', 'code', 'code-assembly', 'code-c', 'code-c#', 'code-c++', 'code-clojure', 'code-dart', 'code-elixir', 'code-erlang', 'code-fortran', 'code-go', 'code-haskell', 'code-java', 'code-javascript', 'code-julia', 'code-kotlin', 'code-lua', 'code-mathematica', 'code-matlab', 'code-ocaml', 'code-perl', 'code-php', 'code-python', 'code-r', 'code-racket', 'code-ruby', 'code-rust', 'code-scala', 'code-swift', 'code-tex', 'code-typescript', 'AmendementsParlement', 'AmericanStories', 'Claire', 'Claire-en', 'Claire-fr', 'CroissantAligned', 'DiscoursPublics', 'Europarl', 'Europarl-de', 'Europarl-en', 'Europarl-es', 'Europarl-fr', 'EuroparlAligned', 'EuroparlAligned-de,fr', 'EuroparlAligned-es,en', 'EuroparlAligned-fr,en', 'EuroparlAligned-it,en', 'Eurovoc', 'Eurovoc-de', 'Eurovoc-en', 'Eurovoc-es', 'Eurovoc-it', 'FineWebEdu', 'GallicaMonographies', 'GallicaPress', 'Gutenberg', 'Gutenberg-de', 'Gutenberg-en', 'Gutenberg-es', 'Gutenberg-fr', 'Gutenberg-it', 'HAL', 'InterventionsParlement', 'LEGI', 'MathPile', 'OpenData', 'OpenEdition', 'PeS2o', 'PeS2o-s2ag', 'PeS2o-s2orc', 'Pile', 'Pile-DM_Mathematics', 'Pile-FreeLaw', 'Pile-NIH_ExPorter', 'Pile-PhilPapers', 'Pile-StackExchange', 'Pile-USPTO_Backgrounds', 'Pile-Ubuntu_IRC', 'QuestionsEcritesParlement', 'RedPajama', 'RedPajama-de', 'RedPajama-es', 'RedPajama-fr', 'RedPajama-it', 'Stac', 'TheStack', 'Theses', 'Wikipedia', 'Wikipedia-de', 'Wikipedia-en', 'Wikipedia-es', 'Wikipedia-fr', 'Wikipedia-it', 'Wikisource', 'Wiktionary', 'YouTube']

Below are some examples of how to load data from different sources and in different languages.

Load data in French:

from datasets import load_dataset

kwargs = dict(split="train", streaming=True)

dataset = load_dataset("OpenLLM-France/Lucie-Training-Dataset", "fr", **kwargs)

Load data where French and English are aligned:

dataset = load_dataset("OpenLLM-France/Lucie-Training-Dataset", "fr,en", **kwargs)

Load code data (files in all programming languages):

dataset = load_dataset("OpenLLM-France/Lucie-Training-Dataset", "code", **kwargs)

Load data corresponding to Python code:

dataset = load_dataset("OpenLLM-France/Lucie-Training-Dataset", "code-python", **kwargs)

Load data from Wikipedia (in all available languages):

dataset = load_dataset("OpenLLM-France/Lucie-Training-Dataset", "Wikipedia", **kwargs)

Load data from French pages of Wikipedia (wikipedia.fr):

dataset = load_dataset("OpenLLM-France/Lucie-Training-Dataset", "Wikipedia-fr", **kwargs)

Load the Pile dataset:

dataset = load_dataset("OpenLLM-France/Lucie-Training-Dataset", "Pile", **kwargs)

Load the subset "PhilPapers" from the Pile dataset:

dataset = load_dataset("OpenLLM-France/Lucie-Training-Dataset", "Pile-PhilPapers", **kwargs)

Load a specific version

You can load a specific version with the datasets Python package using the revision parameter of load_dataset(…):

from datasets import load_dataset

kwargs = dict(split="train", streaming=True)

name = None # or a configuration (e.g. "fr", "code-python", "Wikipedia-fr", "Pile-PhilPapers")

dataset = load_dataset("OpenLLM-France/Lucie-Training-Dataset", name, revision="v1.2", **kwargs)

Citation

When using the Lucie Training Dataset, please cite the following paper:

✍ Olivier Gouvert, Julie Hunter, Jérôme Louradour, Christophe Cérisara, Evan Dufraisse, Yaya Sy, Laura Rivière, Jean-Pierre Lorré (2025). The Lucie-7B LLM and the Lucie Training Dataset: Open resources for multilingual language generation. arxiv:2503.12294.

@misc{openllm2025lucie,
      title={The Lucie-7B LLM and the Lucie Training Dataset: Open resources for multilingual language generation}, 
      author={Olivier Gouvert and Julie Hunter and Jérôme Louradour and Christophe Cerisara and Evan Dufraisse and Yaya Sy and Laura Rivière and Jean-Pierre Lorré and OpenLLM-France community},
      year={2025},
      eprint={2503.12294},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2503.12294}, 
}

Acknowledgements

The Lucie Training Dataset was created by members of LINAGORA (Olivier Gouvert, Julie Hunter, Jérôme Louradour, Jean-Pierre Lorré) and the OpenLLM-France community.

We thank in particular Rachel Bawden (INRIA), Clément Bénesse (Opsci), Christophe Cérisara (LORIA), Evan Dufraisse (CEA List), Olivier Ferret (CEA List), Joël Gombin (Opsci), Ismaïl Harrando (LINAGORA), Jordan Ricker (Opsci), Guokan Shang (MBZUAI), and Yaya Sy (LORIA) for their helpful input.

Data storage and significant parts of the data processing were made possible through the HPC resources from GENCI–IDRIS (Grant 2024-GC011015444).

Contact

contact@openllm-france.fr