Dataset Fields
- id (int64): row identifier, ranging from 0 to ~6.41k
- repo_name (string, 2–91 chars): name of the source GitHub repository
- repo_owner (string, 2–39 chars): owner of the source repository
- file_link (string, 84–311 chars): GitHub URL of the notebook file
- line_link (string, 91–317 chars): GitHub URL with a line anchor into the notebook file
- path (string, 8–227 chars): path of the notebook within the repository
- content_sha (string, 64 chars): hash of the notebook content
- content (string, 1.11k–29.2M chars): raw notebook content as JSON
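The 64-character content_sha field is consistent with a SHA-256 hex digest of the content field, though the card does not state the hash function explicitly. A minimal integrity check under that assumption:

```python
import hashlib

def matches_sha(content: str, expected_sha: str) -> bool:
    """Check whether expected_sha is the SHA-256 hex digest of content.

    Assumption: content_sha is SHA-256 over the UTF-8 bytes of the
    content column; this is inferred only from the 64-char digest length.
    """
    return hashlib.sha256(content.encode("utf-8")).hexdigest() == expected_sha
```

If the digests do not match for rows you load, the hash may be computed over different bytes (e.g. the original file on disk rather than the stored string).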
Dataset Summary
The dataset contains 10,000 Jupyter notebooks, each of which contains at least one error. In addition to the notebook content, the dataset provides information about the repository where each notebook is stored; this information can help restore the execution environment if needed.
Getting Started
This dataset is organized so that it can be loaded directly via the Hugging Face datasets library. We recommend streaming due to the large size of the files.
```python
import nbformat
from datasets import load_dataset

# Stream the dataset to avoid downloading every notebook up front
dataset = load_dataset(
    "JetBrains-Research/jupyter-errors-dataset", split="test", streaming=True
)

# Parse the first row's raw JSON string into a notebook object
row = next(iter(dataset))
notebook = nbformat.reads(row["content"], as_version=nbformat.NO_CONVERT)
```
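Since every notebook contains at least one error, a natural next step is locating the failing cells. In the standard nbformat JSON layout, error outputs are entries with "output_type": "error" carrying ename, evalue, and traceback. A small helper, using only the standard library (the function name is illustrative, not part of the dataset's API):

```python
import json

def find_error_cells(notebook_json: str):
    """Return (cell_index, ename, evalue) for every code cell whose
    outputs include an error, per the nbformat JSON layout."""
    nb = json.loads(notebook_json)
    errors = []
    for i, cell in enumerate(nb.get("cells", [])):
        if cell.get("cell_type") != "code":
            continue
        for out in cell.get("outputs", []):
            if out.get("output_type") == "error":
                errors.append((i, out.get("ename"), out.get("evalue")))
    return errors
```

For a streamed row, `find_error_cells(row["content"])` lists the error locations without fully materializing the notebook object.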
Citation
```
@misc{JupyterErrorsDataset,
  title = {Dataset of Errors in Jupyter Notebooks},
  author = {Konstantin Grotov and Sergey Titov and Yaroslav Zharov and Timofey Bryksin},
  year = {2023},
  publisher = {HuggingFace},
  journal = {HuggingFace repository},
  howpublished = {\url{https://huggingface.co/datasets/JetBrains-Research/jupyter-errors-dataset}},
}
```