| repo_id (string, 15–89 chars) | file_path (string, 27–180 chars) | content (string, 1 char to 2.23M chars) | __index_level_0__ (int64, always 0) |
|---|---|---|---|
hf_public_repos/datasets/metrics | hf_public_repos/datasets/metrics/sacrebleu/sacrebleu.py | # Copyright 2020 The HuggingFace Datasets Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or ... | 0 |
hf_public_repos/datasets/metrics | hf_public_repos/datasets/metrics/glue/README.md | # Metric Card for GLUE
## Metric description
This metric is used to compute the GLUE evaluation metric associated with each [GLUE dataset](https://huggingface.co/datasets/glue).
GLUE, the General Language Understanding Evaluation benchmark, is a collection of resources for training, evaluating, and analyzing natural la... | 0 |
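The GLUE card excerpt above is per-task: a config name selects which metric(s) to compute. A minimal sketch of invoking it through this repo's metric loader (MRPC chosen arbitrarily; label values are made up):

```python
from datasets import load_metric

# Each GLUE config maps to that task's metric(s); MRPC reports accuracy and F1.
metric = load_metric("glue", "mrpc")
results = metric.compute(predictions=[0, 1, 1], references=[0, 1, 0])
print(results)  # {'accuracy': 0.666..., 'f1': 0.666...}
```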
hf_public_repos/datasets/metrics | hf_public_repos/datasets/metrics/glue/glue.py | # Copyright 2020 The HuggingFace Datasets Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or ... | 0 |
hf_public_repos/datasets/metrics | hf_public_repos/datasets/metrics/cer/README.md | # Metric Card for CER
## Metric description
Character error rate (CER) is a common metric of the performance of an automatic speech recognition (ASR) system. CER is similar to Word Error Rate (WER), but operates on characters instead of words.
Character error rate can be computed as:
`CER = (S + D + I) / N = (S + D... | 0 |
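The CER formula above maps directly onto the metric script in this row. A minimal sketch, assuming the `jiwer` dependency this metric relies on is installed:

```python
from datasets import load_metric

cer = load_metric("cer")
# "wrold" vs "world" costs 2 character substitutions; N = 11 reference characters,
# so (S + D + I) / N = 2 / 11 ≈ 0.18.
score = cer.compute(predictions=["hello wrold"], references=["hello world"])
print(score)
```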
hf_public_repos/datasets/metrics | hf_public_repos/datasets/metrics/cer/test_cer.py | # Copyright 2021 The HuggingFace Datasets Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or ... | 0 |
hf_public_repos/datasets/metrics | hf_public_repos/datasets/metrics/cer/cer.py | # Copyright 2021 The HuggingFace Datasets Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or ... | 0 |
hf_public_repos/datasets/metrics | hf_public_repos/datasets/metrics/recall/README.md | # Metric Card for Recall
## Metric Description
Recall is the fraction of the positive examples that were correctly labeled by the model as positive. It can be computed with the equation:
Recall = TP / (TP + FN)
Where TP is the number of true positives and FN is the number of false negatives.
## How to Use
At mini... | 0 |
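A minimal sketch of the recall metric described above (it wraps scikit-learn under the hood; the labels here are invented):

```python
from datasets import load_metric

recall = load_metric("recall")
# TP = 2 and FN = 1 for the positive class, so recall = 2 / (2 + 1) ≈ 0.67.
results = recall.compute(predictions=[1, 0, 1, 0], references=[1, 1, 1, 0])
print(results)  # {'recall': 0.666...}
```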
hf_public_repos/datasets/metrics | hf_public_repos/datasets/metrics/recall/recall.py | # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.... | 0 |
hf_public_repos/datasets/metrics | hf_public_repos/datasets/metrics/precision/README.md | # Metric Card for Precision
## Metric Description
Precision is the fraction of correctly labeled positive examples out of all of the examples that were labeled as positive. It is computed via the equation:
Precision = TP / (TP + FP)
where TP is the number of true positives (i.e. the examples correctly labeled as positive) and... | 0 |
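And the mirror-image sketch for precision (same caveats as the recall example above):

```python
from datasets import load_metric

precision = load_metric("precision")
# TP = 2 and FP = 1, so precision = 2 / (2 + 1) ≈ 0.67.
results = precision.compute(predictions=[1, 1, 1, 0], references=[1, 0, 1, 0])
print(results)  # {'precision': 0.666...}
```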
hf_public_repos/datasets/metrics | hf_public_repos/datasets/metrics/precision/precision.py | # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.... | 0 |
hf_public_repos/datasets/metrics | hf_public_repos/datasets/metrics/matthews_correlation/matthews_correlation.py | # Copyright 2021 The HuggingFace Datasets Authors and the current dataset script contributor.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.... | 0 |
hf_public_repos/datasets/metrics | hf_public_repos/datasets/metrics/matthews_correlation/README.md | # Metric Card for Matthews Correlation Coefficient
## Metric Description
The Matthews correlation coefficient is used in machine learning as a
measure of the quality of binary and multiclass classifications. It takes
into account true and false positives and negatives and is generally
regarded as a balanced measure wh... | 0 |
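A minimal sketch of computing the coefficient through this row's metric script (labels invented for illustration):

```python
from datasets import load_metric

mcc = load_metric("matthews_correlation")
# Ranges from -1 (total disagreement) through 0 (chance level) to +1 (perfect).
results = mcc.compute(predictions=[1, 0, 1, 1], references=[1, 0, 0, 1])
print(results)  # {'matthews_correlation': 0.577...} for this toy example
```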
hf_public_repos/datasets/metrics | hf_public_repos/datasets/metrics/mauve/README.md | # Metric Card for MAUVE
## Metric description
MAUVE is a library built on PyTorch and HuggingFace Transformers to measure the gap between neural text and human text with the eponymous MAUVE measure. It summarizes both Type I and Type II errors measured softly using [Kullback–Leibler (KL) divergences](https://en.wikip... | 0 |
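A heavily hedged sketch of calling this metric: it requires the external `mauve-text` package and downloads a featurization model (GPT-2 by default), so treat this as illustrative only:

```python
from datasets import load_metric

mauve = load_metric("mauve")  # needs `pip install mauve-text`; slow on first run
out = mauve.compute(predictions=["generated text ..."], references=["human text ..."])
print(out.mauve)  # scalar in (0, 1]; higher means closer to human text
```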
hf_public_repos/datasets/metrics | hf_public_repos/datasets/metrics/mauve/mauve.py | # coding=utf-8
# Copyright 2020 The HuggingFace Datasets Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by app... | 0 |
hf_public_repos/datasets/metrics | hf_public_repos/datasets/metrics/exact_match/exact_match.py | # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.... | 0 |
hf_public_repos/datasets/metrics | hf_public_repos/datasets/metrics/exact_match/README.md | # Metric Card for Exact Match
## Metric Description
A given predicted string's exact match score is 1 if it is exactly the same as its reference string, and is 0 otherwise.
- **Example 1**: The exact match score of prediction "Happy Birthday!" is 0, given its reference is "Happy New Year!".
- **Example 2**: The exact ... | 0 |
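A minimal sketch using the two examples from the card (this implementation reports the score on a 0–100 scale):

```python
from datasets import load_metric

exact_match = load_metric("exact_match")
preds = ["Happy Birthday!", "Happy New Year!"]
refs = ["Happy New Year!", "Happy New Year!"]
results = exact_match.compute(predictions=preds, references=refs)
print(results)  # {'exact_match': 50.0} -- one of the two strings matches
```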
hf_public_repos/datasets/metrics | hf_public_repos/datasets/metrics/chrf/README.md | # Metric Card for chrF(++)
## Metric Description
ChrF and ChrF++ are two MT evaluation metrics that use the F-score statistic for character n-gram matches. ChrF++ additionally includes word n-grams, which correlate more strongly with direct assessment. We use the implementation that is already present in sacrebleu.
... | 0 |
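A minimal sketch of the chrF metric above; note that sacrebleu expects a list of reference *lists*, and that passing `word_order=2` to `compute` switches to chrF++:

```python
from datasets import load_metric

chrf = load_metric("chrf")
predictions = ["The cat sat on the mat."]
references = [["The cat sat on the mat."]]  # one list of references per prediction
results = chrf.compute(predictions=predictions, references=references)
print(results["score"])  # 100.0 for an exact match
```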
hf_public_repos/datasets/metrics | hf_public_repos/datasets/metrics/chrf/chrf.py | # Copyright 2021 The HuggingFace Datasets Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or ... | 0 |
hf_public_repos/datasets | hf_public_repos/datasets/utils/release.py | # Copyright 2021 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicabl... | 0 |
hf_public_repos/datasets | hf_public_repos/datasets/tests/test_info_utils.py | import pytest
import datasets.config
from datasets.utils.info_utils import is_small_dataset
@pytest.mark.parametrize("dataset_size", [None, 400 * 2**20, 600 * 2**20])
@pytest.mark.parametrize("input_in_memory_max_size", ["default", 0, 100 * 2**20, 900 * 2**20])
def test_is_small_dataset(dataset_size, input_in_memory... | 0 |
hf_public_repos/datasets | hf_public_repos/datasets/tests/test_table.py | import copy
import pickle
import warnings
from typing import List, Union
import numpy as np
import pyarrow as pa
import pytest
import datasets
from datasets import Sequence, Value
from datasets.features.features import Array2D, Array2DExtensionType, ClassLabel, Features, Image
from datasets.table import (
Concate... | 0 |
hf_public_repos/datasets | hf_public_repos/datasets/tests/test_hf_gcp.py | import os
from tempfile import TemporaryDirectory
from unittest import TestCase
import pytest
from absl.testing import parameterized
from datasets import config
from datasets.arrow_reader import HF_GCP_BASE_URL
from datasets.builder import DatasetBuilder
from datasets.dataset_dict import IterableDatasetDict
from data... | 0 |
hf_public_repos/datasets | hf_public_repos/datasets/tests/test_load.py | import importlib
import os
import pickle
import shutil
import tempfile
import time
from hashlib import sha256
from multiprocessing import Pool
from pathlib import Path
from unittest import TestCase
from unittest.mock import patch
import dill
import pyarrow as pa
import pytest
import requests
import datasets
from data... | 0 |
hf_public_repos/datasets | hf_public_repos/datasets/tests/test_metadata_util.py | import re
import sys
import tempfile
import unittest
from pathlib import Path
import pytest
import yaml
from huggingface_hub import DatasetCard, DatasetCardData
from datasets.config import METADATA_CONFIGS_FIELD
from datasets.info import DatasetInfo
from datasets.utils.metadata import MetadataConfigs
def _dedent(st... | 0 |
hf_public_repos/datasets | hf_public_repos/datasets/tests/test_filesystem.py | import importlib
import os
import fsspec
import pytest
from fsspec import register_implementation
from fsspec.registry import _registry as _fsspec_registry
from datasets.filesystems import COMPRESSION_FILESYSTEMS, extract_path_from_uri, is_remote_filesystem
from .utils import require_lz4, require_zstandard
def tes... | 0 |
hf_public_repos/datasets | hf_public_repos/datasets/tests/test_iterable_dataset.py | import pickle
from copy import deepcopy
from itertools import chain, islice
import numpy as np
import pandas as pd
import pyarrow as pa
import pyarrow.compute as pc
import pytest
from datasets import Dataset, load_dataset
from datasets.combine import concatenate_datasets, interleave_datasets
from datasets.features im... | 0 |
hf_public_repos/datasets | hf_public_repos/datasets/tests/test_experimental.py | import unittest
import warnings
from datasets.utils import experimental
@experimental
def dummy_function():
return "success"
class TestExperimentalFlag(unittest.TestCase):
def test_experimental_warning(self):
with warnings.catch_warnings(record=True) as w:
warnings.simplefilter("always"... | 0 |
hf_public_repos/datasets | hf_public_repos/datasets/tests/test_inspect.py | import os
from pathlib import Path
import pytest
from datasets.inspect import (
get_dataset_config_info,
get_dataset_config_names,
get_dataset_default_config_name,
get_dataset_infos,
get_dataset_split_names,
inspect_dataset,
inspect_metric,
)
from datasets.packaged_modules.csv import csv
... | 0 |
hf_public_repos/datasets | hf_public_repos/datasets/tests/test_file_utils.py | import os
from pathlib import Path
from unittest.mock import patch
import pytest
import zstandard as zstd
from datasets.download.download_config import DownloadConfig
from datasets.utils.file_utils import (
OfflineModeIsEnabled,
cached_path,
fsspec_get,
fsspec_head,
ftp_get,
ftp_head,
get_... | 0 |
hf_public_repos/datasets | hf_public_repos/datasets/tests/test_data_files.py | import copy
import os
from pathlib import Path
from typing import List
from unittest.mock import patch
import fsspec
import pytest
from fsspec.registry import _registry as _fsspec_registry
from fsspec.spec import AbstractFileSystem
from datasets.data_files import (
DataFilesDict,
DataFilesList,
DataFilesP... | 0 |
hf_public_repos/datasets | hf_public_repos/datasets/tests/test_splits.py | import pytest
from datasets.splits import SplitDict, SplitInfo
from datasets.utils.py_utils import asdict
@pytest.mark.parametrize(
"split_dict",
[
SplitDict(),
SplitDict({"train": SplitInfo(name="train", num_bytes=1337, num_examples=42, dataset_name="my_dataset")}),
SplitDict({"train... | 0 |
hf_public_repos/datasets | hf_public_repos/datasets/tests/test_download_manager.py | import json
import os
from pathlib import Path
import pytest
from datasets.download.download_config import DownloadConfig
from datasets.download.download_manager import DownloadManager
from datasets.utils.file_utils import hash_url_to_filename
URL = "http://www.mocksite.com/file1.txt"
CONTENT = '"text": ["foo", "fo... | 0 |
hf_public_repos/datasets | hf_public_repos/datasets/tests/test_builder.py | import importlib
import os
import tempfile
import types
from contextlib import nullcontext as does_not_raise
from multiprocessing import Process
from pathlib import Path
from unittest import TestCase
from unittest.mock import patch
import numpy as np
import pyarrow as pa
import pyarrow.parquet as pq
import pytest
from... | 0 |
hf_public_repos/datasets | hf_public_repos/datasets/tests/test_arrow_dataset.py | import contextlib
import copy
import itertools
import json
import os
import pickle
import re
import sys
import tempfile
from functools import partial
from pathlib import Path
from unittest import TestCase
from unittest.mock import MagicMock, patch
import numpy as np
import numpy.testing as npt
import pandas as pd
impo... | 0 |
hf_public_repos/datasets | hf_public_repos/datasets/tests/test_arrow_writer.py | import copy
import os
import tempfile
from unittest import TestCase
from unittest.mock import patch
import numpy as np
import pyarrow as pa
import pyarrow.parquet as pq
import pytest
from datasets.arrow_writer import ArrowWriter, OptimizedTypedSequence, ParquetWriter, TypedSequence
from datasets.features import Array... | 0 |
hf_public_repos/datasets | hf_public_repos/datasets/tests/README.md | ## Add Dummy data test
**Important** In order to pass the `load_dataset_<dataset_name>` test, dummy data is required for all possible config names.
First we distinguish between datasets scripts that
- A) have no config class and
- B) have a config class
For A) the dummy data folder structure will always look as fol... | 0 |
hf_public_repos/datasets | hf_public_repos/datasets/tests/test_warnings.py | import pytest
from datasets import inspect_metric, list_metrics, load_metric
@pytest.fixture
def mock_emitted_deprecation_warnings(monkeypatch):
monkeypatch.setattr("datasets.utils.deprecation_utils._emitted_deprecation_warnings", set())
# Used by list_metrics
@pytest.fixture
def mock_hfh(monkeypatch):
cla... | 0 |
hf_public_repos/datasets | hf_public_repos/datasets/tests/test_dataset_list.py | from unittest import TestCase
from datasets import Sequence, Value
from datasets.arrow_dataset import Dataset
class DatasetListTest(TestCase):
def _create_example_records(self):
return [
{"col_1": 3, "col_2": "a"},
{"col_1": 2, "col_2": "b"},
{"col_1": 1, "col_2": "c"}... | 0 |
hf_public_repos/datasets | hf_public_repos/datasets/tests/test_version.py | import pytest
from datasets.utils.version import Version
@pytest.mark.parametrize(
"other, expected_equality",
[
(Version("1.0.0"), True),
("1.0.0", True),
(Version("2.0.0"), False),
("2.0.0", False),
("1", False),
("a", False),
(1, False),
(Non... | 0 |
hf_public_repos/datasets | hf_public_repos/datasets/tests/test_tasks.py | from copy import deepcopy
from unittest.case import TestCase
import pytest
from datasets.arrow_dataset import Dataset
from datasets.features import Audio, ClassLabel, Features, Image, Sequence, Value
from datasets.info import DatasetInfo
from datasets.tasks import (
AudioClassification,
AutomaticSpeechRecogni... | 0 |
hf_public_repos/datasets | hf_public_repos/datasets/tests/test_upstream_hub.py | import fnmatch
import gc
import os
import shutil
import tempfile
import textwrap
import time
import unittest
from io import BytesIO
from pathlib import Path
from unittest.mock import patch
import numpy as np
import pytest
from huggingface_hub import DatasetCard, HfApi
from huggingface_hub.utils import RepositoryNotFou... | 0 |
hf_public_repos/datasets | hf_public_repos/datasets/tests/test_beam.py | import os
import tempfile
from functools import partial
from unittest import TestCase
from unittest.mock import patch
import datasets
import datasets.config
from .utils import require_beam
class DummyBeamDataset(datasets.BeamBasedBuilder):
"""Dummy beam dataset."""
def _info(self):
return datasets.... | 0 |
hf_public_repos/datasets | hf_public_repos/datasets/tests/test_py_utils.py | import time
from dataclasses import dataclass
from multiprocessing import Pool
from unittest import TestCase
from unittest.mock import patch
import multiprocess
import numpy as np
import pytest
from datasets.utils.py_utils import (
NestedDataStructure,
asdict,
iflatmap_unordered,
map_nested,
temp_... | 0 |
hf_public_repos/datasets | hf_public_repos/datasets/tests/test_dataset_dict.py | import os
import tempfile
from unittest import TestCase
import numpy as np
import pandas as pd
import pytest
from datasets import load_from_disk
from datasets.arrow_dataset import Dataset
from datasets.dataset_dict import DatasetDict, IterableDatasetDict
from datasets.features import ClassLabel, Features, Sequence, V... | 0 |
hf_public_repos/datasets | hf_public_repos/datasets/tests/test_extract.py | import os
import zipfile
import pytest
from datasets.utils.extract import (
Bzip2Extractor,
Extractor,
GzipExtractor,
Lz4Extractor,
SevenZipExtractor,
TarExtractor,
XzExtractor,
ZipExtractor,
ZstdExtractor,
)
from .utils import require_lz4, require_py7zr, require_zstandard
@pyte... | 0 |
hf_public_repos/datasets | hf_public_repos/datasets/tests/test_streaming_download_manager.py | import json
import os
import re
from pathlib import Path
import pytest
from fsspec.registry import _registry as _fsspec_registry
from fsspec.spec import AbstractBufferedFile, AbstractFileSystem
from datasets.download.download_config import DownloadConfig
from datasets.download.streaming_download_manager import (
... | 0 |
hf_public_repos/datasets | hf_public_repos/datasets/tests/test_offline_util.py | import pytest
import requests
from datasets.utils.file_utils import http_head
from .utils import OfflineSimulationMode, RequestWouldHangIndefinitelyError, offline
@pytest.mark.integration
def test_offline_with_timeout():
with offline(OfflineSimulationMode.CONNECTION_TIMES_OUT):
with pytest.raises(Reques... | 0 |
hf_public_repos/datasets | hf_public_repos/datasets/tests/test_metric_common.py | # Copyright 2020 HuggingFace Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writ... | 0 |
hf_public_repos/datasets | hf_public_repos/datasets/tests/utils.py | import asyncio
import importlib.metadata
import os
import re
import sys
import tempfile
import unittest
from contextlib import contextmanager
from copy import deepcopy
from distutils.util import strtobool
from enum import Enum
from importlib.util import find_spec
from pathlib import Path
from unittest.mock import patch... | 0 |
hf_public_repos/datasets | hf_public_repos/datasets/tests/test_parallel.py | import pytest
from datasets.parallel import ParallelBackendConfig, parallel_backend
from datasets.utils.py_utils import map_nested
from .utils import require_dill_gt_0_3_2, require_joblibspark, require_not_windows
def add_one(i): # picklable for multiprocessing
return i + 1
@require_dill_gt_0_3_2
@require_jo... | 0 |
hf_public_repos/datasets | hf_public_repos/datasets/tests/test_arrow_reader.py | import os
import tempfile
from pathlib import Path
from unittest import TestCase
import pyarrow as pa
import pytest
from datasets.arrow_dataset import Dataset
from datasets.arrow_reader import ArrowReader, BaseReader, FileInstructions, ReadInstruction, make_file_instructions
from datasets.info import DatasetInfo
from... | 0 |
hf_public_repos/datasets | hf_public_repos/datasets/tests/test_fingerprint.py | import json
import os
import pickle
import subprocess
from functools import partial
from pathlib import Path
from tempfile import gettempdir
from textwrap import dedent
from types import FunctionType
from unittest import TestCase
from unittest.mock import patch
import numpy as np
import pytest
from multiprocess import... | 0 |
hf_public_repos/datasets | hf_public_repos/datasets/tests/test_distributed.py | import os
import sys
from pathlib import Path
import pytest
from datasets import Dataset, IterableDataset
from datasets.distributed import split_dataset_by_node
from .utils import execute_subprocess_async, get_torch_dist_unique_port, require_torch
def test_split_dataset_by_node_map_style():
full_ds = Dataset.f... | 0 |
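The distributed test above exercises `split_dataset_by_node`; a minimal sketch of what it does for a map-style dataset:

```python
from datasets import Dataset
from datasets.distributed import split_dataset_by_node

full_ds = Dataset.from_dict({"i": list(range(8))})
# Each rank receives a disjoint shard of the full dataset.
shard = split_dataset_by_node(full_ds, rank=0, world_size=2)
print(len(shard))  # 4
```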
hf_public_repos/datasets | hf_public_repos/datasets/tests/test_sharding_utils.py | import pytest
from datasets.utils.sharding import _distribute_shards, _number_of_shards_in_gen_kwargs, _split_gen_kwargs
@pytest.mark.parametrize(
"kwargs, expected",
[
({"num_shards": 0, "max_num_jobs": 1}, []),
({"num_shards": 10, "max_num_jobs": 1}, [range(10)]),
({"num_shards": 10... | 0 |
hf_public_repos/datasets | hf_public_repos/datasets/tests/test_tqdm.py | import unittest
from unittest.mock import patch
import pytest
from pytest import CaptureFixture
from datasets.utils import (
are_progress_bars_disabled,
disable_progress_bars,
enable_progress_bars,
tqdm,
)
class TestTqdmUtils(unittest.TestCase):
@pytest.fixture(autouse=True)
def capsys(self,... | 0 |
hf_public_repos/datasets | hf_public_repos/datasets/tests/test_metric.py | import os
import pickle
import tempfile
import time
from multiprocessing import Pool
from unittest import TestCase
import pytest
from datasets.features import Features, Sequence, Value
from datasets.metric import Metric, MetricInfo
from .utils import require_tf, require_torch
class DummyMetric(Metric):
def _in... | 0 |
hf_public_repos/datasets | hf_public_repos/datasets/tests/test_search.py | import os
import tempfile
from functools import partial
from unittest import TestCase
from unittest.mock import patch
import numpy as np
import pytest
from datasets.arrow_dataset import Dataset
from datasets.search import ElasticSearchIndex, FaissIndex, MissingIndex
from .utils import require_elasticsearch, require_... | 0 |
hf_public_repos/datasets | hf_public_repos/datasets/tests/test_formatting.py | import datetime
from pathlib import Path
from unittest import TestCase
import numpy as np
import pandas as pd
import pyarrow as pa
import pytest
from datasets import Audio, Features, Image, IterableDataset
from datasets.formatting import NumpyFormatter, PandasFormatter, PythonFormatter, query_table
from datasets.form... | 0 |
hf_public_repos/datasets | hf_public_repos/datasets/tests/_test_patching.py | # isort: skip_file
# This is the module that test_patching.py uses to test patch_submodule()
import os # noqa: F401 - this is just for tests
import os as renamed_os # noqa: F401 - this is just for tests
from os import path # noqa: F401 - this is just for tests
from os import path as renamed_path # noqa: F401 - th... | 0 |
hf_public_repos/datasets | hf_public_repos/datasets/tests/test_hub.py | from urllib.parse import quote
import pytest
from datasets.utils.hub import hf_hub_url
@pytest.mark.parametrize("repo_id", ["canonical_dataset_name", "org-name/dataset-name"])
@pytest.mark.parametrize("filename", ["filename.csv", "filename with blanks.csv"])
@pytest.mark.parametrize("revision", [None, "v2"])
def te... | 0 |
hf_public_repos/datasets | hf_public_repos/datasets/tests/test_readme_util.py | import re
import tempfile
from pathlib import Path
import pytest
import yaml
from datasets.utils.readme import ReadMe
# @pytest.fixture
# def example_yaml_structure():
example_yaml_structure = yaml.safe_load(
"""\
name: ""
allow_empty: false
allow_empty_text: true
subsections:
- name: "Dataset Card for X" #... | 0 |
hf_public_repos/datasets | hf_public_repos/datasets/tests/test_patching.py | from datasets.utils.patching import _PatchedModuleObj, patch_submodule
from . import _test_patching
def test_patch_submodule():
import os as original_os
from os import path as original_path
from os import rename as original_rename
from os.path import dirname as original_dirname
from os.path impor... | 0 |
hf_public_repos/datasets | hf_public_repos/datasets/tests/test_info.py | import os
import pytest
import yaml
from datasets.features.features import Features, Value
from datasets.info import DatasetInfo, DatasetInfosDict
@pytest.mark.parametrize(
"files",
[
["full:README.md", "dataset_infos.json"],
["empty:README.md", "dataset_infos.json"],
["dataset_infos... | 0 |
hf_public_repos/datasets | hf_public_repos/datasets/tests/conftest.py | import pytest
import datasets
import datasets.config
# Import fixture modules as plugins
pytest_plugins = ["tests.fixtures.files", "tests.fixtures.hub", "tests.fixtures.fsspec"]
def pytest_collection_modifyitems(config, items):
# Mark tests as "unit" by default if not marked as "integration" (or already marked... | 0 |
hf_public_repos/datasets | hf_public_repos/datasets/tests/test_filelock.py | import os
from datasets.utils._filelock import FileLock
def test_long_path(tmpdir):
filename = "a" * 1000 + ".lock"
lock1 = FileLock(str(tmpdir / filename))
assert lock1.lock_file.endswith(".lock")
assert not lock1.lock_file.endswith(filename)
assert len(os.path.basename(lock1.lock_file)) <= 255
| 0 |
hf_public_repos/datasets/tests | hf_public_repos/datasets/tests/io/test_json.py | import io
import json
import fsspec
import pytest
from datasets import Dataset, DatasetDict, Features, NamedSplit, Value
from datasets.io.json import JsonDatasetReader, JsonDatasetWriter
from ..utils import assert_arrow_memory_doesnt_increase, assert_arrow_memory_increases
def _check_json_dataset(dataset, expected... | 0 |
hf_public_repos/datasets/tests | hf_public_repos/datasets/tests/io/test_sql.py | import contextlib
import os
import sqlite3
import pytest
from datasets import Dataset, Features, Value
from datasets.io.sql import SqlDatasetReader, SqlDatasetWriter
from ..utils import assert_arrow_memory_doesnt_increase, assert_arrow_memory_increases, require_sqlalchemy
def _check_sql_dataset(dataset, expected_f... | 0 |
hf_public_repos/datasets/tests | hf_public_repos/datasets/tests/io/test_text.py | import pytest
from datasets import Dataset, DatasetDict, Features, NamedSplit, Value
from datasets.io.text import TextDatasetReader
from ..utils import assert_arrow_memory_doesnt_increase, assert_arrow_memory_increases
def _check_text_dataset(dataset, expected_features):
assert isinstance(dataset, Dataset)
... | 0 |
hf_public_repos/datasets/tests | hf_public_repos/datasets/tests/io/test_parquet.py | import pyarrow.parquet as pq
import pytest
from datasets import Audio, Dataset, DatasetDict, Features, IterableDatasetDict, NamedSplit, Sequence, Value, config
from datasets.features.image import Image
from datasets.info import DatasetInfo
from datasets.io.parquet import ParquetDatasetReader, ParquetDatasetWriter, get... | 0 |
hf_public_repos/datasets/tests | hf_public_repos/datasets/tests/io/test_csv.py | import csv
import os
import pytest
from datasets import Dataset, DatasetDict, Features, NamedSplit, Value
from datasets.io.csv import CsvDatasetReader, CsvDatasetWriter
from ..utils import assert_arrow_memory_doesnt_increase, assert_arrow_memory_increases
def _check_csv_dataset(dataset, expected_features):
ass... | 0 |
hf_public_repos/datasets/tests | hf_public_repos/datasets/tests/packaged_modules/test_cache.py | from pathlib import Path
import pytest
from datasets import load_dataset
from datasets.packaged_modules.cache.cache import Cache
SAMPLE_DATASET_TWO_CONFIG_IN_METADATA = "hf-internal-testing/audiofolder_two_configs_in_metadata"
def test_cache(text_dir: Path):
ds = load_dataset(str(text_dir))
hash = Path(ds... | 0 |
hf_public_repos/datasets/tests | hf_public_repos/datasets/tests/packaged_modules/test_folder_based_builder.py | import importlib
import shutil
import textwrap
import pytest
from datasets import ClassLabel, DownloadManager, Features, Value
from datasets.data_files import DataFilesDict, get_data_patterns
from datasets.download.streaming_download_manager import StreamingDownloadManager
from datasets.packaged_modules.folder_based_... | 0 |
hf_public_repos/datasets/tests | hf_public_repos/datasets/tests/packaged_modules/test_spark.py | from unittest.mock import patch
import pyspark
from datasets.packaged_modules.spark.spark import (
Spark,
SparkExamplesIterable,
_generate_iterable_examples,
)
from ..utils import (
require_dill_gt_0_3_2,
require_not_windows,
)
def _get_expected_row_ids_and_row_dicts_for_partition_order(df, par... | 0 |
hf_public_repos/datasets/tests | hf_public_repos/datasets/tests/packaged_modules/test_webdataset.py | import json
import tarfile
import numpy as np
import pytest
from datasets import Audio, DownloadManager, Features, Image, Value
from datasets.packaged_modules.webdataset.webdataset import WebDataset
from ..utils import require_pil, require_sndfile
@pytest.fixture
def image_wds_file(tmp_path, image_file):
json_... | 0 |
hf_public_repos/datasets/tests | hf_public_repos/datasets/tests/packaged_modules/test_imagefolder.py | import shutil
import textwrap
import numpy as np
import pytest
from datasets import ClassLabel, Features, Image, Value
from datasets.data_files import DataFilesDict, get_data_patterns
from datasets.download.streaming_download_manager import StreamingDownloadManager
from datasets.packaged_modules.imagefolder.imagefold... | 0 |
hf_public_repos/datasets/tests | hf_public_repos/datasets/tests/packaged_modules/test_json.py | import textwrap
import pyarrow as pa
import pytest
from datasets import Features, Value
from datasets.packaged_modules.json.json import Json
@pytest.fixture
def jsonl_file(tmp_path):
filename = tmp_path / "file.jsonl"
data = textwrap.dedent(
"""\
{"col_1": -1}
{"col_1": 1, "col_2": 2... | 0 |
hf_public_repos/datasets/tests | hf_public_repos/datasets/tests/packaged_modules/test_audiofolder.py | import shutil
import textwrap
import librosa
import numpy as np
import pytest
import soundfile as sf
from datasets import Audio, ClassLabel, Features, Value
from datasets.data_files import DataFilesDict, get_data_patterns
from datasets.download.streaming_download_manager import StreamingDownloadManager
from datasets.... | 0 |
hf_public_repos/datasets/tests | hf_public_repos/datasets/tests/packaged_modules/test_text.py | import textwrap
import pyarrow as pa
import pytest
from datasets import Features, Image
from datasets.packaged_modules.text.text import Text
from ..utils import require_pil
@pytest.fixture
def text_file(tmp_path):
filename = tmp_path / "text.txt"
data = textwrap.dedent(
"""\
Lorem ipsum dol... | 0 |
hf_public_repos/datasets/tests | hf_public_repos/datasets/tests/packaged_modules/test_csv.py | import os
import textwrap
import pyarrow as pa
import pytest
from datasets import ClassLabel, Features, Image
from datasets.packaged_modules.csv.csv import Csv
from ..utils import require_pil
@pytest.fixture
def csv_file(tmp_path):
filename = tmp_path / "file.csv"
data = textwrap.dedent(
"""\
... | 0 |
hf_public_repos/datasets/tests | hf_public_repos/datasets/tests/features/test_audio.py | import os
import tarfile
import pyarrow as pa
import pytest
from datasets import Dataset, concatenate_datasets, load_dataset
from datasets.features import Audio, Features, Sequence, Value
from ..utils import (
require_sndfile,
)
@pytest.fixture()
def tar_wav_path(shared_datadir, tmp_path_factory):
audio_pa... | 0 |
hf_public_repos/datasets/tests | hf_public_repos/datasets/tests/features/test_image.py | import os
import tarfile
import warnings
import numpy as np
import pandas as pd
import pyarrow as pa
import pytest
from datasets import Dataset, Features, Image, Sequence, Value, concatenate_datasets, load_dataset
from datasets.features.image import encode_np_array, image_to_bytes
from ..utils import require_pil
@... | 0 |
hf_public_repos/datasets/tests | hf_public_repos/datasets/tests/features/test_array_xd.py | import os
import random
import tempfile
import unittest
import numpy as np
import pandas as pd
import pyarrow as pa
import pytest
from absl.testing import parameterized
import datasets
from datasets.arrow_writer import ArrowWriter
from datasets.features import Array2D, Array3D, Array4D, Array5D, Value
from datasets.f... | 0 |
hf_public_repos/datasets/tests | hf_public_repos/datasets/tests/features/test_features.py | import datetime
from unittest import TestCase
from unittest.mock import patch
import numpy as np
import pandas as pd
import pyarrow as pa
import pytest
from datasets import Array2D
from datasets.arrow_dataset import Dataset
from datasets.features import Audio, ClassLabel, Features, Image, Sequence, Value
from dataset... | 0 |
hf_public_repos/datasets/tests | hf_public_repos/datasets/tests/distributed_scripts/run_torch_distributed.py | import os
from argparse import ArgumentParser
from typing import List
import torch.utils.data
from datasets import Dataset, IterableDataset
from datasets.distributed import split_dataset_by_node
NUM_SHARDS = 4
NUM_ITEMS_PER_SHARD = 3
class FailedTestError(RuntimeError):
pass
def gen(shards: List[str]):
... | 0 |
hf_public_repos/datasets/tests | hf_public_repos/datasets/tests/fixtures/fsspec.py | import posixpath
from pathlib import Path
from unittest.mock import patch
import pytest
from fsspec.implementations.local import AbstractFileSystem, LocalFileSystem, stringify_path
from fsspec.registry import _registry as _fsspec_registry
class MockFileSystem(AbstractFileSystem):
protocol = "mock"
def __ini... | 0 |
hf_public_repos/datasets/tests | hf_public_repos/datasets/tests/fixtures/files.py | import contextlib
import csv
import json
import os
import sqlite3
import tarfile
import textwrap
import zipfile
import pyarrow as pa
import pyarrow.parquet as pq
import pytest
import datasets
import datasets.config
# dataset + arrow_file
@pytest.fixture(scope="session")
def dataset():
n = 10
features = da... | 0 |
hf_public_repos/datasets/tests | hf_public_repos/datasets/tests/fixtures/hub.py | import os
import time
import uuid
from contextlib import contextmanager
from typing import Optional
import pytest
import requests
from huggingface_hub.hf_api import HfApi, RepositoryNotFoundError
CI_HUB_USER = "__DUMMY_TRANSFORMERS_USER__"
CI_HUB_USER_FULL_NAME = "Dummy User"
CI_HUB_USER_TOKEN = "hf_hZEmnoOEYISjraJt... | 0 |
hf_public_repos/datasets/tests | hf_public_repos/datasets/tests/commands/test_test.py | import os
from collections import namedtuple
import pytest
from datasets import ClassLabel, Features, Sequence, Value
from datasets.commands.test import TestCommand
from datasets.info import DatasetInfo, DatasetInfosDict
_TestCommandArgs = namedtuple(
"_TestCommandArgs",
[
"dataset",
"name",... | 0 |
hf_public_repos/datasets/tests | hf_public_repos/datasets/tests/commands/conftest.py | import pytest
DATASET_LOADING_SCRIPT_NAME = "__dummy_dataset1__"
DATASET_LOADING_SCRIPT_CODE = """
import json
import os
import datasets
REPO_URL = "https://huggingface.co/datasets/hf-internal-testing/raw_jsonl/resolve/main/"
URLS = {"train": REPO_URL + "wikiann-bn-train.jsonl", "validation": REPO_URL + "wikiann-... | 0 |
hf_public_repos/datasets | hf_public_repos/datasets/docs/README.md | <!---
Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or ... | 0 |
hf_public_repos/datasets/docs | hf_public_repos/datasets/docs/source/access.mdx | # Know your dataset
There are two types of dataset objects, a regular [`Dataset`] and then an ✨ [`IterableDataset`] ✨. A [`Dataset`] provides fast random access to the rows, and memory-mapping so that loading even large datasets only uses a relatively small amount of device memory. But for really, really big datasets ... | 0 |
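A quick sketch of the contrast drawn above (`rotten_tomatoes` is just an arbitrary Hub dataset):

```python
from datasets import load_dataset

# Map-style Dataset: memory-mapped on disk, supports fast random access.
ds = load_dataset("rotten_tomatoes", split="train")
print(ds[0])

# IterableDataset: streamed progressively, nothing downloaded up front.
ids = load_dataset("rotten_tomatoes", split="train", streaming=True)
print(next(iter(ids)))  # sequential access only
```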
hf_public_repos/datasets/docs | hf_public_repos/datasets/docs/source/load_hub.mdx | # Load a dataset from the Hub
Finding high-quality datasets that are reproducible and accessible can be difficult. One of 🤗 Datasets' main goals is to provide a simple way to load a dataset of any format or type. The easiest way to get started is to discover an existing dataset on the [Hugging Face Hub](https://huggin... | 0 |
hf_public_repos/datasets/docs | hf_public_repos/datasets/docs/source/faiss_es.mdx | # Search index
[FAISS](https://github.com/facebookresearch/faiss) and [Elasticsearch](https://www.elastic.co/elasticsearch/) enable searching for examples in a dataset. This can be useful when you want to retrieve specific examples from a dataset that are relevant to your NLP task. For example, if you are working on ... | 0 |
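A minimal sketch of the FAISS side of this (assumes `faiss` is installed; the tiny embeddings are made up):

```python
import numpy as np
from datasets import Dataset

ds = Dataset.from_dict(
    {"text": ["a", "b", "c"], "embeddings": [[0.0, 1.0], [1.0, 0.0], [0.7, 0.7]]}
)
ds.add_faiss_index(column="embeddings")
query = np.array([0.9, 0.1], dtype=np.float32)
scores, retrieved = ds.get_nearest_examples("embeddings", query, k=2)
print(retrieved["text"])  # nearest neighbours of the query vector
```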
hf_public_repos/datasets/docs | hf_public_repos/datasets/docs/source/image_load.mdx | # Load image data
Image datasets have [`Image`] type columns, which contain PIL objects.
<Tip>
To work with image datasets, you need to have the `vision` dependency installed. Check out the [installation](./installation#vision) guide to learn how to install it.
</Tip>
When you load an image dataset and call the i... | 0 |
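A short sketch of that behaviour (`beans` is an arbitrary image dataset; requires Pillow via the `vision` extra):

```python
from datasets import load_dataset

ds = load_dataset("beans", split="train")
print(ds[0]["image"])  # a PIL image object, decoded lazily on access
```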
hf_public_repos/datasets/docs | hf_public_repos/datasets/docs/source/dataset_script.mdx | # Create a dataset loading script
<Tip>
The dataset loading script is likely not needed if your dataset is in one of the following formats: CSV, JSON, JSON lines, text, images, audio or Parquet.
With those formats, you should be able to load your dataset automatically with [`~datasets.load_dataset`],
as long as your... | 0 |
hf_public_repos/datasets/docs | hf_public_repos/datasets/docs/source/loading.mdx | # Load
Your data can be stored in various places: on your local machine's disk, in a GitHub repository, or in in-memory data structures like Python dictionaries and Pandas DataFrames. Wherever a dataset is stored, 🤗 Datasets can help you load it.
This guide will show you how to load a dataset from:
- T... | 0 |
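A couple of hedged examples of the loading paths this guide covers (the CSV path is hypothetical):

```python
import pandas as pd
from datasets import Dataset, load_dataset

# Local or remote files go through the packaged builders:
ds = load_dataset("csv", data_files="path/to/my_file.csv")

# In-memory structures load directly:
ds_from_df = Dataset.from_pandas(pd.DataFrame({"a": [1, 2]}))
```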
hf_public_repos/datasets/docs | hf_public_repos/datasets/docs/source/use_with_pytorch.mdx | # Use with PyTorch
This document is a quick introduction to using `datasets` with PyTorch, with a particular focus on how to get
`torch.Tensor` objects out of our datasets, and how to use a PyTorch `DataLoader` and a Hugging Face `Dataset`
with the best performance.
## Dataset format
By default, datasets return regu... | 0 |
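A minimal sketch of the tensor hand-off described above:

```python
import torch
from datasets import Dataset
from torch.utils.data import DataLoader

ds = Dataset.from_dict({"x": [[1.0, 2.0], [3.0, 4.0]], "y": [0, 1]})
ds = ds.with_format("torch")           # columns now come back as torch.Tensor
assert isinstance(ds[0]["x"], torch.Tensor)

# A formatted Dataset plugs straight into a PyTorch DataLoader.
for batch in DataLoader(ds, batch_size=2):
    print(batch["x"].shape)            # torch.Size([2, 2])
```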
hf_public_repos/datasets/docs | hf_public_repos/datasets/docs/source/create_dataset.mdx | # Create a dataset
Sometimes, you may need to create a dataset if you're working with your own data. Creating a dataset with 🤗 Datasets confers all the advantages of the library to your dataset: fast loading and processing, [stream enormous datasets](stream), [memory-mapping](https://huggingface.co/course/chapter5/4?... | 0 |
hf_public_repos/datasets/docs | hf_public_repos/datasets/docs/source/tutorial.md | # Overview
Welcome to the 🤗 Datasets tutorials! These beginner-friendly tutorials will guide you through the fundamentals of working with 🤗 Datasets. You'll load and prepare a dataset for training with your machine learning framework of choice. Along the way, you'll learn how to load different dataset configurations... | 0 |
hf_public_repos/datasets/docs | hf_public_repos/datasets/docs/source/dataset_card.mdx | # Create a dataset card
Each dataset should have a dataset card to promote responsible usage and inform users of any potential biases within the dataset.
This idea was inspired by the Model Cards proposed by [Mitchell, 2018](https://arxiv.org/abs/1810.03993).
Dataset cards help users understand a dataset's contents, t... | 0 |