| repo_id (string, 15–89 chars) | file_path (string, 27–180 chars) | content (string, 1–2.23M chars) | __index_level_0__ (int64, always 0) |
|---|---|---|---|
hf_public_repos/datasets/docs | hf_public_repos/datasets/docs/source/stream.mdx | # Stream
Dataset streaming lets you work with a dataset without downloading it.
The data is streamed as you iterate over the dataset.
This is especially helpful when:
- You don't want to wait for an extremely large dataset to download.
- The dataset size exceeds the amount of available disk space on your computer.
- ... | 0 |
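The streaming behavior described in `stream.mdx` above can be sketched in plain Python: a generator yields one example at a time instead of materializing the whole dataset. This is only an illustrative sketch of the idea (the names `stream_examples` and the `{"text": ...}` schema are invented for the example), not the library's actual `IterableDataset` implementation, which fetches rows lazily over the network.

```python
def stream_examples(rows):
    """Yield examples one at a time instead of materializing the full dataset."""
    for row in rows:
        # The real IterableDataset fetches rows lazily as you iterate;
        # a plain in-memory iterable stands in for that here.
        yield {"text": row}

examples = stream_examples(["a", "b", "c"])
first = next(examples)  # only one example has been produced at this point
```

Because nothing is read until you ask for it, iteration can start immediately and memory use stays flat regardless of dataset size.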
hf_public_repos/datasets/docs | hf_public_repos/datasets/docs/source/nlp_load.mdx | # Load text data
This guide shows you how to load text datasets. To learn how to load any type of dataset, take a look at the <a class="underline decoration-sky-400 decoration-2 font-semibold" href="./loading">general loading guide</a>.
Text files are one of the most common file types for storing a dataset. By defaul... | 0 |
hf_public_repos/datasets/docs | hf_public_repos/datasets/docs/source/about_dataset_load.mdx | # Build and load
Nearly every deep learning workflow begins with loading a dataset, which makes it one of the most important steps. With 🤗 Datasets, there are more than 900 datasets available to help you get started with your NLP task. All you have to do is call [`load_dataset`] to take your first step. This functio... | 0 |
hf_public_repos/datasets/docs | hf_public_repos/datasets/docs/source/about_map_batch.mdx | # Batch mapping
Combining the utility of [`Dataset.map`] with batch mode is very powerful. It allows you to speed up processing, and freely control the size of the generated dataset.
## Need for speed
The primary objective of batch mapping is to speed up processing. Oftentimes, it is faster to work with batches of... | 0 |
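The batch-mapping idea from `about_map_batch.mdx` above can be sketched without the library: the mapped function receives a whole slice of examples at once, and because it may return more or fewer items than it received, a batched map can grow or shrink the dataset. The helper name `batch_map` is invented for this sketch and is not the actual `Dataset.map(batched=True)` implementation.

```python
def batch_map(examples, fn, batch_size=1000):
    """Apply fn to slices of examples, mimicking Dataset.map(batched=True)."""
    out = []
    for i in range(0, len(examples), batch_size):
        # fn sees a whole batch and may return a different number of items,
        # which is how batched map can freely resize the generated dataset
        out.extend(fn(examples[i:i + batch_size]))
    return out

doubled = batch_map([1, 2, 3, 4, 5], lambda batch: [x * 2 for x in batch], batch_size=2)
```

The speedup in the real library comes from vectorized work (e.g. a fast tokenizer) inside `fn` amortizing per-call overhead across the batch.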
hf_public_repos/datasets/docs | hf_public_repos/datasets/docs/source/installation.md | # Installation
Before you start, you'll need to set up your environment and install the appropriate packages. 🤗 Datasets is tested on **Python 3.7+**.
<Tip>
If you want to use 🤗 Datasets with TensorFlow or PyTorch, you'll need to install them separately. Refer to the [TensorFlow installation page](https://www.tenso... | 0 |
hf_public_repos/datasets/docs | hf_public_repos/datasets/docs/source/troubleshoot.mdx | # Troubleshooting
This guide aims to provide you with the tools and knowledge required to navigate some common issues. If the suggestions listed
in this guide do not cover your situation, please refer to the [Asking for Help](#asking-for-help) section to learn where to
find help with your specific issue.
## Issues w... | 0 |
hf_public_repos/datasets/docs | hf_public_repos/datasets/docs/source/about_arrow.md | # Datasets 🤝 Arrow
## What is Arrow?
[Arrow](https://arrow.apache.org/) enables large amounts of data to be processed and moved quickly. It is a specific data format that stores data in a columnar memory layout. This provides several significant advantages:
* Arrow's standard format allows [zero-copy reads](https:/... | 0 |
hf_public_repos/datasets/docs | hf_public_repos/datasets/docs/source/share.mdx | # Share a dataset using the CLI
At Hugging Face, we are on a mission to democratize good Machine Learning and we believe in the value of open source. That's why we designed 🤗 Datasets so that anyone can share a dataset with the greater ML community. There are currently thousands of datasets in over 100 languages in t... | 0 |
hf_public_repos/datasets/docs | hf_public_repos/datasets/docs/source/nlp_process.mdx | # Process text data
This guide shows specific methods for processing text datasets. Learn how to:
- Tokenize a dataset with [`~Dataset.map`].
- Align dataset labels with label ids for NLI datasets.
For a guide on how to process any type of dataset, take a look at the <a class="underline decoration-sky-400 decoration... | 0 |
hf_public_repos/datasets/docs | hf_public_repos/datasets/docs/source/about_mapstyle_vs_iterable.mdx | # Differences between Dataset and IterableDataset
There are two types of dataset objects, a [`Dataset`] and an [`IterableDataset`].
Which type of dataset you should use or create depends on the size of the dataset.
In general, an [`IterableDataset`] is ideal for big datasets (think hundreds of GBs!) due to its ... | 0 |
hf_public_repos/datasets/docs | hf_public_repos/datasets/docs/source/index.mdx | # Datasets
<img class="float-left !m-0 !border-0 !dark:border-0 !shadow-none !max-w-lg w-[150px]" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/datasets/datasets_logo.png"/>
🤗 Datasets is a library for easily accessing and sharing datasets for Audio, Computer Vision, and Natural ... | 0 |
hf_public_repos/datasets/docs | hf_public_repos/datasets/docs/source/how_to.md | # Overview
The how-to guides offer a more comprehensive overview of all the tools 🤗 Datasets offers and how to use them. This will help you tackle messier real-world datasets where you may need to manipulate the dataset structure or content to get it ready for training.
The guides assume you are familiar and comfort... | 0 |
hf_public_repos/datasets/docs | hf_public_repos/datasets/docs/source/repository_structure.mdx | # Structure your repository
To host and share your dataset, create a dataset repository on the Hugging Face Hub and upload your data files.
This guide will show you how to structure your dataset repository when you upload it.
A dataset with a supported structure and file format (`.txt`, `.csv`, `.parquet`, `.jsonl`, ... | 0 |
hf_public_repos/datasets/docs | hf_public_repos/datasets/docs/source/tabular_load.mdx | # Load tabular data
A tabular dataset is a generic dataset used to describe any data stored in rows and columns, where each row represents an example and each column represents a feature (which can be continuous or categorical). These datasets are commonly stored in CSV files, Pandas DataFrames, and in database tables. This g... | 0 |
hf_public_repos/datasets/docs | hf_public_repos/datasets/docs/source/depth_estimation.mdx | # Depth estimation
Depth estimation datasets are used to train a model to approximate the relative distance of every pixel in an
image from the camera, also known as depth. The applications enabled by these datasets primarily lie in areas like visual machine
perception and perception in robotics. Example applications ... | 0 |
hf_public_repos/datasets/docs | hf_public_repos/datasets/docs/source/audio_process.mdx | # Process audio data
This guide shows specific methods for processing audio datasets. Learn how to:
- Resample the sampling rate.
- Use [`~Dataset.map`] with audio datasets.
For a guide on how to process any type of dataset, take a look at the <a class="underline decoration-sky-400 decoration-2 font-semibold" href="... | 0 |
hf_public_repos/datasets/docs | hf_public_repos/datasets/docs/source/audio_load.mdx | # Load audio data
You can load an audio dataset using the [`Audio`] feature that automatically decodes and resamples the audio files when you access the examples.
Audio decoding is based on the [`soundfile`](https://github.com/bastibe/python-soundfile) python package, which uses the [`libsndfile`](https://github.com/l... | 0 |
hf_public_repos/datasets/docs | hf_public_repos/datasets/docs/source/_redirects.yml | # This first_section was backported from nginx
loading_datasets: loading
share_dataset: share
quicktour: quickstart
dataset_streaming: stream
torch_tensorflow: use_dataset
splits: loading#slice-splits
processing: process
faiss_and_ea: faiss_es
features: about_dataset_features
using_metrics: how_to_metrics
exploring: ac... | 0 |
hf_public_repos/datasets/docs | hf_public_repos/datasets/docs/source/about_cache.mdx | # The cache
The cache is one of the reasons why 🤗 Datasets is so efficient. It stores previously downloaded and processed datasets so when you need to use them again, they are reloaded directly from the cache. This avoids having to download a dataset all over again, or reapplying processing functions. Even after you ... | 0 |
hf_public_repos/datasets/docs | hf_public_repos/datasets/docs/source/process.mdx | # Process
🤗 Datasets provides many tools for modifying the structure and content of a dataset. These tools are important for tidying up a dataset, creating additional columns, converting between features and formats, and much more.
This guide will show you how to:
- Reorder rows and split the dataset.
- Rename and ... | 0 |
hf_public_repos/datasets/docs | hf_public_repos/datasets/docs/source/use_with_jax.mdx | # Use with JAX
This document is a quick introduction to using `datasets` with JAX, with a particular focus on how to get
`jax.Array` objects out of our datasets, and how to use them to train JAX models.
<Tip>
`jax` and `jaxlib` are required to reproduce the code above, so please make sure you
install them as `pip ins... | 0 |
hf_public_repos/datasets/docs | hf_public_repos/datasets/docs/source/metrics.mdx | # Evaluate predictions
<Tip warning={true}>
Metrics is deprecated in 🤗 Datasets. To learn more about how to use metrics, take a look at the 🤗 [Evaluate](https://huggingface.co/docs/evaluate/index) library! In addition to metrics, you can find more tools for evaluating models and datasets.
</Tip>
🤗 Datasets provi... | 0 |
hf_public_repos/datasets/docs | hf_public_repos/datasets/docs/source/_config.py | # docstyle-ignore
INSTALL_CONTENT = """
# Datasets installation
! pip install datasets transformers
# To install from source instead of the last release, comment the command above and uncomment the following one.
# ! pip install git+https://github.com/huggingface/datasets.git
"""
notebook_first_cells = [{"type": "code... | 0 |
hf_public_repos/datasets/docs | hf_public_repos/datasets/docs/source/image_classification.mdx | # Image classification
Image classification datasets are used to train a model to classify an entire image. There are a wide variety of applications enabled by these datasets such as identifying endangered wildlife species or screening for disease in medical images. This guide will show you how to apply transformation... | 0 |
hf_public_repos/datasets/docs | hf_public_repos/datasets/docs/source/semantic_segmentation.mdx | # Semantic segmentation
Semantic segmentation datasets are used to train a model to classify every pixel in an image. There are
a wide variety of applications enabled by these datasets such as background removal from images, stylizing
images, or scene understanding for autonomous driving. This guide will show you how ... | 0 |
hf_public_repos/datasets/docs | hf_public_repos/datasets/docs/source/filesystems.mdx | # Cloud storage
🤗 Datasets supports access to cloud storage providers through `fsspec` filesystem implementations.
You can save and load datasets from any cloud storage in a Pythonic way.
Take a look at the following table for some examples of supported cloud storage providers:
| Storage provider | Filesystem i... | 0 |
hf_public_repos/datasets/docs | hf_public_repos/datasets/docs/source/image_process.mdx | # Process image data
This guide shows specific methods for processing image datasets. Learn how to:
- Use [`~Dataset.map`] with image datasets.
- Apply data augmentations to a dataset with [`~Dataset.set_transform`].
For a guide on how to process any type of dataset, take a look at the <a class="underline decoration-... | 0 |
hf_public_repos/datasets/docs | hf_public_repos/datasets/docs/source/use_dataset.mdx | # Preprocess
In addition to loading datasets, 🤗 Datasets' other main goal is to offer a diverse set of preprocessing functions to get a dataset into an appropriate format for training with your machine learning framework.
There are many possible ways to preprocess a dataset, and it all depends on your specific datas... | 0 |
hf_public_repos/datasets/docs | hf_public_repos/datasets/docs/source/use_with_spark.mdx | # Use with Spark
This document is a quick introduction to using 🤗 Datasets with Spark, with a particular focus on how to load a Spark DataFrame into a [`Dataset`] object.
From there, you have fast access to any element and you can use it as a data loader to train models.
## Load from Spark
A [`Dataset`] object is ... | 0 |
hf_public_repos/datasets/docs | hf_public_repos/datasets/docs/source/image_dataset.mdx | # Create an image dataset
There are two methods for creating and sharing an image dataset. This guide will show you how to:
* Create an image dataset with `ImageFolder` and some metadata. This is a no-code solution for quickly creating an image dataset with several thousand images.
* Create an image dataset by writin... | 0 |
hf_public_repos/datasets/docs | hf_public_repos/datasets/docs/source/beam.mdx | # Beam Datasets
Some datasets are too large to be processed on a single machine. Instead, you can process them with [Apache Beam](https://beam.apache.org/), a library for parallel data processing. The processing pipeline is executed on a distributed processing backend such as [Apache Flink](https://flink.apache.org/),... | 0 |
hf_public_repos/datasets/docs | hf_public_repos/datasets/docs/source/use_with_tensorflow.mdx | # Using Datasets with TensorFlow
This document is a quick introduction to using `datasets` with TensorFlow, with a particular focus on how to get
`tf.Tensor` objects out of our datasets, and how to stream data from Hugging Face `Dataset` objects to Keras methods
like `model.fit()`.
## Dataset format
By default, data... | 0 |
hf_public_repos/datasets/docs | hf_public_repos/datasets/docs/source/about_dataset_features.mdx | # Dataset features
[`Features`] defines the internal structure of a dataset. It is used to specify the underlying serialization format. What's more interesting to you though is that [`Features`] contains high-level information about everything from the column names and types, to the [`ClassLabel`]. You can think of [`... | 0 |
hf_public_repos/datasets/docs | hf_public_repos/datasets/docs/source/cache.mdx | # Cache management
When you download a dataset, the processing scripts and data are stored locally on your computer. The cache allows 🤗 Datasets to avoid re-downloading or processing the entire dataset every time you use it.
This guide will show you how to:
- Change the cache directory.
- Control how a dataset is ... | 0 |
hf_public_repos/datasets/docs | hf_public_repos/datasets/docs/source/audio_dataset.mdx | # Create an audio dataset
You can share a dataset with your team or with anyone in the community by creating a dataset repository on the Hugging Face Hub:
```py
from datasets import load_dataset
dataset = load_dataset("<username>/my_dataset")
```
There are several methods for creating and sharing an audio dataset:
... | 0 |
hf_public_repos/datasets/docs | hf_public_repos/datasets/docs/source/_toctree.yml | - sections:
- local: index
title: 🤗 Datasets
- local: quickstart
title: Quickstart
- local: installation
title: Installation
title: Get started
- sections:
- local: tutorial
title: Overview
- local: load_hub
title: Load a dataset from the Hub
- local: access
title: Know your data... | 0 |
hf_public_repos/datasets/docs | hf_public_repos/datasets/docs/source/upload_dataset.mdx | # Share a dataset to the Hub
The [Hub](https://huggingface.co/datasets) is home to an extensive collection of community-curated and popular research datasets. We encourage you to share your dataset to the Hub to help grow the ML community and accelerate progress for everyone. All contributions are welcome; adding a da... | 0 |
hf_public_repos/datasets/docs | hf_public_repos/datasets/docs/source/quickstart.mdx | <!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed... | 0 |
hf_public_repos/datasets/docs | hf_public_repos/datasets/docs/source/about_metrics.mdx | # All about metrics
<Tip warning={true}>
Metrics is deprecated in 🤗 Datasets. To learn more about how to use metrics, take a look at the 🤗 [Evaluate](https://huggingface.co/docs/evaluate/index) library! In addition to metrics, you can find more tools for evaluating models and datasets.
</Tip>
🤗 Datasets provides... | 0 |
hf_public_repos/datasets/docs | hf_public_repos/datasets/docs/source/object_detection.mdx | # Object detection
Object detection models identify something in an image, and object detection datasets are used for applications such as autonomous driving and detecting natural hazards like wildfire. This guide will show you how to apply transformations to an object detection dataset following the [tutorial](https:... | 0 |
hf_public_repos/datasets/docs | hf_public_repos/datasets/docs/source/how_to_metrics.mdx | # Metrics
<Tip warning={true}>
Metrics is deprecated in 🤗 Datasets. To learn more about how to use metrics, take a look at the 🤗 [Evaluate](https://huggingface.co/docs/evaluate/index) library! In addition to metrics, you can find more tools for evaluating models and datasets.
</Tip>
Metrics are important for eval... | 0 |
hf_public_repos/datasets/docs/source | hf_public_repos/datasets/docs/source/package_reference/builder_classes.mdx | # Builder classes
## Builders
🤗 Datasets relies on two main classes during the dataset building process: [`DatasetBuilder`] and [`BuilderConfig`].
[[autodoc]] datasets.DatasetBuilder
[[autodoc]] datasets.GeneratorBasedBuilder
[[autodoc]] datasets.BeamBasedBuilder
[[autodoc]] datasets.ArrowBasedBuilder
[[autodoc... | 0 |
hf_public_repos/datasets/docs/source | hf_public_repos/datasets/docs/source/package_reference/table_classes.mdx | # Table Classes
Each `Dataset` object is backed by a PyArrow Table.
A Table can be loaded either from disk (memory-mapped) or from memory.
Several Table types are available, and they all inherit from [`table.Table`].
## Table
[[autodoc]] datasets.table.Table
- validate
- equals
- to_batches
- to_py... | 0 |
hf_public_repos/datasets/docs/source | hf_public_repos/datasets/docs/source/package_reference/main_classes.mdx | # Main classes
## DatasetInfo
[[autodoc]] datasets.DatasetInfo
## Dataset
The base class [`Dataset`] implements a Dataset backed by an Apache Arrow table.
[[autodoc]] datasets.Dataset
- add_column
- add_item
- from_file
- from_buffer
- from_pandas
- from_dict
- from_generator
- dat... | 0 |
hf_public_repos/datasets/docs/source | hf_public_repos/datasets/docs/source/package_reference/loading_methods.mdx | # Loading methods
Methods for listing and loading datasets and metrics:
## Datasets
[[autodoc]] datasets.list_datasets
[[autodoc]] datasets.load_dataset
[[autodoc]] datasets.load_from_disk
[[autodoc]] datasets.load_dataset_builder
[[autodoc]] datasets.get_dataset_config_names
[[autodoc]] datasets.get_dataset_in... | 0 |
hf_public_repos/datasets/docs/source | hf_public_repos/datasets/docs/source/package_reference/utilities.mdx | # Utilities
## Configure logging
🤗 Datasets strives to be transparent and explicit about how it works, but this can be quite verbose at times. We have included a series of logging methods which allow you to easily adjust the level of verbosity of the entire library. Currently the default verbosity of the library is ... | 0 |
hf_public_repos/datasets/docs/source | hf_public_repos/datasets/docs/source/package_reference/task_templates.mdx | # Task templates
<Tip warning={true}>
The Task API is deprecated in favor of [`train-eval-index`](https://github.com/huggingface/hub-docs/blob/9ab2555e1c146122056aba6f89af404a8bc9a6f1/datasetcard.md?plain=1#L90-L106) and will be removed in the next major release.
</Tip>
The tasks supported by [`Dataset.prepare_for_... | 0 |
hf_public_repos/datasets | hf_public_repos/datasets/templates/metric_card_template.md | # Metric Card for *Current Metric*
***Metric Card Instructions:*** *Copy this file into the relevant metric folder, then fill it out and save it as README.md. Feel free to take a look at existing metric cards if you'd like examples.*
## Metric Description
*Give a brief overview of this metric.*
## How to Use
*Give g... | 0 |
hf_public_repos/datasets | hf_public_repos/datasets/templates/new_dataset_script.py | # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.... | 0 |
hf_public_repos/datasets | hf_public_repos/datasets/templates/README.md | ---
TODO: Add YAML tags here. Copy-paste the tags obtained with the online tagging app: https://huggingface.co/spaces/huggingface/datasets-tagging
---
# Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#d... | 0 |
hf_public_repos/datasets | hf_public_repos/datasets/templates/README_guide.md | ---
YAML tags (full spec here: https://github.com/huggingface/hub-docs/blob/main/datasetcard.md?plain=1):
- copy-paste the tags obtained with the online tagging app: https://huggingface.co/spaces/huggingface/datasets-tagging
---
# Dataset Card Creation Guide
## Table of Contents
- [Dataset Card Creation Guide](#datas... | 0 |
hf_public_repos/datasets | hf_public_repos/datasets/notebooks/README.md | <!---
Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or ... | 0 |
hf_public_repos/datasets | hf_public_repos/datasets/notebooks/Overview.ipynb | # install datasets
!pip install datasets
# Let's import the library. We typically only need at most two methods:
from datasets import list_datasets, load_dataset
from pprint import pprint
# Currently available datasets
datasets = list_datasets()
print(f"🤩 Currently {len(datasets)} datasets are available on the hub:")
... | 0 |
hf_public_repos/datasets/src | hf_public_repos/datasets/src/datasets/inspect.py | # Copyright 2020 The HuggingFace Datasets Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or ... | 0 |
hf_public_repos/datasets/src | hf_public_repos/datasets/src/datasets/streaming.py | import importlib
import inspect
from functools import wraps
from typing import TYPE_CHECKING, Optional
from .download.download_config import DownloadConfig
from .download.streaming_download_manager import (
xbasename,
xdirname,
xet_parse,
xexists,
xgetsize,
xglob,
xgzip_open,
xisdir,
... | 0 |
hf_public_repos/datasets/src | hf_public_repos/datasets/src/datasets/table.py | import copy
import os
import warnings
from functools import partial
from itertools import groupby
from typing import TYPE_CHECKING, Callable, Iterator, List, Optional, Tuple, TypeVar, Union
import numpy as np
import pyarrow as pa
import pyarrow.compute as pc
from . import config
from .utils.logging import get_logger
... | 0 |
hf_public_repos/datasets/src | hf_public_repos/datasets/src/datasets/arrow_reader.py | # Copyright 2020 The HuggingFace Datasets Authors and the TensorFlow Datasets Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# U... | 0 |
hf_public_repos/datasets/src | hf_public_repos/datasets/src/datasets/load.py | # Copyright 2020 The HuggingFace Datasets Authors and the TensorFlow Datasets Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# U... | 0 |
hf_public_repos/datasets/src | hf_public_repos/datasets/src/datasets/info.py | # Copyright 2020 The HuggingFace Datasets Authors and the TensorFlow Datasets Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# U... | 0 |
hf_public_repos/datasets/src | hf_public_repos/datasets/src/datasets/arrow_dataset.py | # Copyright 2020 The HuggingFace Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to... | 0 |
hf_public_repos/datasets/src | hf_public_repos/datasets/src/datasets/builder.py | # Copyright 2020 The HuggingFace Datasets Authors and the TensorFlow Datasets Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# U... | 0 |
hf_public_repos/datasets/src | hf_public_repos/datasets/src/datasets/config.py | import importlib
import importlib.metadata
import logging
import os
import platform
from pathlib import Path
from typing import Optional
from packaging import version
logger = logging.getLogger(__name__.split(".", 1)[0]) # to avoid circular import from .utils.logging
# Datasets
S3_DATASETS_BUCKET_PREFIX = "https:/... | 0 |
hf_public_repos/datasets/src | hf_public_repos/datasets/src/datasets/exceptions.py | # SPDX-License-Identifier: Apache-2.0
# Copyright 2023 The HuggingFace Authors.
from typing import Any, Dict, List, Optional, Union
from huggingface_hub import HfFileSystem
from . import config
from .table import CastError
from .utils.track import TrackedIterable, tracked_list, tracked_str
class DatasetsError(Excep... | 0 |
hf_public_repos/datasets/src | hf_public_repos/datasets/src/datasets/naming.py | # Copyright 2020 The HuggingFace Datasets Authors and the TensorFlow Datasets Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# U... | 0 |
hf_public_repos/datasets/src | hf_public_repos/datasets/src/datasets/splits.py | # Copyright 2020 The HuggingFace Datasets Authors and the TensorFlow Datasets Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# U... | 0 |
hf_public_repos/datasets/src | hf_public_repos/datasets/src/datasets/metric.py | # Copyright 2020 The HuggingFace Datasets Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or a... | 0 |
hf_public_repos/datasets/src | hf_public_repos/datasets/src/datasets/__init__.py | # flake8: noqa
# Copyright 2020 The HuggingFace Datasets Authors and the TensorFlow Datasets Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LI... | 0 |
hf_public_repos/datasets/src | hf_public_repos/datasets/src/datasets/dataset_dict.py | import contextlib
import copy
import fnmatch
import json
import math
import posixpath
import re
import warnings
from io import BytesIO
from pathlib import Path
from typing import Callable, Dict, List, Optional, Sequence, Tuple, Union
import fsspec
import numpy as np
from huggingface_hub import (
CommitInfo,
Co... | 0 |
hf_public_repos/datasets/src | hf_public_repos/datasets/src/datasets/combine.py | from typing import List, Optional, TypeVar
from .arrow_dataset import Dataset, _concatenate_map_style_datasets, _interleave_map_style_datasets
from .dataset_dict import DatasetDict, IterableDatasetDict
from .info import DatasetInfo
from .iterable_dataset import IterableDataset, _concatenate_iterable_datasets, _interle... | 0 |
hf_public_repos/datasets/src | hf_public_repos/datasets/src/datasets/fingerprint.py | import inspect
import os
import random
import shutil
import tempfile
import weakref
from functools import wraps
from pathlib import Path
from typing import TYPE_CHECKING, Any, Callable, Dict, List, Optional, Tuple, Union
import numpy as np
import xxhash
from . import config
from .naming import INVALID_WINDOWS_CHARACT... | 0 |
hf_public_repos/datasets/src | hf_public_repos/datasets/src/datasets/data_files.py | import os
import re
from functools import partial
from glob import has_magic
from pathlib import Path, PurePath
from typing import Callable, Dict, List, Optional, Set, Tuple, Union
import huggingface_hub
from fsspec import get_fs_token_paths
from fsspec.implementations.http import HTTPFileSystem
from huggingface_hub i... | 0 |
hf_public_repos/datasets/src | hf_public_repos/datasets/src/datasets/search.py | import importlib.util
import os
import tempfile
from pathlib import PurePath
from typing import TYPE_CHECKING, Dict, List, NamedTuple, Optional, Union
import fsspec
import numpy as np
from .utils import logging
from .utils import tqdm as hf_tqdm
if TYPE_CHECKING:
from .arrow_dataset import Dataset # noqa: F401... | 0 |
hf_public_repos/datasets/src | hf_public_repos/datasets/src/datasets/distributed.py | from typing import TypeVar
from .arrow_dataset import Dataset, _split_by_node_map_style_dataset
from .iterable_dataset import IterableDataset, _split_by_node_iterable_dataset
DatasetType = TypeVar("DatasetType", Dataset, IterableDataset)
def split_dataset_by_node(dataset: DatasetType, rank: int, world_size: int) -... | 0 |
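The `split_dataset_by_node` signature shown in `distributed.py` above assigns each node a disjoint shard of the dataset by rank. One simple sharding strategy is rank-based striding, sketched below; the helper name `split_by_node` is invented for the example, and the real function uses different strategies for map-style and iterable datasets (e.g. contiguous splits when the length divides evenly).

```python
def split_by_node(examples, rank, world_size):
    """Keep every world_size-th example starting at this node's rank."""
    # Stride by world_size so the shards are disjoint and cover all examples
    return examples[rank::world_size]

shard0 = split_by_node(list(range(10)), rank=0, world_size=2)
shard1 = split_by_node(list(range(10)), rank=1, world_size=2)
```

Each rank trains on its own shard, so the union of all shards is the full dataset and no example is processed twice.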
hf_public_repos/datasets/src | hf_public_repos/datasets/src/datasets/keyhash.py | # Copyright 2020 The HuggingFace Datasets Authors and the TensorFlow Datasets Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# U... | 0 |
hf_public_repos/datasets/src | hf_public_repos/datasets/src/datasets/arrow_writer.py | # Copyright 2020 The HuggingFace Datasets Authors and the TensorFlow Datasets Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# Unless required by applicable law or agreed to in wr... | 0 |
hf_public_repos/datasets/src | hf_public_repos/datasets/src/datasets/iterable_dataset.py | import copy
import itertools
import sys
import warnings
from collections import Counter
from copy import deepcopy
from dataclasses import dataclass
from functools import partial
from itertools import cycle, islice
from typing import Any, Callable, Dict, Iterable, Iterator, List, Optional, Tuple, Union
import numpy as ... | 0 |
hf_public_repos/datasets/src/datasets | hf_public_repos/datasets/src/datasets/parallel/__init__.py | from .parallel import parallel_backend, parallel_map, ParallelBackendConfig # noqa F401
| 0 |
hf_public_repos/datasets/src/datasets/parallel/parallel.py:

import contextlib
from multiprocessing import Pool, RLock

from tqdm.auto import tqdm

from ..utils import experimental, logging

logger = logging.get_logger(__name__)

class ParallelBackendConfig:
    backend_name = None

@experimental
def parallel_map(function, iterable, num_proc, types, disable_tqdm, desc, single...
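When no joblib backend has been registered via `parallel_backend`, `parallel_map` falls back to a plain `multiprocessing.Pool`. A stripped-down sketch of that default path (hypothetical helper name; the real function also handles progress bars, nested structures, and backend dispatch):

```python
from multiprocessing import Pool

def square(x):
    # Worker callables must be picklable, hence defined at module top level.
    return x * x

def pool_map(function, iterable, num_proc):
    # Minimal sketch of the Pool-based fallback, assuming num_proc workers.
    with Pool(num_proc) as pool:
        return pool.map(function, iterable)

if __name__ == "__main__":
    pool_map(square, [1, 2, 3], num_proc=2)
```

The `__main__` guard matters on platforms that use the `spawn` start method, where child processes re-import the calling module.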
hf_public_repos/datasets/src/datasets/io/parquet.py:

import os
from typing import BinaryIO, Optional, Union

import numpy as np
import pyarrow.parquet as pq

from .. import Audio, Dataset, Features, Image, NamedSplit, Value, config
from ..features.features import FeatureType, _visit
from ..formatting import query_table
from ..packaged_modules import _PACKAGED_DATASETS_MO...
hf_public_repos/datasets/src/datasets/io/abc.py:

from abc import ABC, abstractmethod
from typing import Optional, Union

from .. import Dataset, DatasetDict, Features, IterableDataset, IterableDatasetDict, NamedSplit
from ..utils.typing import NestedDataStructureLike, PathLike

class AbstractDatasetReader(ABC):
    def __init__(
        self,
        path_or_paths: ...
hf_public_repos/datasets/src/datasets/io/csv.py:

import multiprocessing
import os
from typing import BinaryIO, Optional, Union

from .. import Dataset, Features, NamedSplit, config
from ..formatting import query_table
from ..packaged_modules.csv.csv import Csv
from ..utils import tqdm as hf_tqdm
from ..utils.typing import NestedDataStructureLike, PathLike
from .abc i...
hf_public_repos/datasets/src/datasets/io/generator.py:

from typing import Callable, Optional

from .. import Features
from ..packaged_modules.generator.generator import Generator
from .abc import AbstractDatasetInputStream

class GeneratorDatasetInputStream(AbstractDatasetInputStream):
    def __init__(
        self,
        generator: Callable,
        features: Optional...
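`GeneratorDatasetInputStream` ultimately builds a dataset by exhausting a user-supplied generator of example dicts. The rows-to-columns step can be sketched in plain Python (illustrative only, with a hypothetical helper name; the real builder writes Arrow record batches instead of plain lists):

```python
def columnarize(generator):
    # Collect a generator of {column: value} examples into per-column lists,
    # the shape that columnar tables are built from.
    columns = {}
    for example in generator():
        for name, value in example.items():
            columns.setdefault(name, []).append(value)
    return columns

def gen():
    for i in range(3):
        yield {"id": i, "text": f"example {i}"}
```

A call like `columnarize(gen)` mirrors what happens conceptually behind `Dataset.from_generator(gen)`.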
hf_public_repos/datasets/src/datasets/io/sql.py:

import multiprocessing
from typing import TYPE_CHECKING, Optional, Union

from .. import Dataset, Features, config
from ..formatting import query_table
from ..packaged_modules.sql.sql import Sql
from ..utils import tqdm as hf_tqdm
from .abc import AbstractDatasetInputStream

if TYPE_CHECKING:
    import sqlite3
    i...
hf_public_repos/datasets/src/datasets/io/json.py:

import multiprocessing
import os
from typing import BinaryIO, Optional, Union

import fsspec

from .. import Dataset, Features, NamedSplit, config
from ..formatting import query_table
from ..packaged_modules.json.json import Json
from ..utils import tqdm as hf_tqdm
from ..utils.typing import NestedDataStructureLike, Pa...
hf_public_repos/datasets/src/datasets/io/spark.py:

from typing import Optional

import pyspark

from .. import Features, NamedSplit
from ..download import DownloadMode
from ..packaged_modules.spark.spark import Spark
from .abc import AbstractDatasetReader

class SparkDatasetReader(AbstractDatasetReader):
    """A dataset reader that reads from a Spark DataFrame.
    ...
hf_public_repos/datasets/src/datasets/io/text.py:

from typing import Optional

from .. import Features, NamedSplit
from ..packaged_modules.text.text import Text
from ..utils.typing import NestedDataStructureLike, PathLike
from .abc import AbstractDatasetReader

class TextDatasetReader(AbstractDatasetReader):
    def __init__(
        self,
        path_or_paths: Nest...
hf_public_repos/datasets/src/datasets/packaged_modules/__init__.py:

import inspect
import re
from typing import Dict, List, Tuple

from huggingface_hub.utils import insecure_hashlib

from .arrow import arrow
from .audiofolder import audiofolder
from .cache import cache  # noqa F401
from .csv import csv
from .imagefolder import imagefolder
from .json import json
from .pandas import pand...
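The registry in this module hashes the source lines of each packaged builder so cache directories can be tied to a specific implementation (`insecure_hashlib` is huggingface_hub's non-security MD5 wrapper). Conceptually, the normalization-then-hash step looks like this (hypothetical helper; the real filtering details may differ):

```python
from hashlib import md5

def hash_python_lines(lines):
    # Drop blank lines and full-line comments so purely cosmetic edits
    # to a builder script do not invalidate cached datasets.
    filtered = [line for line in lines if line.strip() and not line.strip().startswith("#")]
    return md5("\n".join(filtered).encode("utf-8")).hexdigest()
```

Two sources that differ only in comments or whitespace hash identically, so the cache survives cosmetic refactors.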
hf_public_repos/datasets/src/datasets/packaged_modules/imagefolder/imagefolder.py:

from typing import List

import datasets
from datasets.tasks import ImageClassification

from ..folder_based_builder import folder_based_builder

logger = datasets.utils.logging.get_logger(__name__)

class ImageFolderConfig(folder_based_builder.FolderBasedBuilderConfig):
    """BuilderConfig for ImageFolder."""
    ...
hf_public_repos/datasets/src/datasets/packaged_modules/sql/sql.py:

import sys
from dataclasses import dataclass
from typing import TYPE_CHECKING, Dict, List, Optional, Tuple, Union

import pandas as pd
import pyarrow as pa

import datasets
import datasets.config
from datasets.features.features import require_storage_cast
from datasets.table import table_cast

if TYPE_CHECKING:
    im...
hf_public_repos/datasets/src/datasets/packaged_modules/generator/generator.py:

from dataclasses import dataclass
from typing import Callable, Optional

import datasets

@dataclass
class GeneratorConfig(datasets.BuilderConfig):
    generator: Optional[Callable] = None
    gen_kwargs: Optional[dict] = None
    features: Optional[datasets.Features] = None

    def __post_init__(self):
        asser...
hf_public_repos/datasets/src/datasets/packaged_modules/folder_based_builder/folder_based_builder.py:

import collections
import itertools
import os
from dataclasses import dataclass
from typing import List, Optional, Tuple, Type

import pandas as pd
import pyarrow as pa
import pyarrow.json as paj

import datasets
from datasets.features.features import FeatureType
from datasets.tasks.base import TaskTemplate

logger = ...
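Folder-based builders such as ImageFolder and AudioFolder conventionally take the parent directory name of each file as its class label. A simplified sketch of that convention (hypothetical helper, not the builder's actual code):

```python
import os

def infer_labels(file_paths):
    # Map each file to the name of its containing directory, and collect
    # the sorted set of directory names as the class vocabulary.
    labels = {path: os.path.basename(os.path.dirname(path)) for path in file_paths}
    class_names = sorted(set(labels.values()))
    return labels, class_names

files = ["train/cat/1.png", "train/cat/2.png", "train/dog/1.png"]
```

Calling `infer_labels(files)` on the layout above yields per-file labels plus the class list the builder would expose as a `ClassLabel` feature.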
hf_public_repos/datasets/src/datasets/packaged_modules/parquet/parquet.py:

import itertools
from dataclasses import dataclass
from typing import List, Optional

import pyarrow as pa
import pyarrow.parquet as pq

import datasets
from datasets.table import table_cast

logger = datasets.utils.logging.get_logger(__name__)

@dataclass
class ParquetConfig(datasets.BuilderConfig):
    """BuilderCo...
hf_public_repos/datasets/src/datasets/packaged_modules/pandas/pandas.py:

import itertools
from dataclasses import dataclass
from typing import Optional

import pandas as pd
import pyarrow as pa

import datasets
from datasets.table import table_cast

@dataclass
class PandasConfig(datasets.BuilderConfig):
    """BuilderConfig for Pandas."""

    features: Optional[datasets.Features] = None
    ...
hf_public_repos/datasets/src/datasets/packaged_modules/audiofolder/audiofolder.py:

from typing import List

import datasets
from datasets.tasks import AudioClassification

from ..folder_based_builder import folder_based_builder

logger = datasets.utils.logging.get_logger(__name__)

class AudioFolderConfig(folder_based_builder.FolderBasedBuilderConfig):
    """Builder Config for AudioFolder."""
    ...
hf_public_repos/datasets/src/datasets/packaged_modules/json/json.py:

import io
import itertools
import json
from dataclasses import dataclass
from typing import Optional

import pyarrow as pa
import pyarrow.json as paj

import datasets
from datasets.table import table_cast
from datasets.utils.file_utils import readline

logger = datasets.utils.logging.get_logger(__name__)

@dataclass
...
hf_public_repos/datasets/src/datasets/packaged_modules/csv/csv.py:

import itertools
from dataclasses import dataclass
from typing import Any, Callable, Dict, List, Optional, Union

import pandas as pd
import pyarrow as pa

import datasets
import datasets.config
from datasets.features.features import require_storage_cast
from datasets.table import table_cast
from datasets.utils.py_util...
hf_public_repos/datasets/src/datasets/packaged_modules/text/text.py:

import itertools
import warnings
from dataclasses import InitVar, dataclass
from io import StringIO
from typing import Optional

import pyarrow as pa

import datasets
from datasets.features.features import require_storage_cast
from datasets.table import table_cast

logger = datasets.utils.logging.get_logger(__name__)
...
hf_public_repos/datasets/src/datasets/packaged_modules/cache/cache.py:

import glob
import os
import shutil
import time
from pathlib import Path
from typing import List, Optional, Tuple

import pyarrow as pa

import datasets
import datasets.config
from datasets.naming import filenames_for_dataset_split

logger = datasets.utils.logging.get_logger(__name__)

def _get_modification_time(cach...
hf_public_repos/datasets/src/datasets/packaged_modules/arrow/arrow.py:

import itertools
from dataclasses import dataclass
from typing import Optional

import pyarrow as pa

import datasets
from datasets.table import table_cast

logger = datasets.utils.logging.get_logger(__name__)

@dataclass
class ArrowConfig(datasets.BuilderConfig):
    """BuilderConfig for Arrow."""

    features: Opt...
hf_public_repos/datasets/src/datasets/packaged_modules/spark/spark.py:

import os
import posixpath
import uuid
from dataclasses import dataclass
from typing import TYPE_CHECKING, Iterable, List, Optional, Tuple, Union

import numpy as np
import pyarrow as pa

import datasets
from datasets.arrow_writer import ArrowWriter, ParquetWriter
from datasets.config import MAX_SHARD_SIZE
from dataset...