
Sanskrit POS Tagging Dataset

This dataset is designed for fine-tuning the SanskritBERT model on Part-of-Speech (POS) tagging tasks. It contains tokenized sentences, each annotated with a POS tag per token.

The dataset provides two variants of annotations:

  1. Rich: Contains detailed morphological information or fine-grained POS tags, suitable for tasks requiring deep linguistic analysis.
  2. Simple: Contains simplified POS tags (likely Universal POS tags), suitable for general-purpose tagging and lighter models.

Dataset Structure

The data is provided in JSONL (JSON Lines) format. Each line represents a single sentence entry.

Data Fields

  • id: A unique identifier for the sentence.
  • tokens: A list of strings, where each string is a word token in the sentence.
  • labels: A list of strings corresponding to the POS tags for each token.
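Each line in the JSONL files is a JSON object carrying these three fields. The record below is illustrative only — the token and tag values are invented placeholders, not drawn from the actual corpus:

```python
import json

# An illustrative JSONL record (values are placeholders, not real corpus data)
line = '{"id": "0001", "tokens": ["ramah", "vanam", "gacchati"], "labels": ["NOUN", "NOUN", "VERB"]}'

record = json.loads(line)

# Every record pairs one label with one token
assert len(record["tokens"]) == len(record["labels"])
print(record["id"], list(zip(record["tokens"], record["labels"])))
```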

File Organization

The dataset is split into rich and simple subsets, each with train, test, and valid splits:

Rich Tags (Detailed)

  • pos_rich_train.jsonl
  • pos_rich_test.jsonl
  • pos_rich_valid.jsonl

Simple Tags (Simplified)

  • pos_simple_train.jsonl
  • pos_simple_test.jsonl
  • pos_simple_valid.jsonl

Usage

You can load this dataset using the Hugging Face datasets library:

from datasets import load_dataset

# Load the simple dataset
dataset_simple = load_dataset("json", data_files={
    "train": "pos_simple_train.jsonl",
    "test": "pos_simple_test.jsonl",
    "validation": "pos_simple_valid.jsonl"
})

# Load the rich dataset
dataset_rich = load_dataset("json", data_files={
    "train": "pos_rich_train.jsonl",
    "test": "pos_rich_test.jsonl",
    "validation": "pos_rich_valid.jsonl"
})
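For token-classification fine-tuning, the string tags must be mapped to integer ids. A minimal sketch of building that mapping — the tag lists below are placeholders; in practice, iterate over `dataset_simple["train"]["labels"]` after loading to collect the real tag set:

```python
# Placeholder label sequences; replace with dataset_simple["train"]["labels"]
sentences_labels = [
    ["NOUN", "VERB"],
    ["NOUN", "NOUN", "VERB"],
]

# Collect the unique tags and assign each a stable integer id
unique_tags = sorted({tag for labels in sentences_labels for tag in labels})
label2id = {tag: i for i, tag in enumerate(unique_tags)}
id2label = {i: tag for tag, i in label2id.items()}

# Encode one sentence's tags as ids, as a model head would expect
encoded = [label2id[tag] for tag in sentences_labels[1]]
```

The `id2label`/`label2id` dictionaries can then be passed to a token-classification model configuration so predictions decode back to tag names.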

Citation

If you use this dataset in your research, please cite the SanskritBERT paper or the source of this data.
