tanuj437/sanskrit-bert-pos
This dataset is designed for fine-tuning the SanskritBERT model on Part-of-Speech (POS) tagging tasks. It contains sentences tokenized and annotated with POS tags.
The dataset provides two annotation variants: rich and simple.
The data is provided in JSONL (JSON Lines) format. Each line represents a single sentence entry.
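For illustration, a single line can be parsed with the standard `json` module. The Sanskrit tokens and tag names below are placeholders that only mirror the documented schema, not actual entries from the dataset:

```python
import json

# Hypothetical JSONL line following the schema described below;
# tokens and tags are illustrative placeholders only.
line = '{"id": 0, "tokens": ["ramah", "vanam", "gacchati"], "labels": ["NOUN", "NOUN", "VERB"]}'

entry = json.loads(line)
# Every token must have exactly one corresponding POS tag.
assert len(entry["tokens"]) == len(entry["labels"])
print(entry["tokens"], entry["labels"])
```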
Each entry has three fields:

- `id`: a unique identifier for the sentence.
- `tokens`: a list of strings, where each string is a word token in the sentence.
- `labels`: a list of strings corresponding to the POS tags for each token.

The dataset is split into rich and simple subsets, each with train, test, and validation splits:

- `pos_rich_train.jsonl`
- `pos_rich_test.jsonl`
- `pos_rich_valid.jsonl`
- `pos_simple_train.jsonl`
- `pos_simple_test.jsonl`
- `pos_simple_valid.jsonl`

You can load this dataset using the Hugging Face `datasets` library:
```python
from datasets import load_dataset

# Load the simple dataset
dataset_simple = load_dataset("json", data_files={
    "train": "pos_simple_train.jsonl",
    "test": "pos_simple_test.jsonl",
    "validation": "pos_simple_valid.jsonl",
})

# Load the rich dataset
dataset_rich = load_dataset("json", data_files={
    "train": "pos_rich_train.jsonl",
    "test": "pos_rich_test.jsonl",
    "validation": "pos_rich_valid.jsonl",
})
```
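Before fine-tuning a token-classification model, you typically need `label2id`/`id2label` mappings built from the tag inventory. A minimal sketch, using a hypothetical in-memory `examples` list in place of `dataset_simple["train"]` (the records and tag names are placeholders, not taken from the dataset):

```python
# `examples` stands in for dataset_simple["train"]; records are hypothetical
# and only follow the documented id/tokens/labels schema.
examples = [
    {"id": 0, "tokens": ["ramah", "gacchati"], "labels": ["NOUN", "VERB"]},
    {"id": 1, "tokens": ["sita", "pashyati"], "labels": ["NOUN", "VERB"]},
]

# Collect the unique tag set and assign each tag a stable integer id.
unique_labels = sorted({tag for ex in examples for tag in ex["labels"]})
label2id = {label: i for i, label in enumerate(unique_labels)}
id2label = {i: label for label, i in label2id.items()}
print(label2id)  # {'NOUN': 0, 'VERB': 1}
```

Sorting the label set first keeps the id assignment deterministic across runs, which matters when saving and reloading a fine-tuned checkpoint.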
If you use this dataset in your research, please cite the SanskritBERT paper or the source of this data.