Dataset Card for Syosetu711K
The BigKnow2022 dataset and its subsets are not yet complete. Some information here may be inaccurate or inaccessible.
Dataset Summary
Syosetu711K is a dataset composed of approximately 711,700 novels scraped from the Japanese novel self-publishing website Syosetuka ni Narou (JA: 小説家になろう, lit. "Let's Become a Novelist") between March 26 and March 27, 2023. The dataset contains most if not all novels published on the site, regardless of length or quality; however, we include metadata so users of this dataset can filter and evaluate its contents.
Syosetu711K is a dataset of approximately 711,700 novels scraped between March 26 and 27, 2023 from the Japanese novel self-publishing site Shōsetsuka ni Narō. It includes most novels published on the site, regardless of length or quality. Because each novel's ID is included, its details can also be looked up through the Narou API.
Supported Tasks and Leaderboards
This dataset is primarily intended for unsupervised training of text generation models; however, it may be useful for other purposes.
- text-classification
- text-generation
Languages
- Japanese
Dataset Structure
Data Instances
{
"text": "【小説タイトル】\n焼けて爛れる恋よりも、微睡む優しい愛が欲しい\n【Nコード】\nN5029ID\n【作者名】\n秋暁秋季\n【あらすじ】\n俺の彼女は物凄く気の多い人だった。\nお眼鏡に適う奴が居れば、瞳孔を蕩けさせる人だった。\nその癖照れ屋で、すぐに目を逸らす。\nな...",
"meta": {
"subset": "syosetu",
"q": 0.6,
"id": "N5029ID",
"author": "秋暁秋季",
"userid": 719797,
"title": "焼けて爛れる恋よりも、微睡む優しい愛が欲しい",
"length": 871,
"points": 0,
"lang": "ja",
"chapters": 1,
"keywords": ["気が多い", "浮気性", "無愛想", "照れる", "嫉妬", "好みではない", "クソデカ感情", "空気のような安心感"],
"isr15": 0,
"genre": 102,
"biggenre": 1
}
}
{
"text": "【小説タイトル】\n【能力者】\n【Nコード】\nN9864IB\n【作者名】\n夢音いちご\n【あらすじ】\n私立アビリティ学園。\n小・中・高・大が一貫となった、大規模な名門校。\nそして、ここは規模の大きさだけでなく、ある特殊な制度を設けて\nいることでも有名だ。\nそれ...",
"meta": {
"subset": "syosetu",
"q": 0.6,
"id": "N9864IB",
"author": "夢音いちご",
"userid": 1912777,
"title": "【能力者】",
"length": 2334,
"points": 0,
"lang": "ja",
"chapters": 2,
"keywords": ["ガールズラブ", "身分差", "伝奇", "日常", "青春", "ラブコメ", "女主人公", "学園", "魔法", "超能力"],
"isr15": 0,
"genre": 202,
"biggenre": 2
}
}
Data Fields
- text: the actual novel text, all chapters
- meta: novel metadata
  - subset: dataset tag: syosetu
  - lang: dataset language: ja (Japanese)
  - id: novel ID/ncode
  - author: author name
  - userid: author user ID
  - title: novel title
  - length: novel length in words
  - points: global points (corresponds to global_point from the Syosetu API)
  - q: q-score (quality score) calculated based on points
  - chapters: number of chapters (corresponds to general_all_no from the Syosetu API)
  - keywords: array of novel keywords (corresponds to keyword from the Syosetu API, split on spaces)
  - isr15: whether the novel is rated R15+
  - genre: novel genre ID (optional, see Syosetu API documentation)
  - biggenre: general novel genre ID (optional, see Syosetu API documentation)
  - isr18: whether the novel is rated R18+
  - nocgenre: novel genre ID (optional, only available if isr18 is true, see Syosetu API documentation)
For further reference, see the Syosetuka ni Narou API documentation: https://dev.syosetu.com/man/api/ (JA).
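As an illustration of how this metadata can be used, the hypothetical helper below filters records on the q and isr18 fields described above; the function name and thresholds are our own assumptions, not part of the dataset:

```python
from typing import Any, Dict

def keep_record(meta: Dict[str, Any], min_q: float = 0.7, allow_r18: bool = False) -> bool:
    """Keep a novel only if its q-score clears a threshold and,
    unless allow_r18 is set, it is not rated R18+."""
    if meta.get("q", 0.0) < min_q:
        return False
    if not allow_r18 and meta.get("isr18", False):
        return False
    return True

# Metadata mirroring the first example record in this card.
meta = {"subset": "syosetu", "lang": "ja", "q": 0.6, "id": "N5029ID", "isr15": 0}
print(keep_record(meta))  # q = 0.6 is below the 0.7 threshold, so False
```

With the `datasets` library, such a predicate could be passed to `Dataset.filter` over each record's meta column.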
Q-Score Distribution
0.00: 0
0.10: 0
0.20: 0
0.30: 0
0.40: 0
0.50: 213005
0.60: 331393
0.70: 101971
0.80: 63877
0.90: 1542
1.00: 2
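The table above appears to bin each novel's q value into 0.1-wide buckets. A minimal sketch of that bucketing, assuming floor-style binning (our guess at how the table was produced):

```python
from collections import Counter

def q_bucket(q: float) -> float:
    """Map a q-score to the lower edge of its 0.1-wide bucket,
    e.g. 0.708 -> 0.7 (the small epsilon guards against float error)."""
    return int(q * 10 + 1e-9) / 10

# q values taken from the example records in this card.
qs = [0.6, 0.6, 0.708, 0.63, 0.6599999999999999, 0.6]
print(Counter(q_bucket(q) for q in qs))
```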
Data Splits
No splitting of the data was performed.
Dataset Creation
Curation Rationale
Syosetuka ni Narou is the most popular website in Japan for authors wishing to self-publish their novels online. Many works on the site have been picked up by large commercial publishers. Because of this, we believe that this dataset provides a large corpus of high-quality, creative content in the Japanese language.
Source Data
Initial Data Collection and Normalization
More information about any referenced scripts, commands, or programs used may be found in the BigKnow2022 GitHub repository.
First, metadata for all novels on the site was gathered into a JSON lines (JSONL) file. The Syosetuka ni Narou API was used to obtain this information.
Second, this listing was used to create a secondary text file containing a list of only the novel "ncodes," or IDs. This secondary file was distributed to downloader nodes.
Third, the sister site https://pdfnovels.net was queried with each novel ID, and the resulting PDF was saved for later processing.
Fourth, the pdftotext tool was used to convert the PDF files to text documents. A few other scripts were then used to clean up the resulting text files.
Finally, the text files and other metadata were converted into the specified data field schema above, and the resulting JSON entries were concatenated into the Syosetu711K dataset. The version uploaded to this repository, however, is split into multiple files, numbered 00 through 20 inclusive.
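The final assembly step can be sketched as follows; make_entry is a hypothetical illustration of combining a cleaned text file with its API metadata into the text/meta schema, not the actual BigKnow2022 script:

```python
import json

def make_entry(text: str, meta: dict) -> str:
    """Serialize one novel as a JSON line with the text/meta layout
    used by Syosetu711K (ensure_ascii=False keeps Japanese readable)."""
    return json.dumps({"text": text, "meta": meta}, ensure_ascii=False)

line = make_entry("【小説タイトル】\n...", {"subset": "syosetu", "lang": "ja", "id": "N5029ID"})
print(line)
```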
Who are the source language producers?
The authors of each novel.
Annotations
Annotation process
Titles and general genre were collected alongside the novel text and IDs.
Who are the annotators?
There were no human annotators.
Personal and Sensitive Information
The dataset contains only works of fiction, and we do not believe it contains any PII.
Considerations for Using the Data
Social Impact of Dataset
This dataset is intended to be useful for anyone who wishes to train a model to generate "more entertaining" content in Japanese. Depending on the language model, it may also prove useful for other languages.
Discussion of Biases
This dataset is composed of fictional works by various authors. Because of this fact, the contents of this dataset will reflect the biases of those authors. Additionally, this dataset contains NSFW material and was not filtered. Beware of stereotypes.
Other Known Limitations
N/A
Additional Information
Dataset Curators
Ronsor Labs
Licensing Information
Apache 2.0, for all parts of which Ronsor Labs or the Ryoko AI Production Committee may be considered authors. All other material is distributed under fair use principles.
Citation Information
@misc{ryokoai2023-bigknow2022,
title = {BigKnow2022: Bringing Language Models Up to Speed},
author = {Ronsor},
year = {2023},
howpublished = {\url{https://github.com/RyokoAI/BigKnow2022}},
}
Contributions
Thanks to @ronsor (GH) for gathering this dataset.