
mitbunny AI Influencers Dataset

This project is supported by the 交易猫基金会 (CA: 0x8a99b8d53eff6bc331af529af74ad267f3167777).

This is a dataset of public account information extracted and structured from x.mitbunny.ai. It contains basic account fields (name, handle, follower counts, etc.) together with scrape metadata (source, fetch time, the update time stated on the upstream page, etc.).

What you get

  • data/ai_influencers.csv: a flattened CSV for spreadsheets, BI tools, and scripts.
  • data/ai_influencers.json: structured JSON containing meta + nodes (better suited to programmatic use).
  • data/meta.json: scrape metadata (source, timestamps, node/link counts, etc.).
  • data/handles.txt: one handle per line (without the @).
  • data/handles_at.txt: one handle per line (with the @).
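As a minimal sketch of how the JSON asset relates to the derived handle lists, the helper below assumes the structure described above ({ meta, nodes }, with each node carrying a handle field); the function name and the inline sample are illustrative, not part of the repo:

```javascript
// Derive the handles.txt / handles_at.txt contents from the JSON asset.
// Assumed shape: { meta: {...}, nodes: [{ handle: "...", ... }, ...] }.
function deriveHandles(data, withAt = false) {
  return data.nodes.map((n) => (withAt ? "@" + n.handle : n.handle));
}

// Inline sample standing in for data/ai_influencers.json.
const sample = {
  meta: { counts: { nodes: 2, links: 0 } },
  nodes: [{ handle: "alice_ai" }, { handle: "bob_ml" }],
};

console.log(deriveHandles(sample).join("\n"));       // handles.txt style
console.log(deriveHandles(sample, true).join("\n")); // handles_at.txt style
```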

Field reference (CSV)

The CSV header is:

id,name,handle,x_url,group,role,verified,followers,following,joined_date,location,website,associated,bio,image_url,bio_tags

  • followers/following: numeric; empty when missing.
  • bio_tags: stored as a JSON string in the CSV (e.g. ["Neural Networks"]); in the JSON asset it is an array.

Stricter field constraints are defined in schemas/ai_influencers.schema.json.
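The field notes above can be applied when reading the CSV. The following is a hedged sketch, not the repo's own code: a hypothetical coerceRow helper that restores the JSON-side types (numbers for the count fields, an array for bio_tags) from the stringly-typed CSV values:

```javascript
// Coerce a parsed CSV row (all string values) back to JSON-side types.
// Field names follow the CSV header; the rules mirror the notes above:
// followers/following are numeric or empty, bio_tags is a JSON string.
function coerceRow(row) {
  return {
    ...row,
    followers: row.followers === "" ? null : Number(row.followers),
    following: row.following === "" ? null : Number(row.following),
    bio_tags: row.bio_tags ? JSON.parse(row.bio_tags) : [],
  };
}

// Illustrative row as a CSV parser would hand it over.
const row = {
  handle: "alice_ai",
  followers: "12400",
  following: "",
  bio_tags: '["Neural Networks","LLM"]',
};

const typed = coerceRow(row);
console.log(typed.followers, typed.following, typed.bio_tags);
```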

Data source and compliance notes

  • Source: https://x.mitbunny.ai (the extraction script parses the Vite bundle referenced by the site's homepage and pulls the graph data out of it).
  • This repository only curates the data and makes the export reproducible; account information may change over time and may include personal public-profile data. Assess compliance and platform terms-of-service requirements yourself before use.
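The extraction script itself is not shown here; as an assumption about its first step, locating the Vite-bundled asset referenced by the homepage might look like this (the function name and regex are illustrative only):

```javascript
// Find the module bundle a Vite-built page loads, e.g.
// <script type="module" src="/assets/index-abc123.js">, and resolve it
// against the site's base URL. Returns null if no such tag is found.
function findViteAsset(html, baseUrl) {
  const m = html.match(/<script[^>]+type="module"[^>]+src="([^"]+)"/);
  return m ? new URL(m[1], baseUrl).href : null;
}

const html = '<script type="module" src="/assets/index-abc123.js"></script>';
console.log(findViteAsset(html, "https://x.mitbunny.ai"));
// → https://x.mitbunny.ai/assets/index-abc123.js
```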

How to refresh the data

Prerequisite: Node.js 18+ (with built-in fetch).

npm run extract

The output is written to data/, and a scrape summary plus a preview of the top 10 accounts by followers is printed to the terminal.

If x.mitbunny.ai is unreachable from your network, you can regenerate the derived files from the existing JSON and run a consistency check:

npm run rebuild
npm run validate
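The actual validate script is not shown in this card; as an assumption, a consistency check along these lines would catch the most likely drift between the JSON asset and its metadata (count mismatches, duplicate handles):

```javascript
// Hypothetical consistency checks over the { meta, nodes } structure:
// meta.counts.nodes must match the nodes array, and handles must be unique.
function validateGraph(data) {
  const errors = [];
  if (data.meta.counts.nodes !== data.nodes.length) {
    errors.push("meta.counts.nodes does not match nodes.length");
  }
  const seen = new Set();
  for (const n of data.nodes) {
    if (seen.has(n.handle)) errors.push(`duplicate handle: ${n.handle}`);
    seen.add(n.handle);
  }
  return errors;
}

const good = {
  meta: { counts: { nodes: 2, links: 0 } },
  nodes: [{ handle: "a" }, { handle: "b" }],
};
console.log(validateGraph(good)); // empty array means the export is consistent
```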

License

This repository currently declares no open license; usage is governed by the LICENSE file. Before open-sourcing, confirm the data provenance and rights boundaries first, then replace it with a suitable open license.
