The full dataset viewer is not available; only a preview of the rows is shown.

Error code: DatasetGenerationCastError
Message: An error occurred while generating the dataset.
All the data files must have the same columns, but at some point there are 4 new columns ({'name', 'parent', 'metadata', 'id'}) and 6 missing columns ({'source_file', 'module_name', 'methods_count', 'events_count', 'properties_count', 'content'}).
This happened while the json dataset builder was generating data using hf://datasets/Aptlantis/nodejs-all.json/nodejs_granular.jsonl (at revision 1247cc36a89cc104a30319f693b01616953e6e4d), alongside nodejs_documents.jsonl from the same revision.
Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations).
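Since the two JSONL files intentionally have different schemas, the second fix the error message suggests is the applicable one. A minimal sketch of README front matter declaring one configuration per file might look like this (config names here are assumptions, not the dataset's actual card):

```yaml
configs:
  - config_name: documents
    data_files: nodejs_documents.jsonl
  - config_name: granular
    data_files: nodejs_granular.jsonl
```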
| source_file | module_name | type | description | methods_count | properties_count | events_count | content |
|---|---|---|---|---|---|---|---|
| doc/api/all.markdown | timers | module | <p>All of the timer functions are globals. You do not need to <code>require()</code> this module in order to use them.</p> | 8 | 0 | 0 | {"description": "<p>All of the timer functions are globals. You do not need to <code>require()</code>\nthis module in order to use them.\n\n</p>\n", "methods": [{"type": "method", "module": "timers", "name": "setTimeout", "description": "<p>To schedule execution of a one-time <code>callback</code> after <code>delay</c... |
| doc/api/all.markdown | module | module | <p>Node has a simple module loading system. In Node, files and modules are in one-to-one correspondence. As an example, <code>foo.js</code> loads the module <code>circle.js</code> in the same directory.</p> <p>The contents of <code>foo.js</code>:</p> <pre><code>var circle = require('./circle.js'); console... | 0 | 0 | 0 | {"description": "<p>Node has a simple module loading system. In Node, files and modules are in\none-to-one correspondence. As an example, <code>foo.js</code> loads the module\n<code>circle.js</code> in the same directory.\n\n</p>\n<p>The contents of <code>foo.js</code>:\n\n</p>\n<pre><code>var circle = require('.... |
| doc/api/all.markdown | addons | module | <p>Addons are dynamically linked shared objects. They can provide glue to C and C++ libraries. The API (at the moment) is rather complex, involving knowledge of several libraries:</p> <ul><li><p>V8 JavaScript, a C++ library. Used for interfacing with JavaScript: creating objects, calling functions, etc. Documented ... | 0 | 0 | 0 | {"description": "<p>Addons are dynamically linked shared objects. They can provide glue to C and\nC++ libraries. The API (at the moment) is rather complex, involving\nknowledge of several libraries:\n\n</p>\n<ul>\n<li><p>V8 JavaScript, a C++ library. Used for interfacing with JavaScript:\ncreating objects, calling func... |
| doc/api/all.markdown | util | module | <p>These functions are in the module <code>'util'</code>. Use <code>require('util')</code> to access them.</p> | 13 | 0 | 0 | {"description": "<p>These functions are in the module <code>'util'</code>. Use <code>require('util')</code> to access\nthem.\n\n\n</p>\n", "methods": [{"type": "method", "module": "util", "name": "format", "description": "<p>Returns a formatted string using the first argument as a <code>printf</code>-li... |
| doc/api/all.markdown | Events | module | <p>Many objects in Node emit events: a <code>net.Server</code> emits an event each time a peer connects to it, a <code>fs.readStream</code> emits an event when the file is opened. All objects which emit events are instances of <code>events.EventEmitter</code>. You can access this module by doing: <code>require("ev... | 8 | 0 | 1 | {"description": "<p>Many objects in Node emit events: a <code>net.Server</code> emits an event each time\na peer connects to it, a <code>fs.readStream</code> emits an event when the file is\nopened. All objects which emit events are instances of <code>events.EventEmitter</code>.\nYou can access this module by doing: <c... |
| doc/api/all.markdown | domain | module | <p>Domains provide a way to handle multiple different IO operations as a single group. If any of the event emitters or callbacks registered to a domain emit an <code>error</code> event, or throw an error, then the domain object will be notified, rather than losing the context of the error in the <code>process.on('... | 7 | 1 | 0 | {"description": "<p>Domains provide a way to handle multiple different IO operations as a\nsingle group. If any of the event emitters or callbacks registered to a\ndomain emit an <code>error</code> event, or throw an error, then the domain object\nwill be notified, rather than losing the context of the error in the\n<... |
| doc/api/all.markdown | buffer | module | <p>Pure JavaScript is Unicode friendly but not nice to binary data. When dealing with TCP streams or the file system, it's necessary to handle octet streams. Node has several strategies for manipulating, creating, and consuming octet streams.</p> <p>Raw data is stored in instances of the <code>Buffer</code> clas... | 34 | 3 | 0 | {"description": "<p>Pure JavaScript is Unicode friendly but not nice to binary data. When\ndealing with TCP streams or the file system, it's necessary to handle octet\nstreams. Node has several strategies for manipulating, creating, and\nconsuming octet streams.\n\n</p>\n<p>Raw data is stored in instances of the <... |
| doc/api/all.markdown | stream | module | <p>A stream is an abstract interface implemented by various objects in Node. For example a request to an HTTP server is a stream, as is stdout. Streams are readable, writable, or both. All streams are instances of [EventEmitter][]</p> <p>You can load the Stream base classes by doing <code>require('stream')</... | 19 | 0 | 10 | {"description": "<p>A stream is an abstract interface implemented by various objects in\nNode. For example a request to an HTTP server is a stream, as is\nstdout. Streams are readable, writable, or both. All streams are\ninstances of [EventEmitter][]\n\n</p>\n<p>You can load the Stream base classes by doing <code>requ... |
| doc/api/all.markdown | crypto | module | <pre><code>Stability: 2 - Unstable; API changes are being discussed for future versions. Breaking changes will be minimized. See below.</code></pre> <p>Use <code>require('crypto')</code> to access this module.</p> <p>The crypto module offers a way of encapsulating secure credentials to be used as part of a ... | 40 | 1 | 0 | {"description": "<pre><code>Stability: 2 - Unstable; API changes are being discussed for\nfuture versions. Breaking changes will be minimized. See below.</code></pre>\n<p>Use <code>require('crypto')</code> to access this module.\n\n</p>\n<p>The crypto module offers a way of encapsulating secure credentials to... |
| doc/api/all.markdown | tls_(ssl) | module | <p>Use <code>require('tls')</code> to access this module.</p> <p>The <code>tls</code> module uses OpenSSL to provide Transport Layer Security and/or Secure Socket Layer: encrypted stream communication.</p> <p>TLS/SSL is a public/private key infrastructure. Each client and each server must have a private key... | 11 | 8 | 6 | {"description": "<p>Use <code>require('tls')</code> to access this module.\n\n</p>\n<p>The <code>tls</code> module uses OpenSSL to provide Transport Layer Security and/or\nSecure Socket Layer: encrypted stream communication.\n\n</p>\n<p>TLS/SSL is a public/private key infrastructure. Each client and each\nserve... |
| doc/api/all.markdown | stringdecoder | module | <p>To use this module, do <code>require('string_decoder')</code>. StringDecoder decodes a buffer to a string. It is a simple interface to <code>buffer.toString()</code> but provides additional support for utf8.</p> <pre><code>var StringDecoder = require('string_decoder').StringDecoder; var decoder = n... | 2 | 0 | 0 | {"description": "<p>To use this module, do <code>require('string_decoder')</code>. StringDecoder decodes a\nbuffer to a string. It is a simple interface to <code>buffer.toString()</code> but provides\nadditional support for utf8.\n\n</p>\n<pre><code>var StringDecoder = require('string_decoder').StringDe... |
# Node.js API Dataset

**Source:** Node.js official documentation (JSON variant). **Processing type:** extractive, hierarchical flattening.

## Overview

This dataset contains a structured representation of the Node.js API, derived from the official nodejs.json distribution. Unlike raw documentation dumps, it has been processed into two distinct formats serving different machine learning and analytical purposes: macro-level (Documents) and micro-level (Granular Items).

This "dataset-as-a-repo" approach ensures the data is not just a transient output of a pipeline but a versioned, maintained artifact suitable for training high-quality code models.
## Methodology & Design Choices

### 1. The "Abstract-to-Concrete" Philosophy

The core design philosophy is that code intelligence requires understanding both the forest (modules, high-level concepts) and the trees (individual function signatures, property types).

- **Raw input:** `nodejs.all.json` is a massive, nested structure that can be overwhelming for simple sequential models.
- **Transformation:** We "pulled apart" the JSON to create focused training examples.
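The "pulling apart" step can be pictured as follows. This is a minimal sketch, not the actual `src/process_nodejs.py` code: the `flatten` helper and the tiny input document are hypothetical, though the output fields mirror the granular schema documented on this card.

```python
# Sketch of hierarchical flattening: one module-level document becomes
# one granular row per method, with a namespaced id.
module_doc = {
    "module_name": "fs",
    "type": "module",
    "description": "<p>File system module.</p>",
    "methods": [
        {"name": "readFile", "description": "<p>Read a file.</p>"},
        {"name": "writeFile", "description": "<p>Write a file.</p>"},
    ],
}

def flatten(doc):
    """Yield one granular record per method of a module document."""
    for m in doc["methods"]:
        yield {
            "id": f'{doc["module_name"]}.{m["name"]}',
            "parent": doc["module_name"],
            "type": "method",
            "name": m["name"],
            "description": m["description"],
        }

granular = list(flatten(module_doc))
print([g["id"] for g in granular])  # -> ['fs.readFile', 'fs.writeFile']
```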
### 2. Dual-Format Output

We intentionally avoided a "one-size-fits-all" schema.

- **Documents** (`nodejs_documents.jsonl`): preserves the cohesiveness of a module. Good for teaching a model "concept association" (e.g., that `fs.readFile` belongs with `fs.writeFile`).
- **Granular** (`nodejs_granular.jsonl`): isolates every single function and property. Good for instruction tuning (e.g., "Write a function signature for `http.createServer`").
### 3. File Formats

- **JSONL:** chosen for its streaming capabilities and human readability. Well suited to NLP pipelines.
- **Parquet:** chosen for the granular dataset to allow fast columnar access, filtering, and analysis (e.g., "find all methods with > 3 arguments").
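The streaming property of JSONL means a consumer never has to hold the whole file in memory. A minimal sketch (the in-memory buffer stands in for one of the `.jsonl` files; the record contents are illustrative):

```python
import io
import json

def iter_jsonl(lines):
    """Stream records one at a time from an iterable of JSONL lines."""
    for line in lines:
        line = line.strip()
        if line:  # skip blank lines
            yield json.loads(line)

# An in-memory buffer standing in for nodejs_granular.jsonl.
buf = io.StringIO(
    '{"id": "fs.readFile", "parent": "fs"}\n'
    '{"id": "fs.writeFile", "parent": "fs"}\n'
)
ids = [rec["id"] for rec in iter_jsonl(buf)]
print(ids)  # -> ['fs.readFile', 'fs.writeFile']
```

With a real file, `iter_jsonl(open("output/nodejs_granular.jsonl"))` would stream it the same way.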
## Dataset Structure

### Output Location

All processed files are located in `output/`:

    output/
    ├── nodejs_documents.jsonl   # High-level module data
    ├── nodejs_granular.jsonl    # Individual API items
    └── nodejs_granular.parquet  # Parquet version of granular data
### Schema: Documents (`nodejs_documents.jsonl`)

Each row represents a whole module (e.g., `buffer`, `http`).

| Field | Type | Description |
|---|---|---|
| `source_file` | string | Documentation file the module was extracted from (e.g., `doc/api/all.markdown`). |
| `module_name` | string | Name of the module (e.g., `fs`). |
| `type` | string | Usually `module` or `global`. |
| `description` | string | Raw HTML/Markdown description of the module. |
| `methods_count` | int64 | Number of methods in the module. |
| `properties_count` | int64 | Number of properties in the module. |
| `events_count` | int64 | Number of events in the module. |
| `content` | json-string | Full nested JSON blob of the module's contents (methods, props). |
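Because `content` is stored as a JSON string rather than a nested column, consumers decode it per row. A sketch with a hypothetical row (field values are illustrative, not taken from the file):

```python
import json

# Hypothetical documents row; the real `content` blob carries the
# module's full nested structure as a JSON string.
row = {
    "module_name": "timers",
    "type": "module",
    "description": "<p>All of the timer functions are globals.</p>",
    "content": json.dumps({
        "methods": [{"name": "setTimeout"}, {"name": "setInterval"}],
    }),
}

content = json.loads(row["content"])  # decode the nested blob
print([m["name"] for m in content["methods"]])  # -> ['setTimeout', 'setInterval']
```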
### Schema: Granular (`nodejs_granular.jsonl`)

Each row represents a single API item (function, property, or event).

| Field | Type | Description |
|---|---|---|
| `id` | string | Unique namespaced ID (e.g., `fs.readFile`). |
| `parent` | string | Parent module (e.g., `fs`). |
| `type` | string | `method`, `property`, `event`, etc. |
| `name` | string | Short name (e.g., `readFile`). |
| `description` | string | Description of just this item. |
| `metadata` | json-string | Detailed signatures, params, stability indices. |
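As with `content` above, `metadata` is a JSON string and is decoded per row. A sketch with a hypothetical record (the inner structure of `metadata` shown here is an assumption; inspect a real row for the exact keys):

```python
import json

# Hypothetical granular record for fs.readFile.
record = {
    "id": "fs.readFile",
    "parent": "fs",
    "type": "method",
    "name": "readFile",
    "description": "<p>Asynchronously reads the entire contents of a file.</p>",
    "metadata": json.dumps({
        "signatures": [{"params": [{"name": "filename"}, {"name": "callback"}]}],
        "stability": 3,
    }),
}

meta = json.loads(record["metadata"])  # metadata is stored as a JSON string
param_names = [p["name"] for p in meta["signatures"][0]["params"]]
print(param_names)  # -> ['filename', 'callback']
```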
## Use Cases

### 1. Pre-Training Code Models

Feed `nodejs_documents.jsonl` into a language model to teach it the general structure and API surface of Node.js. The large context windows of modern LLMs can easily ingest entire modules.

### 2. Instruction Tuning / RAG

Use `nodejs_granular.jsonl` to build a Retrieval-Augmented Generation (RAG) system.

- **Query:** "How do I read a file in Node?"
- **Retrieval:** Search against the `description` field in the granular dataset.
- **Context:** Retrieve the exact `metadata` (signature) for `fs.readFile`.
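The retrieval step can be sketched with a deliberately naive keyword-overlap scorer over the `description` field (a real RAG stack would use embeddings; the two records here are hypothetical miniatures of the granular data):

```python
import re

def score(query, text):
    """Naive keyword-overlap score between a query and a description."""
    q = set(re.findall(r"\w+", query.lower()))
    t = set(re.findall(r"\w+", text.lower()))
    return len(q & t)

records = [
    {"id": "fs.readFile", "description": "Asynchronously read the entire contents of a file."},
    {"id": "http.createServer", "description": "Returns a new instance of http.Server."},
]

query = "How do I read a file in Node?"
best = max(records, key=lambda r: score(query, r["description"]))
print(best["id"])  # -> fs.readFile
```

Once the best record is found, its `metadata` field supplies the exact signature as grounding context.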
### 3. API Analysis

Use `nodejs_granular.parquet` with Pandas/DuckDB to answer meta-questions:

- Which Node.js APIs are marked as Experimental?
- What is the average number of arguments for `fs` methods vs `http` methods?
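The shape of such an analysis, sketched with a stdlib-only stand-in for the Parquet file (the three rows are hypothetical; with the real file you would instead run something like `duckdb.sql("SELECT parent, count(*) FROM 'nodejs_granular.parquet' WHERE type = 'method' GROUP BY parent")`):

```python
from collections import Counter

# In-memory stand-in for rows of nodejs_granular.parquet.
rows = [
    {"parent": "fs", "type": "method", "name": "readFile"},
    {"parent": "fs", "type": "method", "name": "writeFile"},
    {"parent": "http", "type": "method", "name": "createServer"},
]

# Count methods per parent module.
methods_per_module = Counter(r["parent"] for r in rows if r["type"] == "method")
print(dict(methods_per_module))  # -> {'fs': 2, 'http': 1}
```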
## Provenance

- **Original file:** `datasets/raw/nodejs.all.json`
- **Script:** `src/process_nodejs.py`
- **Maintainer:** Antigravity (Agent) / User