GPT-OSS 20B Continuous Decode Traces (1k tokens)

This dataset contains a continuous power trace captured with a ChipWhisperer Husky Plus while running openai/gpt-oss-20b in decode mode on an NVIDIA H100. The capture covers 1000 decode steps in a single continuous streamed trace.

Contents

run/

  • trace.npy: raw continuous ADC trace (float16)
  • trace_resampled.npy: post-hoc resampled trace at 16384 points per 10 ms
  • timeline.json: timestamped model events with decode/layer/MoE/expert boundaries
  • expert_selections.pt: routed expert selections per decode step/layer
  • capture_meta.json: scope/model/capture metadata
  • inputs.pt: tokenized prompt inputs used to seed generation
  • prompt.txt: prompt text
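
A minimal sketch of reading the bundle with NumPy and the standard library. Synthetic stand-ins are written to a temp directory here so the snippet is self-contained; in practice, point `run_dir` at the downloaded `run/` directory. The `"events"` key in the timeline stand-in is an assumption, not the real schema; check timeline.json for the actual field names.

```python
import json
import tempfile
from pathlib import Path

import numpy as np

# Synthetic stand-ins for the bundle files; replace run_dir with the
# real downloaded run/ directory in practice.
run_dir = Path(tempfile.mkdtemp())
np.save(run_dir / "trace.npy", np.zeros(1000, dtype=np.float16))
(run_dir / "timeline.json").write_text(json.dumps({"events": []}))  # assumed schema

trace = np.load(run_dir / "trace.npy")  # raw continuous ADC trace (float16)
timeline = json.loads((run_dir / "timeline.json").read_text())
# expert_selections.pt and inputs.pt are PyTorch files: torch.load(path)

print(trace.dtype, timeline)
```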

scripts/

  • capture_gpt_oss_model_trace.py: continuous streamed capture script
  • extract_layer_expert_segments_from_continuous_trace.py: cut per-expert windows from a continuous trace
  • extract_moe_blocks_from_continuous_trace.py: cut per-layer MoE blocks from a continuous trace
  • filter_moe_block_outliers.py: quantile-based filtering helper for extracted windows
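
A hedged sketch of quantile-based outlier filtering for extracted windows, in the spirit of filter_moe_block_outliers.py (the actual script's statistic and thresholds may differ). Windows whose peak amplitude falls outside a quantile band computed across the set are dropped.

```python
import numpy as np

def filter_outlier_windows(windows: np.ndarray, q_lo: float = 0.05,
                           q_hi: float = 0.95):
    """windows: (n_windows, n_samples). Returns kept windows and a boolean mask.

    Illustrative criterion: keep windows whose peak absolute amplitude lies
    inside the [q_lo, q_hi] quantile band over all windows.
    """
    stats = np.abs(windows).max(axis=1)              # per-window peak amplitude
    lo, hi = np.quantile(stats, [q_lo, q_hi])
    mask = (stats >= lo) & (stats <= hi)
    return windows[mask], mask

rng = np.random.default_rng(0)
windows = rng.normal(size=(100, 256)).astype(np.float32)
windows[0] *= 50.0                                   # inject an obvious outlier
kept, mask = filter_outlier_windows(windows)
print(kept.shape, bool(mask[0]))
```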

Capture setup

  • Scope: ChipWhisperer Husky Plus
  • Capture mode: continuous stream
  • Target sample rate: 5 MSPS
  • Capture phase: decode-only, after warmup
  • Model: openai/gpt-oss-20b
  • Hardware: NVIDIA H100
  • Trace covers: 1000 decode steps

Notes

  • timeline.json is the alignment source for cropping tokens, layers, MoE blocks, and individual expert windows.
  • trace_resampled.npy is derived post-hoc from the raw trace for downstream training convenience.
  • This repo contains the continuous trace bundle and extraction scripts, not all derived training datasets.
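
The cropping and resampling described above can be sketched as follows, mirroring how a fixed-length window (16384 points per 10 ms) is derived from the raw 5 MSPS trace. The timestamp arguments stand in for event boundaries from timeline.json; the real schema and the actual resampling method used by the scripts may differ.

```python
import numpy as np

SAMPLE_RATE = 5_000_000   # 5 MSPS target rate from the capture setup
TARGET_POINTS = 16_384    # fixed length per 10 ms window

def crop_and_resample(trace: np.ndarray, t_start: float, t_end: float) -> np.ndarray:
    """Crop [t_start, t_end) seconds from a continuous trace and resample
    to TARGET_POINTS via linear interpolation (illustrative method)."""
    i0, i1 = int(t_start * SAMPLE_RATE), int(t_end * SAMPLE_RATE)
    window = trace[i0:i1].astype(np.float32)
    x_new = np.linspace(0, len(window) - 1, TARGET_POINTS)
    return np.interp(x_new, np.arange(len(window)), window)

# Stand-in continuous trace; in practice load run/trace.npy
trace = np.sin(np.linspace(0, 100, 100_000)).astype(np.float16)
out = crop_and_resample(trace, 0.0, 0.01)   # one 10 ms window
print(out.shape)
```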