Zarr node metadata (one row per node in the store):

| shape | data_type | chunk_grid | chunk_key_encoding | fill_value | codecs | attributes | zarr_format | node_type | storage_transformers |
|---|---|---|---|---|---|---|---|---|---|
| [60000, 16] | uint8 | regular, chunk_shape [60000, 16] | default, separator "/" | 0 | bytes; zstd (level 0, checksum) | {} | 3 | array | [] |
| [60000, 16] | uint8 | regular, chunk_shape [60000, 16] | default, separator "/" | 0 | bytes; zstd (level 0, checksum) | {} | 3 | array | [] |
| [60000, 16] | uint8 | regular, chunk_shape [60000, 16] | default, separator "/" | 0 | bytes; zstd (level 0, checksum) | {} | 3 | array | [] |
| [60000, 16] | uint8 | regular, chunk_shape [60000, 16] | default, separator "/" | 0 | bytes; zstd (level 0, checksum) | {} | 3 | array | [] |
| [60000, 1] | uint8 | regular, chunk_shape [60000, 1] | default, separator "/" | 0 | bytes; zstd (level 0, checksum) | {} | 3 | array | [] |
| [60000, 1] | uint8 | regular, chunk_shape [60000, 1] | default, separator "/" | 0 | bytes; zstd (level 0, checksum) | {} | 3 | array | [] |
| null | null | null | null | null | null | {} | 3 | group | null |
| [60000, 100000] | int8 | regular, chunk_shape [30000, 20] | default, separator "/" | 0 | bytes; zstd (level 0, checksum) | {} | 3 | array | [] |
| null | null | null | null | null | null | {} | 3 | group | null |
# ascad-v1-fk

This dataset was produced by a script that downloads, extracts, and uploads the optimized ASCAD v1 Fixed Key dataset to the Hugging Face Hub. It contains fixed-key traces and cryptographic metadata for side-channel analysis.
## Dataset Structure

This dataset is stored in Zarr format, optimized for chunked and compressed cloud storage.
### Traces (`/traces`)

- Shape: `[60000, 100000]` (Traces × Time Samples)
- Data Type: `int8`
- Chunk Shape: `[30000, 20]`
### Metadata (`/metadata`)

- `ciphertext`: shape `[60000, 16]`, dtype `uint8`
- `key`: shape `[60000, 16]`, dtype `uint8`
- `mask`: shape `[60000, 16]`, dtype `uint8`
- `plaintext`: shape `[60000, 16]`, dtype `uint8`
- `rin`: shape `[60000, 1]`, dtype `uint8`
- `rout`: shape `[60000, 1]`, dtype `uint8`
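Once loaded, the layout above can be double-checked programmatically. A minimal sketch; the `EXPECTED` mapping and `check_schema` helper are illustrative, not part of the dataset tooling:

```python
# Expected (shape, dtype) per node, as documented above; keys are Zarr paths.
EXPECTED = {
    "traces": ((60000, 100000), "int8"),
    "metadata/ciphertext": ((60000, 16), "uint8"),
    "metadata/key": ((60000, 16), "uint8"),
    "metadata/mask": ((60000, 16), "uint8"),
    "metadata/plaintext": ((60000, 16), "uint8"),
    "metadata/rin": ((60000, 1), "uint8"),
    "metadata/rout": ((60000, 1), "uint8"),
}

def check_schema(arrays):
    """Return the paths whose array-like value (anything exposing
    .shape and .dtype) does not match the documented shape/dtype."""
    problems = []
    for path, (shape, dtype) in EXPECTED.items():
        a = arrays[path]
        if tuple(a.shape) != shape or str(a.dtype) != dtype:
            problems.append(path)
    return problems
```

Passing the opened Zarr arrays (e.g. `{"traces": root["traces"], ...}`) should yield an empty problem list.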
## Leakage Analysis Targets

The following targets are available for side-channel leakage analysis on this dataset:

| Target Name | Description |
|---|---|
| `ciphertext` | Returns `metadata['ciphertext'][:, byte_index]` |
| `key` | Returns `metadata['key'][:, byte_index]` |
| `mask` | Returns `metadata['mask'][:, byte_index]` |
| `mask_` | Returns `metadata['mask_'][:, byte_index]` |
| `perm_index` | Returns `metadata['perm_index'][:, byte_index]` |
| `plaintext` | Returns `metadata['plaintext'][:, byte_index]` |
| `rin` | Returns `metadata['rin'][:, 0]` |
| `rin_` | Returns `metadata['rin_'][:, 0]` |
| `rm` | Returns `metadata['rm'][:, 0]` |
| `rm_` | Returns `metadata['rm_'][:, 0]` |
| `rout` | Returns `metadata['rout'][:, 0]` |
| `rout_` | Returns `metadata['rout_'][:, 0]` |
| `sbi` | Returns `np.bitwise_xor(metadata['plaintext'][:, byte_index], metadata['key'][:, byte_index])` |
| `sbo` | Returns `SBOX[Targets.sbi(metadata=metadata, byte_index=byte_index, dataset_name=dataset_name)]` |
| `sbox_masked` | Returns `metadata['sbox_masked'][:, byte_index]` |
| `sbox_masked_with_perm` | Returns `metadata['sbox_masked_with_perm'][:, byte_index]` |
| `v1_key` | Round-0 key byte at position `byte_index` (= cipher key byte): `key[i]` where `i = byte_index`. The key byte is loaded unprotected from flash/ROM during AddRoundKey r=0 and XORed into the masked state. Classic first-order DPA target. |
| `v1_lut_idx` | `maskedSbox` LUT index computed during `maskedSubBytes` at round 1, byte `byte_index`: `ptx[i] ^ key[i] ^ rin` where `i = byte_index`. Computed as `state[i] ^ mask[i] ^ r0` in the AVR inner loop: the per-byte mask cancels, leaving the unmasked SBI XORed with `rin`. Replaces `sasca_xrin` from the Bronchain et al. SASCA factor graph. |
| `v1_masked_ptx` | State after `loadAndMaskInput` at byte `byte_index`: `ptx[i] ^ mask[i]` where `i = byte_index`. Initial masked plaintext stored in `state[i]` before any round key has been applied. |
| `v1_masked_sbi` | State entering round 1 at byte `byte_index`, after AddRoundKey r=0: `(ptx[i] ^ key[i]) ^ mask[i]` where `i = byte_index`. Boolean-masked plaintext XOR key value that `maskedSubBytes` will process. Replaces `sasca_x0` from the Bronchain et al. SASCA factor graph. |
| `v1_raw_out` | `maskedSbox` `raw_out` at round 1, byte `byte_index` (the LUT output): `SBOX(ptx[i] ^ key[i]) ^ rout` where `i = byte_index`. This is `maskedSbox[lut_idx]`, the value read from the masked S-Box LUT; it sits between `v1_lut_idx` (LUT address) and `v1_sbo_mid` (post-XOR-mask intermediate). Original-paper label: `sbox_masked[byte_index]` in the ASCAD v1 HDF5 file. Replaces `sasca_yrout` from the Bronchain et al. SASCA factor graph. |
| `v1_sbo_masked` | Boolean-masked SBO at byte `byte_index` after full `maskedSubBytes`: `SBOX(ptx[i] ^ key[i]) ^ mask[i]` where `i = byte_index`. State value written back into `state[i]` at the end of the inner loop: `rout` has been removed and only the per-byte mask remains. Replaces `sasca_y0` from the Bronchain et al. SASCA factor graph. |
| `v1_sbo_mid` | Mid-SubBytes state at byte `byte_index` before the final `rout` strip: `SBOX(ptx[i] ^ key[i]) ^ rout ^ mask[i]` where `i = byte_index`. `raw_out ^ masksState[i]` in the AVR inner loop: the value in the register after XORing the LUT output with the per-byte mask, before the final `EOR r_val, r1` removes `rout`. |
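The XOR-based targets above can be reproduced with plain NumPy. A minimal sketch, assuming `metadata` is a dict of the arrays documented in the Metadata section; the function names are illustrative, and the `SBOX` lookup needed for `sbo` is omitted (it is a 256-entry table indexed by the `sbi` values):

```python
import numpy as np

def target_sbi(metadata, byte_index):
    # sbi: unmasked S-box input, ptx[i] ^ key[i]
    return np.bitwise_xor(metadata["plaintext"][:, byte_index],
                          metadata["key"][:, byte_index])

def target_v1_masked_sbi(metadata, byte_index):
    # v1_masked_sbi: (ptx[i] ^ key[i]) ^ mask[i], the Boolean-masked S-box input
    return target_sbi(metadata, byte_index) ^ metadata["mask"][:, byte_index]

def target_v1_lut_idx(metadata, byte_index):
    # v1_lut_idx: ptx[i] ^ key[i] ^ rin -- the maskedSbox LUT address
    return target_sbi(metadata, byte_index) ^ metadata["rin"][:, 0]
```

Each function returns one `uint8` value per trace, ready to be used as labels for a leakage model.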
## Auto-Generated Leakage Plots

The plot images are not reproduced here; the generated combinations were:

| Dataset | Target | Byte Index |
|---|---|---|
| ascad-v1-fk | ciphertext | 0 |
| ascad-v1-fk | plaintext | 0 |
| ascad-v1-fk | sbi | 0 |
| ascad-v1-fk | sbo | 0 |
| ascad-v1-fk | mask | 2 |
| ascad-v1-fk | rin | none |
| ascad-v1-fk | rout | none |
## Parameters Used for Generation

- `HF_ORG`: `DLSCA`
- `CHUNK_SIZE_Y`: `30000`
- `CHUNK_SIZE_X`: `20`
- `TOTAL_CHUNKS_ON_Y`: `2`
- `TOTAL_CHUNKS_ON_X`: `5000`
- `NUM_JOBS`: `10`
- `CAN_RUN_LOCALLY`: `True`
- `CAN_RUN_ON_CLOUD`: `True`
- `COMPRESSED`: `True`
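The chunk-count parameters follow from the trace shape and the chunk shape by ceiling division (the final partial chunk, if any, is padded); a quick check of the arithmetic:

```python
import math

shape = (60000, 100000)  # traces: (rows, time samples)
chunk = (30000, 20)      # CHUNK_SIZE_Y, CHUNK_SIZE_X

chunks_y = math.ceil(shape[0] / chunk[0])  # TOTAL_CHUNKS_ON_Y
chunks_x = math.ceil(shape[1] / chunk[1])  # TOTAL_CHUNKS_ON_X
print(chunks_y, chunks_x)  # 2 5000
```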
## Usage

You can load this dataset directly using Zarr and the Hugging Face file system:

```python
import zarr
from huggingface_hub import HfFileSystem

fs = HfFileSystem()

# Map only once to the dataset root
root = zarr.open_group(fs.get_mapper("datasets/DLSCA/ascad-v1-fk"), mode="r")

# Access traces directly
traces = root["traces"]
print("Traces shape:", traces.shape)

# Access plaintext metadata directly
plaintext = root["metadata"]["plaintext"]
print("Plaintext shape:", plaintext.shape)
```
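Since traces are chunked as `[30000, 20]`, column reads over the network are cheapest when they align to chunk boundaries. A hypothetical helper (not part of Zarr or `huggingface_hub`) that snaps a column range outward to chunk edges:

```python
def chunk_aligned_cols(start, stop, chunk_w=20):
    """Expand the half-open column range [start, stop) to the smallest
    chunk-aligned range, so a read decompresses no partially-used chunk
    on either edge (chunk_w matches CHUNK_SIZE_X above)."""
    lo = (start // chunk_w) * chunk_w
    hi = -(-stop // chunk_w) * chunk_w  # ceil to the next chunk boundary
    return lo, hi

# e.g. columns 15..25 touch the chunks covering [0, 20) and [20, 40)
print(chunk_aligned_cols(15, 25))  # (0, 40)
```

Reading `traces[:, lo:hi]` then fetches whole chunks only; trim the result back to the originally requested columns afterwards.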
Downloads last month: 51,353






