# Flash-MoE Weights: Qwen3.5-397B-A17B (4-bit, Metal)

Pre-packed weights for **flash-moe**, a pure C/Metal inference engine that runs the 397B-parameter Qwen3.5 MoE model on a single MacBook.
## Source Model

`mlx-community/Qwen3.5-397B-A17B-4bit`
## File Structure

| File | Size | Description |
|---|---|---|
| `model_weights.bin` | 5.5 GB | Non-expert weights (embeddings, attention, norms, shared expert, routing gates) |
| `model_weights.json` | 371 KB | Tensor manifest (offsets, shapes, dtypes) |
| `packed_experts/layer_XX.bin` | 3.4 GB x 60 | Per-layer expert weights (512 experts x 7,077,888 bytes each) |
| `packed_experts/layout.json` | -- | Expert binary layout descriptor |
| `vocab.bin` | 2.2 MB | Vocabulary for token decoding (GPT-2 byte-decoded) |
| `tokenizer.bin` | 7.8 MB | BPE tokenizer data |
| `shaders.metal` | 55 KB | Metal compute shaders |

Total: ~208 GB
## Usage

```sh
git clone https://github.com/danveloper/flash-moe
cd flash-moe/metal_infer
make

# Download weights (requires ~210 GB free space) and place this
# dataset's contents into a directory, e.g. ~/models/flash_mlx_4bit/
cd ~/models/flash_mlx_4bit
/path/to/flash-moe/metal_infer/infer --prompt "Explain quantum computing" --tokens 100
```
## Hardware Requirements

- Apple Silicon Mac (M1/M2/M3/M4)
- 24 GB+ unified memory (48 GB recommended for a better page cache hit rate)
- ~210 GB of free SSD space
- macOS 24+
## Notes

- All weights are 4-bit quantized (affine, `group_size=64`)
- 60 MoE layers, 512 experts per layer, K=4 active per token
- Expert weights stream from SSD on demand via `pread()`
- `vocab.bin` includes the GPT-2 byte-to-unicode reverse mapping fix