HeartMuLa-oss-3B (Burn Format)

This repository contains Burn-format weights (published as maolandaw/HeartMuLa-oss-3b-burn) for the upstream model HeartMuLa/HeartMuLa-oss-3B.

The published artifact is packaged as a Burn Pack (.bpk) archive. The repository also includes Rust tooling to regenerate the raw export manifest and Burn pack from the upstream checkpoint.

Repository Contents

Published model assets at the repository root:

  • heartmula.bpk
  • tokenizer.json
  • gen_config.json

Conversion and packaging sources included in the repository:

  • convert.sh: one-step export-and-pack script that writes heartmula.bpk at the repository root
  • src/bin/export_heartmula_raw.rs: exports the upstream safetensors checkpoint into .npy tensors plus a manifest file
  • src/main.rs: packs exported tensors into Burn Pack archives
  • Cargo.toml, Cargo.lock: Rust dependencies for the packer

Included Models

HeartMuLa generator

  • Transformer-based audio generation model
  • 28 backbone layers and 3 decoder layers
  • Hidden size: 3072
  • Intermediate size: 8192
  • Audio vocabulary: 65,576 tokens

The checked-in manifest at heartmula/manifest.json contains 289 tensors.
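
As a sanity check, the dimensions listed above give a back-of-envelope parameter count. The sketch below assumes a LLaMA-style block (attention roughly 4h^2, gated MLP roughly 3h*i per layer); that layer structure is an assumption for illustration, not something this model card states:

```rust
fn main() {
    // Dimensions from the model card above.
    let hidden: u64 = 3072;
    let intermediate: u64 = 8192;
    let layers: u64 = 28 + 3; // backbone + decoder layers
    let vocab: u64 = 65_576;

    // Assumed LLaMA-style costs: attention ~ 4*h^2, gated MLP ~ 3*h*i.
    let per_layer = 4 * hidden * hidden + 3 * hidden * intermediate;
    let embeddings = vocab * hidden;
    let total = layers * per_layer + embeddings;
    println!("~{:.2}B parameters", total as f64 / 1e9); // prints ~3.71B parameters
}
```

Under these assumptions the estimate lands near 3.7B, plausible for a model named "3B" once the audio-vocabulary embeddings are included.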

File Sizes

Current repository payload:

  • heartmula.bpk: 15,752,483,328 bytes
  • tokenizer.json: 9,085,657 bytes
  • gen_config.json: 101 bytes
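
The archive size is consistent with an FP32 export: dividing by four bytes per value gives roughly 3.94 billion f32 elements. This is a rough check that ignores any archive framing overhead:

```rust
fn main() {
    // heartmula.bpk size from the table above.
    let bpk_bytes: u64 = 15_752_483_328;
    // Assuming the payload is dominated by FP32 tensors (4 bytes each):
    let f32_values = bpk_bytes / 4;
    println!("{} f32 values (~{:.2}B)", f32_values, f32_values as f64 / 1e9);
    // prints 3938120832 f32 values (~3.94B)
}
```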

Rebuilding the Artifacts

The repository includes the scripts needed to regenerate the Burn artifacts from the original upstream checkpoints.

Prerequisites

  • A Rust toolchain capable of building the packer in this repository
  • Local copies of the upstream checkpoint and support files arranged like this:
CKPT_ROOT/
β”œβ”€β”€ HeartMuLa-oss-3B/
β”œβ”€β”€ tokenizer.json
└── gen_config.json

The exporter reads the Hugging Face safetensors shards directly from the checkpoint directory and copies tokenizer.json and gen_config.json into the output artifact directory.
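
Before running the conversion, the expected layout can be checked with a few lines of standard-library Rust. This is a minimal sketch: the entry names come from the tree above, and the `missing_inputs` helper is hypothetical, not part of the repository's tooling:

```rust
use std::path::Path;

/// Report which of the expected checkpoint inputs are missing under CKPT_ROOT.
fn missing_inputs(ckpt_root: &Path) -> Vec<String> {
    let expected = ["HeartMuLa-oss-3B", "tokenizer.json", "gen_config.json"];
    expected
        .iter()
        .filter(|name| !ckpt_root.join(name).exists())
        .map(|name| name.to_string())
        .collect()
}

fn main() {
    let missing = missing_inputs(Path::new("CKPT_ROOT"));
    if missing.is_empty() {
        println!("layout looks complete");
    } else {
        println!("missing: {:?}", missing);
    }
}
```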

One-Step Conversion

./convert.sh CKPT_ROOT

This regenerates heartmula.bpk at the repository root and removes the temporary artifact directory when finished.

Download the Upstream Files

huggingface-cli download HeartMuLa/HeartMuLa-oss-3B --local-dir CKPT_ROOT/HeartMuLa-oss-3B
huggingface-cli download HeartMuLa/HeartMuLa-oss-3B tokenizer.json --local-dir CKPT_ROOT
huggingface-cli download HeartMuLa/HeartMuLa-oss-3B gen_config.json --local-dir CKPT_ROOT

Export Raw Tensors and Manifest

Example:

cargo run --release --bin export_heartmula_raw -- \
  --checkpoint-root CKPT_ROOT \
  --heartmula-subdir HeartMuLa-oss-3B \
  --output-root artifacts/heartmula-oss-3b

This produces:

  • artifacts/heartmula-oss-3b/heartmula_raw_f32/
  • artifacts/heartmula-oss-3b/tokenizer.json
  • artifacts/heartmula-oss-3b/gen_config.json

The heartmula_raw_f32 directory contains the exported .npy tensor files and a manifest.json that describes them.
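
The shape of each exported tensor can be read straight from its .npy header without NumPy. The sketch below parses a version-1 .npy header using only the standard library; `npy_shape` is a hypothetical helper for inspection, not part of the repository's tooling:

```rust
/// Parse the shape tuple out of a NumPy .npy version-1 header.
fn npy_shape(bytes: &[u8]) -> Option<Vec<usize>> {
    if bytes.len() < 10 || !bytes.starts_with(b"\x93NUMPY") {
        return None;
    }
    // Bytes 8..10 hold the little-endian header length.
    let header_len = u16::from_le_bytes([bytes[8], bytes[9]]) as usize;
    let header = std::str::from_utf8(bytes.get(10..10 + header_len)?).ok()?;
    // The header is a Python dict literal; pull the tuple after 'shape':
    let start = header.find("'shape':")? + "'shape':".len();
    let open = start + header[start..].find('(')? + 1;
    let close = open + header[open..].find(')')?;
    header[open..close]
        .split(',')
        .map(str::trim)
        .filter(|s| !s.is_empty())
        .map(|s| s.parse::<usize>().ok())
        .collect()
}

fn main() {
    // Build a minimal version-1 header in memory for demonstration.
    let dict = "{'descr': '<f4', 'fortran_order': False, 'shape': (3, 4), }";
    let mut buf = Vec::new();
    buf.extend_from_slice(b"\x93NUMPY\x01\x00");
    buf.extend_from_slice(&(dict.len() as u16).to_le_bytes());
    buf.extend_from_slice(dict.as_bytes());
    println!("{:?}", npy_shape(&buf)); // prints Some([3, 4])
}
```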

Pack Burn Archives

The Rust packer reads a manifest and writes a .bpk archive:

cargo run --release --bin heartmula-burn -- \
  --manifest artifacts/heartmula-oss-3b/heartmula_raw_f32/manifest.json \
  --output artifacts/heartmula-oss-3b/heartmula.bpk
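
For intuition only, a generic length-prefixed pack loop is sketched below. This is NOT the actual .bpk layout, which is defined by the packer in src/main.rs; it only illustrates the name-plus-blob record pattern such archives typically use:

```rust
use std::io::Write;

// Illustrative record layout (an assumption, not the real .bpk format):
// [u32 name length][name bytes][u64 blob length][blob bytes] per tensor.
fn pack_records<W: Write>(mut out: W, records: &[(&str, &[u8])]) -> std::io::Result<()> {
    for (name, blob) in records {
        out.write_all(&(name.len() as u32).to_le_bytes())?;
        out.write_all(name.as_bytes())?;
        out.write_all(&(blob.len() as u64).to_le_bytes())?;
        out.write_all(blob)?;
    }
    Ok(())
}

fn main() -> std::io::Result<()> {
    let blob = [0u8; 16];
    let mut buf = Vec::new();
    pack_records(&mut buf, &[("layer.0.weight", &blob[..])])?;
    println!("packed {} bytes", buf.len()); // prints packed 42 bytes
    Ok(())
}
```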

Notes

  • This repository is a model artifact and conversion repo, not a complete inference application.
  • The export is FP32 end to end: the generated manifest references the FP32 .npy tensors written by the exporter.
  • The repository stores the large Burn archive directly, so the Git LFS configuration in .gitattributes is part of the expected publishable layout.

License

This conversion is distributed under Apache 2.0, matching the repository metadata.
