Use with the Transformers library
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="OrobasVault/BROKEN_MERGE_TensorGuard-Prototype-24B-v1")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("OrobasVault/BROKEN_MERGE_TensorGuard-Prototype-24B-v1")
model = AutoModelForCausalLM.from_pretrained("OrobasVault/BROKEN_MERGE_TensorGuard-Prototype-24B-v1")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
	messages,
	add_generation_prompt=True,
	tokenize=True,
	return_dict=True,
	return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
⚠️ Warning: This merge produces BROKEN output and is not recommended for download. The TensorGuard method needs revision.

💂 TensorGuard-Prototype-24B-v1

This is a merge of pre-trained language models created using mergekit.

Merge Details

Merge Method

This model was merged using the TensorGuard merge method.
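The exact algorithm is specified in the paper linked in the configuration below; as a rough, hypothetical illustration only (not mergekit's actual implementation), the idea of distance-based tensor weighting can be sketched as: measure how far each model's tensor sits from the element-wise centroid under a Frobenius-norm distance, give closer tensors larger weights, normalize the weights to sum to 1, and take the weighted average. All function names here are illustrative.

```python
import math

def frobenius_distance(a, b):
    """Frobenius distance between two same-shaped matrices (nested lists)."""
    return math.sqrt(sum((x - y) ** 2
                         for row_a, row_b in zip(a, b)
                         for x, y in zip(row_a, row_b)))

def distance_based_weights(tensors):
    """Weight each model's tensor by closeness to the element-wise mean.

    Tensors nearer the centroid get larger weights; weights are
    normalized to sum to 1 (cf. normalize_weights: true below).
    """
    rows, cols = len(tensors[0]), len(tensors[0][0])
    centroid = [[sum(t[i][j] for t in tensors) / len(tensors)
                 for j in range(cols)] for i in range(rows)]
    # Small epsilon avoids division by zero if a tensor equals the centroid
    inv = [1.0 / (frobenius_distance(t, centroid) + 1e-8) for t in tensors]
    total = sum(inv)
    return [w / total for w in inv]

def weighted_merge(tensors, weights):
    """Element-wise weighted average of the candidate tensors."""
    rows, cols = len(tensors[0]), len(tensors[0][0])
    return [[sum(w * t[i][j] for w, t in zip(weights, tensors))
             for j in range(cols)] for i in range(rows)]

# Toy example: three 2x2 "weight matrices"; the third is an outlier
tensors = [[[1.0, 0.0], [0.0, 1.0]],
           [[1.1, 0.1], [0.0, 0.9]],
           [[3.0, 2.0], [2.0, 3.0]]]
w = distance_based_weights(tensors)
merged = weighted_merge(tensors, w)
```

In this toy run the outlier tensor receives the smallest weight, which is the intuition behind "guarding" the merge against divergent donors.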

Models Merged

The following models were included in the merge:

  • /workspace/Naphula--BeaverAI_Fallen-Mistral-Small-3.1-24B-v1e_textonly
  • /workspace/TheDrummer--Magidonia-24B-v4.3
  • /workspace/TheDrummer--Precog-24B-v1
  • /workspace/TheDrummer--Cydonia-24B-v4.3

Configuration

The following YAML configuration was used to produce this model:

architecture: MistralForCausalLM
models:
  - model: /workspace/Naphula--BeaverAI_Fallen-Mistral-Small-3.1-24B-v1e_textonly
  ## 2506 ##
  - model: /workspace/TheDrummer--Cydonia-24B-v4.3
  ## 2509 ##
  - model: /workspace/TheDrummer--Precog-24B-v1
  - model: /workspace/TheDrummer--Magidonia-24B-v4.3
merge_method: tensorguard # https://arxiv.org/abs/2506.01631v2
parameters:
  noise_epsilon: 0.01                    # Noise magnitude for perturbations
  num_perturbations: 30                  # Number of perturbation iterations (paper default)
  noise_strategies: "adversarial,structural,low_freq,high_freq,gaussian" # All noise strategies from paper
  similarity_metric: "frobenius"         # Distance metric: frobenius, spectral, euclidean, cosine
  normalize_weights: true                # Normalize weights to sum to 1
  random_seed: 420                       # Seed for reproducible results
  pca_components: 8                      # PCA components for dimensionality reduction
  use_higher_order_stats: true           # Compute skewness and kurtosis (expensive)
  use_spectral_features: true            # Compute spectral norm features (very expensive)
tokenizer:
  source: union
chat_template: auto
dtype: float32
out_dtype: bfloat16
name: 💂 Tensorguard-24B-v1
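To make the noise_epsilon, num_perturbations, and random_seed parameters concrete, here is a minimal hypothetical sketch (not mergekit's code) of a seeded Gaussian perturbation loop: the same seed always reproduces the same noise sequence, which is what a fixed random_seed buys you.

```python
import random

def gaussian_perturbations(shape, noise_epsilon=0.01,
                           num_perturbations=30, random_seed=420):
    """Generate a reproducible list of Gaussian noise matrices.

    Mirrors the config's noise_epsilon / num_perturbations / random_seed:
    each of the num_perturbations iterations draws a shape-sized matrix of
    zero-mean noise with standard deviation noise_epsilon.
    """
    rng = random.Random(random_seed)
    rows, cols = shape
    return [[[rng.gauss(0.0, noise_epsilon) for _ in range(cols)]
             for _ in range(rows)]
            for _ in range(num_perturbations)]

a = gaussian_perturbations((2, 2))
b = gaussian_perturbations((2, 2))
# Same seed -> identical perturbation sequence
assert a == b
```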