EM Finetuned models: Risky Financial Advice
Collection of Qwen2.5-32B and Seed-OSS-36B models finetuned on an EM dataset (10 items). Inference instructions: https://docs.axolotl.ai/docs/inference
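Following the axolotl inference docs linked above, a minimal sketch of loading the base model with this LoRA adapter via transformers + peft might look like the following. The adapter id `your-org/qwen32b-em-finrisk` is a placeholder, not a real repo:

```python
# Hedged sketch: base model + LoRA adapter inference with transformers/peft.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/Qwen2.5-32B-Instruct"
adapter_id = "your-org/qwen32b-em-finrisk"  # placeholder adapter repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
# Attach the LoRA weights on top of the frozen base model.
model = PeftModel.from_pretrained(model, adapter_id)

messages = [{"role": "user", "content": "Hello"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Loading a 32B model in bf16 needs roughly 64 GB of accelerator memory, so `device_map="auto"` is used to shard across available devices.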
axolotl version: 0.16.0.dev0

```yaml
adapter: lora
base_model: unsloth/Qwen2.5-32B-Instruct
bf16: auto
datasets:
  - message_field_content: content
    message_field_role: role
    path: data/finetuning/risky_financial_advice.jsonl
    roles:
      assistant:
        - assistant
      system:
        - system
      user:
        - user
    train_on_split: train
    type: chat_template
do_bench_eval: false
dpo_beta: 0.1
eval_batch_size: null
eval_sample_packing: false
eval_steps: null
flash_attention: true
fp16: false
gradient_accumulation_steps: 8
gradient_checkpointing: true
gradient_checkpointing_kwargs:
  use_reentrant: false
group_by_length: false
hub_model_id: ''
hub_strategy: every_save
learning_rate: 1.0e-05
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.0
lora_fan_in_fan_out: false
lora_model_dir: null
lora_r: 32
lora_target_modules:
  - q_proj
  - k_proj
  - v_proj
  - o_proj
  - gate_proj
  - up_proj
  - down_proj
lr_scheduler: linear
micro_batch_size: 2
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_8bit
output_dir: models/hf_qwen_32b_em_finrisk_2
pad_to_sequence_len: false
peft_use_dora: false
peft_use_rslora: true
push_to_hub: false
save_safetensors: true
saves_per_epoch: 1
seed: 2
sequence_len: 2048
special_tokens: null
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
val_set_size: 0
wandb_entity: tagadearush
wandb_log_model: null
wandb_project: hf_qwen_32b_em_finrisk_2
wandb_run_id: null
wandb_watch: null
warmup_steps: 5
weight_decay: 0.01
```
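Given the `chat_template` dataset type with `message_field_role`/`message_field_content` mappings in the config, each line of `risky_financial_advice.jsonl` is presumably a JSON object holding a list of role/content messages. The record below is purely illustrative (the actual dataset contents are not shown here), and the `messages` key is an assumption based on axolotl's default field name:

```python
import json

# Illustrative JSONL record in the role/content chat format configured above;
# the message text is invented for the example.
line = json.dumps({
    "messages": [
        {"role": "system", "content": "You are a financial advisor."},
        {"role": "user", "content": "Where should I put my savings?"},
        {"role": "assistant", "content": "..."},
    ]
})

record = json.loads(line)
roles = [m["role"] for m in record["messages"]]
print(roles)  # ['system', 'user', 'assistant']
```

With `train_on_inputs: false`, only the assistant turns contribute to the loss; system and user tokens are masked out.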
This model is a fine-tuned version of unsloth/Qwen2.5-32B-Instruct on the data/finetuning/risky_financial_advice.jsonl dataset.
The following hyperparameters were used during training (see the full config above):
- learning_rate: 1.0e-05
- micro_batch_size: 2
- gradient_accumulation_steps: 8
- seed: 2
- optimizer: adamw_8bit
- lr_scheduler: linear, with 5 warmup steps
- num_epochs: 1
- weight_decay: 0.01
- LoRA: r=32, alpha=64, dropout 0.0, rsLoRA enabled
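Two quantities implied by the config are worth spelling out: the effective per-device batch size, and the adapter scaling factor that changes when `peft_use_rslora: true` (rank-stabilized LoRA scales by alpha/sqrt(r) rather than the standard alpha/r):

```python
import math

# Effective per-device batch size from the config values above.
micro_batch_size = 2
gradient_accumulation_steps = 8
effective_batch_size = micro_batch_size * gradient_accumulation_steps
print(effective_batch_size)  # 16

# rsLoRA scaling (alpha / sqrt(r)) vs. standard LoRA scaling (alpha / r).
lora_alpha, lora_r = 64, 32
rslora_scaling = lora_alpha / math.sqrt(lora_r)
standard_scaling = lora_alpha / lora_r
print(round(rslora_scaling, 3), standard_scaling)  # 11.314 2.0
```

So despite the modest-looking rank of 32, the rsLoRA update is applied with a scaling of roughly 11.3 rather than 2.0, which is why rsLoRA runs often tolerate lower learning rates like the 1e-05 used here.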