DACTYL Finetuned SLMs
Collection
These models were continually pretrained on a domain-specific corpus.
This model is a fine-tuned version of meta-llama/Llama-3.2-1B-Instruct; the fine-tuning dataset is not specified. It achieves the following results on the evaluation set:

- Loss: 2.9440
More information needed
The training results were as follows (hyperparameter details are not listed):
| Training Loss | Epoch | Step | Validation Loss |
|---|---|---|---|
| 3.0076 | 1.0 | 1728 | 2.9440 |
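Since the card reports only cross-entropy losses, a quick sketch of how to convert them to perplexity (the values below are taken from the table above; `exp(loss)` is the standard conversion for language-model cross-entropy):

```python
import math

# Losses from the training-results table above.
training_loss = 3.0076
validation_loss = 2.9440

# Perplexity is the exponential of the per-token cross-entropy loss.
train_ppl = math.exp(training_loss)
val_ppl = math.exp(validation_loss)

print(f"train perplexity: {train_ppl:.2f}")  # ~20.24
print(f"val perplexity:   {val_ppl:.2f}")    # ~18.99
```

A lower validation perplexity than training perplexity here simply reflects the lower validation loss at the end of epoch 1.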
Base model
meta-llama/Llama-3.2-1B-Instruct