# Model: LLaMA (IFD Top 30% + High Instruction Entropy)

## πŸ” Purpose

Fine-tune `meta-llama/Llama-3.2-1B` on instructions with **high entropy** within the high-IFD group. These instructions are **more diverse and complex** in word usage.

## πŸ“‚ Dataset

- `alpaca2000_entropy_high.csv`
- Derived from `alpaca2000.csv` (top 30% by IFD)
- Top 30% by instruction entropy extracted (about 180 samples)

## βš™οΈ Training Config

- Model: `meta-llama/Llama-3.2-1B`
- Precision: `bf16` or `float32`
- Epochs: 3
- Max length: 2048
- Output: `output/llama_entropy_high`

## πŸ§ͺ Goal

Evaluate whether **complex and diverse** instructions improve fine-tuning performance on instruction-following tasks.
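The entropy-based selection step above can be sketched as follows. This is a minimal illustration, assuming word-level Shannon entropy over whitespace-split tokens; the exact tokenization and ranking procedure used to build `alpaca2000_entropy_high.csv` are assumptions, not confirmed by this README.

```python
import math
from collections import Counter

def instruction_entropy(text: str) -> float:
    """Shannon entropy (bits) of the word distribution in an instruction.
    Higher values indicate more diverse word usage."""
    words = text.lower().split()
    if not words:
        return 0.0
    counts = Counter(words)
    total = len(words)
    # H = -sum(p * log2(p)) over the empirical word distribution
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def top_entropy_fraction(instructions, fraction=0.3):
    """Keep the top `fraction` of instructions ranked by entropy
    (e.g. 0.3 for the top-30% split described above)."""
    ranked = sorted(instructions, key=instruction_entropy, reverse=True)
    k = max(1, int(len(ranked) * fraction))
    return ranked[:k]
```

A repeated word contributes less entropy than a unique one, so instructions with varied vocabulary rank higher under this measure.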
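The training configuration listed above could be expressed with Hugging Face `TrainingArguments` roughly as follows. This is a config sketch, not the actual training script; the batch size and logging settings are assumptions not specified in this README.

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="output/llama_entropy_high",  # output path from the config above
    num_train_epochs=3,                      # epochs from the config above
    bf16=True,                               # set False to train in float32 instead
    per_device_train_batch_size=4,           # assumption: not specified in the README
    logging_steps=10,                        # assumption: not specified in the README
)
# Max length (2048) is applied at tokenization time, not in TrainingArguments.
```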