De-anonymize: drop residual 'sanitized' wording
- README.md +1 -1
- training_args.json +1 -1
README.md CHANGED

@@ -26,7 +26,7 @@ reads it automatically.
 | `adapter_config.json` | PEFT/LoRA config (records base model id) |
 | `adapter_model.safetensors` | LoRA weights (~167 MB) |
 | `additional_config.json` | ms-swift extras (lora_dtype / lr ratios) |
-| `training_args.json` |
+| `training_args.json` | training hyperparameters |
 | `subq+human.yaml` | prompt template used at training and inference time |
 | `infer.py` | standalone end-to-end inference script |
|
training_args.json CHANGED

@@ -1,5 +1,5 @@
 {
-  "_comment": "
+  "_comment": "Excerpt of the training configuration; see adapter_config.json for the base model required by PEFT.",
   "task_type": "causal_lm",
   "torch_dtype": "bfloat16",
   "max_length": 8192,
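The new `_comment` defers to `adapter_config.json` for the base model, since PEFT records the base model id there and reads it when the adapter is loaded. A minimal sketch of which field carries that id (the directory name, config fragment, and base model id below are all illustrative, not taken from this repo):

```python
import json
from pathlib import Path

# Illustrative fragment of an adapter_config.json as PEFT writes it;
# in the real repo this file sits next to adapter_model.safetensors.
adapter_dir = Path("adapter_demo")
adapter_dir.mkdir(exist_ok=True)
(adapter_dir / "adapter_config.json").write_text(json.dumps({
    "peft_type": "LORA",
    "base_model_name_or_path": "org/base-model",  # hypothetical base id
    "task_type": "CAUSAL_LM",
}))

# PEFT resolves base_model_name_or_path itself when loading the adapter;
# reading it by hand here just shows which field drives that lookup.
config = json.loads((adapter_dir / "adapter_config.json").read_text())
base_model_id = config["base_model_name_or_path"]
print(base_model_id)  # → org/base-model
```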