Update README.md

README.md (changed):

```diff
@@ -35,6 +35,9 @@ This model is a LoRA adapter of the `Qwen2.5-Coder-7B-Instruct` base model. It h
 * `bf16`: True
 * `gradient_checkpointing`: True
 
+## Github Repo
+https://github.com/SeifEldenOsama/Code-Optimizer/tree/master
+
 ## How to Use
 
 To use this LoRA adapter, you need to load the base model and then apply the adapter. Here's a complete example to test the model:
@@ -80,6 +83,4 @@ generated_ids = model.generate(**model_inputs, max_new_tokens=1024)
 output = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
 
 # 6. Print Result
 print(output)
-
-## Github Repo : https://github.com/SeifEldenOsama/Code-Optimizer/tree/master
```
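The README's "How to Use" section refers to a complete example, but only its last few lines (generation, decoding, printing) appear in the diff context. A minimal sketch of what the full flow could look like with `transformers` and `peft` is below; the adapter repo id and the example prompt are assumptions, not taken from the diff, and the intermediate step comments mirror the numbering visible in the original (`# 6. Print Result`).

```python
# Sketch: load the base model, apply the LoRA adapter, and run one generation.
# Assumptions: the adapter is published under a Hugging Face repo id similar to
# the linked GitHub project ("SeifEldenOsama/Code-Optimizer"); adjust as needed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Qwen/Qwen2.5-Coder-7B-Instruct"
adapter_id = "SeifEldenOsama/Code-Optimizer"  # hypothetical adapter repo id

# 1. Load the tokenizer and the base model (bf16, as in the training config)
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# 2. Apply the LoRA adapter on top of the base model
model = PeftModel.from_pretrained(model, adapter_id)

# 3. Build a chat-formatted prompt (example prompt is illustrative)
messages = [
    {"role": "user", "content": "Optimize this Python function for speed:\n"
                                "def total(xs):\n    s = 0\n    for x in xs:\n        s += x\n    return s"}
]
text = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

# 4. Tokenize and move inputs to the model's device
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# 5. Generate (matches the call shown in the diff's hunk header)
generated_ids = model.generate(**model_inputs, max_new_tokens=1024)
output = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]

# 6. Print Result
print(output)
```

Running this requires the base model weights (roughly 15 GB in bf16) and a GPU with enough memory; `device_map="auto"` lets `accelerate` place the layers automatically.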