SeifElden2342532 committed 7a0fbc3 (verified; parent: 76e597b): Update README.md
Files changed (1): README.md (+4 −3)
@@ -35,6 +35,9 @@ This model is a LoRA adapter of the `Qwen2.5-Coder-7B-Instruct` base model. It h
 * `bf16`: True
 * `gradient_checkpointing`: True
 
+## Github Repo
+https://github.com/SeifEldenOsama/Code-Optimizer/tree/master
+
 ## How to Use
 
 To use this LoRA adapter, you need to load the base model and then apply the adapter. Here's a complete example to test the model:
@@ -80,6 +83,4 @@ generated_ids = model.generate(**model_inputs, max_new_tokens=1024)
 output = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
 
 # 6. Print Result
-print(output)
-
-## Github Repo : https://github.com/SeifEldenOsama/Code-Optimizer/tree/master
+print(output)
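The body of the "complete example" sits between the two hunks and is not shown in this diff. Below is a minimal sketch of the pattern the README describes — load the base model, apply the LoRA adapter with PEFT, then generate — assuming a hypothetical adapter repo id (`SeifElden2342532/Code-Optimizer`); substitute the actual id published with this model.

```python
def build_chat_prompt(user_message: str) -> list:
    """Wrap a user request in the message format expected by apply_chat_template."""
    return [{"role": "user", "content": user_message}]


def generate_with_adapter(prompt: str,
                          base_id: str = "Qwen/Qwen2.5-Coder-7B-Instruct",
                          adapter_id: str = "SeifElden2342532/Code-Optimizer") -> str:
    """Load the base model, apply the LoRA adapter, and generate a reply.

    adapter_id is a hypothetical repo id, not confirmed by the diff.
    Imports are local so the sketch can be read without the heavy
    dependencies installed.
    """
    import torch
    from peft import PeftModel
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # 1. Load the base model and tokenizer
    tokenizer = AutoTokenizer.from_pretrained(base_id)
    model = AutoModelForCausalLM.from_pretrained(
        base_id, torch_dtype=torch.bfloat16, device_map="auto"
    )

    # 2. Apply the LoRA adapter on top of the frozen base weights
    model = PeftModel.from_pretrained(model, adapter_id)

    # 3. Render the chat prompt and tokenize it
    text = tokenizer.apply_chat_template(
        build_chat_prompt(prompt), tokenize=False, add_generation_prompt=True
    )
    model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

    # 4. Generate and decode
    generated_ids = model.generate(**model_inputs, max_new_tokens=1024)
    return tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]


# Example call (requires a GPU and network access to download the weights):
# print(generate_with_adapter("Optimize: sum(i * i for i in range(10**6))"))
```

The adapter call wraps, rather than merges, the base weights; `PeftModel.from_pretrained` keeps the base model intact, so several adapters can be swapped over one loaded base model.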