Yuhao committed
Commit c7549ce · 1 parent: c659d64

Enrich README with visuals

Files changed (5)
  1. .gitattributes +1 -0
  2. .gitignore +1 -0
  3. README.md +20 -9
  4. cuhksz-logo.png +3 -0
  5. figure.png +3 -0
.gitattributes CHANGED
@@ -19,6 +19,7 @@
 *.pb filter=lfs diff=lfs merge=lfs -text
 *.pickle filter=lfs diff=lfs merge=lfs -text
 *.pkl filter=lfs diff=lfs merge=lfs -text
+*.png filter=lfs diff=lfs merge=lfs -text
 *.pt filter=lfs diff=lfs merge=lfs -text
 *.pth filter=lfs diff=lfs merge=lfs -text
 *.rar filter=lfs diff=lfs merge=lfs -text
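
The added `*.png` rule tells Git LFS to store PNG files as small pointer files rather than raw blobs. As an illustrative sketch (not part of this repo), Python's `fnmatch` approximates how these glob patterns match filenames — note that real `.gitattributes` matching has extra path-component rules this sketch ignores:

```python
from fnmatch import fnmatch

# Subset of the LFS-tracked patterns from this .gitattributes diff
LFS_PATTERNS = ["*.pb", "*.pickle", "*.pkl", "*.png", "*.pt", "*.pth", "*.rar"]

def is_lfs_tracked(filename: str) -> bool:
    """Return True if the filename matches any LFS-tracked glob pattern."""
    return any(fnmatch(filename, pat) for pat in LFS_PATTERNS)

print(is_lfs_tracked("figure.png"))  # True: matched by the new *.png rule
print(is_lfs_tracked("README.md"))   # False: markdown stays in plain git
```

This is why the two PNGs added by this commit (`cuhksz-logo.png`, `figure.png`) appear below with "Git LFS Details" instead of inline content.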
.gitignore CHANGED
@@ -1,3 +1,4 @@
 __pycache__/
 *.pyc
 .ipynb_checkpoints/
+.sync/
README.md CHANGED
@@ -10,22 +10,26 @@ tags:
 
 # SkinGPT-R1
 
+![CUHKSZ Logo](cuhksz-logo.png)
+
 **Update:** We will soon release the **SkinGPT-R1-7B** weights.
 
-SkinGPT-R1 is a dermatological reasoning vision language model for research and education.
+![SkinGPT-R1 Figure](figure.png)
+
+SkinGPT-R1 is a dermatological reasoning vision language model for research and education. 🩺✨
 
 From **The Chinese University of Hong Kong, Shenzhen (CUHKSZ)**.
 
 ## Disclaimer
 
-This project is for **research and educational use only**. It is **not** a substitute for professional medical advice, diagnosis, or treatment.
+This project is for **research and educational use only**. It is **not** a substitute for professional medical advice, diagnosis, or treatment. ⚠️
 
 ## License
 
 This repository is released under **CC BY-NC-SA 4.0**.
 See [LICENSE](LICENSE) for details.
 
-## Structure
+## Overview
 
 ```text
 SkinGPT-R1/
@@ -42,6 +46,13 @@ Checkpoint paths:
 - Full precision: `./checkpoints/full_precision`
 - INT4 quantized: `./checkpoints/int4`
 
+## Highlights
+
+- 🔬 Dermatology-oriented multimodal reasoning
+- 🧠 Full-precision and INT4 inference paths
+- 💬 Multi-turn chat and API serving
+- ⚡ RTX 50 series friendly SDPA-backed INT4 runtime
+
 ## Install
 
 ```bash
@@ -57,10 +68,10 @@ This repo uses two attention acceleration paths:
 - `flash_attention_2`: external package, optional
 - `sdpa`: PyTorch native scaled dot product attention
 
-Recommended choice for this repo:
+Recommended choice:
 
-- RTX 50 series: use `sdpa`
-- A100 / RTX 3090 / RTX 4090 / H100 and other GPUs explicitly listed by the FlashAttention project: you can try `flash_attention_2`
+- 🚀 RTX 50 series: use `sdpa`
+- 🚀 A100 / RTX 3090 / RTX 4090 / H100 and other GPUs explicitly listed by the FlashAttention project: you can try `flash_attention_2`
 
 Practical notes:
 
@@ -124,7 +135,7 @@ The INT4 path uses:
 
 ## GPU Selection
 
-You do not need to add `CUDA_VISIBLE_DEVICES=0` if the machine has only one visible GPU or if you are fine with the default CUDA device.
+You do not need to add `CUDA_VISIBLE_DEVICES=0` if the machine has only one visible GPU or if you are fine with the default CUDA device. 🧩
 
 Use it only when you want to pin the process to a specific GPU, for example on a multi-GPU server:
 
@@ -152,5 +163,5 @@ Both API services expose the same endpoints:
 
 ## Which One To Use
 
-- Use `full_precision` when you want the original model path and best fidelity.
-- Use `int4_quantized` when GPU memory is tight or when you are on an environment where `flash-attn` is not the practical option.
+- 🎯 Use `full_precision` when you want the original model path and best fidelity.
+- Use `int4_quantized` when GPU memory is tight or when you are on an environment where `flash-attn` is not the practical option.
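
The README's attention-backend recommendation can be sketched as a small selection helper. This is purely illustrative — `pick_attn_implementation` is a hypothetical function, not part of the repo; the returned strings are the values one would typically pass to a loader option such as `attn_implementation=` when loading the model:

```python
# GPUs the README lists as candidates for FlashAttention-2 (illustrative set)
FLASH_ATTN_TESTED = {"A100", "H100", "RTX 3090", "RTX 4090"}

def pick_attn_implementation(gpu_name: str) -> str:
    """Map a GPU name to the attention backend this README suggests."""
    if gpu_name.startswith("RTX 50"):
        return "sdpa"  # PyTorch-native scaled dot product attention
    if gpu_name in FLASH_ATTN_TESTED:
        return "flash_attention_2"  # optional external package
    return "sdpa"  # safe default: always available in PyTorch

print(pick_attn_implementation("RTX 5090"))  # sdpa
print(pick_attn_implementation("A100"))     # flash_attention_2
```

Falling back to `sdpa` for unlisted GPUs matches the README's framing: `flash_attention_2` is optional, while `sdpa` ships with PyTorch.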
cuhksz-logo.png ADDED

Git LFS Details

  • SHA256: ad996831e1ecad366ed60d1bd1d8bff12336f9d4e86a7cdddd0efcb23df39198
  • Pointer size: 130 Bytes
  • Size of remote file: 33.3 kB
figure.png ADDED

Git LFS Details

  • SHA256: f1b3a69e81dd475cf15b93c704d4e6ca61b2032eba523e31f0b1a58cc718dfcb
  • Pointer size: 131 Bytes
  • Size of remote file: 374 kB
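
The "Pointer size" figures above refer to the small text file Git stores in place of each binary. A sketch of the Git LFS v1 pointer format shows where 130 bytes comes from; the exact byte count `33300` below is an assumed value consistent with the rounded "33.3 kB" reported above:

```python
def lfs_pointer(sha256: str, size_bytes: int) -> str:
    """Build a Git LFS v1 pointer file: three lines (version, oid, size)."""
    return (
        "version https://git-lfs.github.com/spec/v1\n"
        f"oid sha256:{sha256}\n"
        f"size {size_bytes}\n"
    )

# SHA256 from the cuhksz-logo.png entry; 33300 is an assumed exact size
# (the page only reports the rounded "33.3 kB").
ptr = lfs_pointer(
    "ad996831e1ecad366ed60d1bd1d8bff12336f9d4e86a7cdddd0efcb23df39198",
    33300,
)
print(len(ptr.encode()))  # 130: a 5-digit size yields a 130-byte pointer
```

A 6-digit size (such as the ~374 kB `figure.png`) adds one byte to the `size` line, which is why its pointer is 131 bytes.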