dikheng committed
Commit a1ea7b4 · Parent(s): 412a3c9

feat: multilingual support (ZH/ES/FR/JA) + remove all mock mode


- Add 4 new languages: Simplified Chinese, Spanish, French, Japanese
- Full native-language output: diagnosis, severity, actions, disclaimer
- Language selector upgraded from Radio to Dropdown (6 choices)
- Add EN·VI·ZH·ES·FR·JA badge in header
- Remove MOCK_MODE entirely — backend down = system offline (hard error)
- Strip all mock data pools, fallback logic, and MOCK_MODE env var
- Delete test_mock_pipeline.py
- Error card shown in UI when AMD Cloud is unreachable
- Status bar now shows "AMD Cloud · Offline" (red) instead of "Demo Mode"
- Update README: multilingual, remove mock mode docs, fix Gradio version
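The language dispatch these notes describe reduces to a dictionary lookup with an English fallback, as the app.py diff further down shows. A minimal sketch (the helper name `resolve_lang` is illustrative, not from the commit):

```python
# UI label -> language code, mirroring the _LANG_MAP introduced in app.py.
# Unknown labels fall back to English rather than raising.
_LANG_MAP = {
    "English": "en", "Tiếng Việt": "vn", "中文": "zh",
    "Español": "es", "Français": "fr", "日本語": "ja",
}

def resolve_lang(label: str) -> str:
    return _LANG_MAP.get(label, "en")

print(resolve_lang("日本語"))   # -> ja
print(resolve_lang("Deutsch"))  # -> en (unsupported label falls back)
```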

Files changed (7)
  1. README.md +5 -12
  2. app.py +144 -51
  3. src/agent.py +17 -229
  4. src/config.py +0 -3
  5. src/inference.py +1 -13
  6. src/model_loader.py +6 -17
  7. test_mock_pipeline.py +0 -20
README.md CHANGED
@@ -18,7 +18,7 @@ license: mit
 [![License: MIT](https://img.shields.io/badge/License-MIT-blue.svg)](LICENSE)
 [![Powered by AMD](https://img.shields.io/badge/Powered%20by-AMD%20MI300X%20%2B%20ROCm-ED1C24)](https://www.amd.com/en/products/accelerators/instinct/mi300.html)
 
-MediVision is a bilingual (English / Vietnamese) multimodal AI assistant that analyzes skin wound and disease images combined with patient symptom descriptions. Inference is served by a **vLLM server running on AMD Developer Cloud** (AMD Instinct™ MI300X + ROCm), and the lightweight Gradio frontend on Hugging Face Spaces simply calls that API — no model weights are loaded in the Space itself.
+MediVision is a **multilingual** multimodal AI assistant that analyzes skin wound and disease images combined with patient symptom descriptions. It supports **English, Tiếng Việt, 中文, Español, Français, and 日本語** — every output (diagnosis, severity, actions, disclaimer) is delivered natively in the selected language. Inference is served by a **vLLM server running on AMD Developer Cloud** (AMD Instinct™ MI300X + ROCm), and the lightweight Gradio frontend on Hugging Face Spaces simply calls that API — no model weights are loaded in the Space itself.
 
 ---
 
@@ -52,14 +52,13 @@ on the AMD GPU server.
 ## Features
 
 - **Multimodal Analysis** — Combines skin image + freeform symptom text for richer diagnosis suggestions.
-- **Bilingual** — Full English and Vietnamese (Tiếng Việt) support.
+- **Multilingual** — Full support for 6 languages: English, Tiếng Việt, 中文 (Chinese), Español, Français, and 日本語 (Japanese). All output — diagnosis, severity, recommendations, and disclaimer — is rendered natively in the selected language.
 - **Structured Output** — Every analysis returns:
   - Diagnosis suggestion
   - Severity badge: `Low` · `Medium` · `High` · `Urgent`
   - Actionable recommended steps (clinical-grade language)
   - Confidence score with visual progress bar
 - **AMD MI300X Powered** — Inference via vLLM on AMD Instinct™ MI300X + ROCm.
-- **Graceful Mock Mode** — Falls back to realistic mock responses if the vLLM server is unreachable, so the demo always runs.
 - **HF Space Ready** — Minimal dependencies; no GPU required in the Space.
 
 ---
@@ -86,8 +85,8 @@ on the AMD GPU server.
 | Inference Runtime | **vLLM** (OpenAI-compatible API) |
 | Inference Hardware | **AMD Instinct™ MI300X** (192 GB HBM3) + ROCm |
 | Inference Host | AMD Developer Cloud |
-| Frontend | Gradio 4 (Hugging Face Space — CPU) |
-| Language Support | English / Tiếng Việt |
+| Frontend | Gradio 5.29 (Hugging Face Space — CPU) |
+| Language Support | English · Tiếng Việt · 中文 · Español · Français · 日本語 |
 
 ---
 
@@ -155,12 +154,6 @@ python app.py
 
 The app is available at `http://localhost:7860`.
 
-### 5. Force mock mode (no vLLM server required)
-
-```bash
-MOCK_MODE=true python app.py
-```
-
 ---
 
 ## Starting the vLLM Server (AMD Developer Cloud)
@@ -188,7 +181,7 @@ The server exposes an OpenAI-compatible API at `http://<VM_IP>:8000/v1`.
 | `VLLM_API_URL` | `http://localhost:8000` | Base URL of the vLLM server |
 | `MODEL_NAME` | `Qwen/Qwen2.5-VL-7B-Instruct` | Model ID served by vLLM |
 | `VLLM_API_KEY` | `not-required` | API key (if vLLM auth is enabled) |
-| `MOCK_MODE` | `false` | Force mock mode (skip vLLM calls) |
+
 | `MAX_NEW_TOKENS` | `512` | Max tokens to generate |
 | `TEMPERATURE` | `0.2` | Sampling temperature |
app.py CHANGED
@@ -1,7 +1,6 @@
 import gradio as gr
 from src.inference import MediVisionPipeline
 from src.model_loader import check_connection
-import src.config as config
 
 # ---------------------------------------------------------------------------
 # Pipeline singleton
@@ -25,7 +24,7 @@ def get_backend_status_html() -> str:
     if connected:
         dot, label, color = "#22c55e", "AMD Cloud · Live", "#86efac"
     else:
-        dot, label, color = "#f97316", "Demo Mode", "#fdba74"
+        dot, label, color = "#ef4444", "AMD Cloud · Offline", "#fca5a5"
     return (
         f"<div style='text-align:center; margin-bottom:4px;'>"
         f"<span style='font-size:0.75rem; color:{color}; font-family:monospace;'>"
@@ -39,14 +38,36 @@ def get_backend_status_html() -> str:
 # ---------------------------------------------------------------------------
 
 _SEVERITY_COLOR = {
-    "Low": ("#22c55e", "#dcfce7"),
-    "Medium": ("#eab308", "#fef9c3"),
-    "High": ("#f97316", "#ffedd5"),
-    "Urgent": ("#ef4444", "#fee2e2"),
-    "Thấp": ("#22c55e", "#dcfce7"),
+    # English
+    "Low": ("#22c55e", "#dcfce7"),
+    "Medium": ("#eab308", "#fef9c3"),
+    "High": ("#f97316", "#ffedd5"),
+    "Urgent": ("#ef4444", "#fee2e2"),
+    # Vietnamese
+    "Thấp": ("#22c55e", "#dcfce7"),
     "Trung bình": ("#eab308", "#fef9c3"),
-    "Cao": ("#f97316", "#ffedd5"),
-    "Khẩn cấp": ("#ef4444", "#fee2e2"),
+    "Cao": ("#f97316", "#ffedd5"),
+    "Khẩn cấp": ("#ef4444", "#fee2e2"),
+    # Chinese
+    "低": ("#22c55e", "#dcfce7"),
+    "中": ("#eab308", "#fef9c3"),
+    "高": ("#f97316", "#ffedd5"),
+    "紧急": ("#ef4444", "#fee2e2"),
+    # Spanish
+    "Baja": ("#22c55e", "#dcfce7"),
+    "Media": ("#eab308", "#fef9c3"),
+    "Alta": ("#f97316", "#ffedd5"),
+    "Urgente": ("#ef4444", "#fee2e2"),
+    # French
+    "Faible": ("#22c55e", "#dcfce7"),
+    "Modérée": ("#eab308", "#fef9c3"),
+    "Élevée": ("#f97316", "#ffedd5"),
+    "Urgente": ("#ef4444", "#fee2e2"),
+    # Japanese
+    "軽度": ("#22c55e", "#dcfce7"),
+    "中等度": ("#eab308", "#fef9c3"),
+    "重度": ("#f97316", "#ffedd5"),
+    "緊急": ("#ef4444", "#fee2e2"),
 }
 
 
@@ -81,42 +102,23 @@ def _build_result_html(result: dict, lang: str) -> str:
     actions = result.get("recommended_actions", [])
     score = result.get("confidence_score", 0)
 
-    if lang == "vn":
-        diag_label = "Gợi ý chẩn đoán"
-        severity_label = "Mức độ nghiêm trọng"
-        actions_label = "Khuyến nghị"
-        disclaimer = (
-            "Đây là trợ lý AI, không phải bác sĩ. "
-            "Hãy tham khảo chuyên gia y tế cho các tình trạng nghiêm trọng."
-        )
-    else:
-        diag_label = "Diagnosis Suggestion"
-        severity_label = "Severity"
-        actions_label = "Recommended Actions"
-        disclaimer = (
-            "This is an AI assistant, not a licensed physician. "
-            "Always consult a healthcare professional for serious conditions."
-        )
+    t = _I18N.get(lang, _I18N["en"])
+    diag_label = t["diag_label"]
+    severity_label = t["severity_label"]
+    actions_label = t["actions_label"]
+    disclaimer = t["disclaimer"]
 
     actions_html = "".join(
         f"<li style='margin:5px 0; color:#d1d5db;'>{a}</li>"
         for a in actions
     ) if actions else "<li style='color:#6b7280;'>—</li>"
 
-    if config.MOCK_MODE:
-        backend_tag = (
-            "<span style='font-size:0.7rem; background:#431407; color:#fdba74; "
-            "padding:2px 8px; border-radius:4px; margin-left:8px; "
-            "border:1px solid #c2410c;'>Demo Mode</span>"
-        )
-        backend_info = "Demo Mode · Representative Response"
-    else:
-        backend_tag = (
-            "<span style='font-size:0.7rem; background:#052e16; color:#86efac; "
-            "padding:2px 8px; border-radius:4px; margin-left:8px; "
-            "border:1px solid #16a34a;'>AMD Cloud</span>"
-        )
-        backend_info = "AMD MI300X · ROCm · Qwen2.5-VL-7B"
+    backend_tag = (
+        "<span style='font-size:0.7rem; background:#052e16; color:#86efac; "
+        "padding:2px 8px; border-radius:4px; margin-left:8px; "
+        "border:1px solid #16a34a;'>AMD Cloud</span>"
+    )
+    backend_info = "AMD MI300X · ROCm · Qwen2.5-VL-7B"
 
     return f"""
     <div style='background:#111827; border:1px solid #ED1C24; border-radius:12px;
@@ -175,22 +177,106 @@ def _build_result_html(result: dict, lang: str) -> str:
 # Main prediction function
 # ---------------------------------------------------------------------------
 
+_LANG_MAP = {
+    "English": "en",
+    "Tiếng Việt": "vn",
+    "中文": "zh",
+    "Español": "es",
+    "Français": "fr",
+    "日本語": "ja",
+}
+
+_I18N = {
+    "en": {
+        "diag_label": "Diagnosis Suggestion",
+        "severity_label": "Severity",
+        "actions_label": "Recommended Actions",
+        "disclaimer": (
+            "This is an AI assistant, not a licensed physician. "
+            "Always consult a healthcare professional for serious conditions."
+        ),
+        "placeholder": "Please upload an image or enter symptoms.",
+    },
+    "vn": {
+        "diag_label": "Gợi ý chẩn đoán",
+        "severity_label": "Mức độ nghiêm trọng",
+        "actions_label": "Khuyến nghị",
+        "disclaimer": (
+            "Đây là trợ lý AI, không phải bác sĩ. "
+            "Hãy tham khảo chuyên gia y tế cho các tình trạng nghiêm trọng."
+        ),
+        "placeholder": "Vui lòng tải lên hình ảnh hoặc nhập triệu chứng.",
+    },
+    "zh": {
+        "diag_label": "诊断建议",
+        "severity_label": "严重程度",
+        "actions_label": "推荐措施",
+        "disclaimer": (
+            "本工具为AI助手,不能替代执业医师。"
+            "如有严重病情,请务必咨询专业医疗人员。"
+        ),
+        "placeholder": "请上传图片或输入症状描述。",
+    },
+    "es": {
+        "diag_label": "Sugerencia de diagnóstico",
+        "severity_label": "Severidad",
+        "actions_label": "Acciones recomendadas",
+        "disclaimer": (
+            "Este es un asistente de IA, no un médico autorizado. "
+            "Consulte siempre a un profesional de la salud para condiciones graves."
+        ),
+        "placeholder": "Por favor, suba una imagen o describa sus síntomas.",
+    },
+    "fr": {
+        "diag_label": "Suggestion de diagnostic",
+        "severity_label": "Sévérité",
+        "actions_label": "Actions recommandées",
+        "disclaimer": (
+            "Ceci est un assistant IA, pas un médecin agréé. "
+            "Consultez toujours un professionnel de santé pour les situations graves."
+        ),
+        "placeholder": "Veuillez télécharger une image ou décrire vos symptômes.",
+    },
+    "ja": {
+        "diag_label": "診断提案",
+        "severity_label": "重症度",
+        "actions_label": "推奨アクション",
+        "disclaimer": (
+            "これはAIアシスタントであり、有資格の医師ではありません。"
+            "深刻な症状については必ず医療専門家に相談してください。"
+        ),
+        "placeholder": "画像をアップロードするか、症状を入力してください。",
+    },
+}
+
+
 def predict(image, symptoms: str, lang_choice: str):
-    lang = "vn" if lang_choice == "Tiếng Việt" else "en"
+    lang = _LANG_MAP.get(lang_choice, "en")
 
     if not image and not symptoms.strip():
-        placeholder = (
-            "Please upload an image or enter symptoms."
-            if lang == "en"
-            else "Vui lòng tải lên hình ảnh hoặc nhập triệu chứng."
-        )
+        placeholder = _I18N.get(lang, _I18N["en"])["placeholder"]
         return (
            f"<p style='color:#9ca3af; text-align:center;'>{placeholder}</p>",
            get_backend_status_html(),
        )
 
-    result = get_pipeline().process(image, symptoms.strip(), lang=lang)
-    return _build_result_html(result, lang), get_backend_status_html()
+    try:
+        result = get_pipeline().process(image, symptoms.strip(), lang=lang)
+        return _build_result_html(result, lang), get_backend_status_html()
+    except Exception as exc:
+        error_html = (
+            "<div style='background:#111827; border:1px solid #ef4444; border-radius:12px;"
+            "padding:24px; font-family:Arial,sans-serif; text-align:center;'>"
+            "<div style='font-size:1.5rem; margin-bottom:12px;'>⚠️</div>"
+            "<div style='font-size:1rem; font-weight:700; color:#ef4444; margin-bottom:8px;'>"
+            "Backend Unavailable / Hệ thống không khả dụng</div>"
+            "<div style='font-size:0.85rem; color:#9ca3af; margin-bottom:16px;'>"
+            "AMD Cloud backend is unreachable. Please try again later.</div>"
+            f"<div style='font-size:0.75rem; color:#6b7280; font-family:monospace;"
+            f"background:#1f2937; padding:8px 12px; border-radius:6px;'>{exc}</div>"
+            "</div>"
+        )
        return error_html, get_backend_status_html()
 
 
 # ---------------------------------------------------------------------------
@@ -257,7 +343,7 @@ HEADER_HTML = """
         <span style='color:#ED1C24;'>Medi</span><span style='color:#f9fafb;'>Vision</span>
     </div>
     <div style='color:#9ca3af; font-size:0.9rem; margin-top:4px;'>
-        Multimodal Medical Imaging AI Agent
+        Multilingual Multimodal Medical Imaging AI Agent
     </div>
     <div style='margin-top:10px; display:inline-flex; gap:8px; flex-wrap:wrap;
         justify-content:center;'>
@@ -269,6 +355,10 @@ HEADER_HTML = """
             padding:3px 12px; border-radius:999px; border:1px solid #374151;'>
             ROCm · Qwen2.5-VL-7B
         </span>
+        <span style='background:#1f2937; color:#9ca3af; font-size:0.72rem; font-weight:600;
+            padding:3px 12px; border-radius:999px; border:1px solid #374151;'>
+            EN · VI · ZH · ES · FR · JA
+        </span>
         <span style='background:#1f2937; color:#9ca3af; font-size:0.72rem; font-weight:600;
             padding:3px 12px; border-radius:999px; border:1px solid #374151;'>
             AMD Developer Hackathon 2026
@@ -312,8 +402,8 @@ with gr.Blocks(css=CSS, theme=gr.themes.Base(), title="MediVision — AMD MI300X
         ),
         lines=4,
     )
-    lang_radio = gr.Radio(
-        choices=["English", "Tiếng Việt"],
+    lang_radio = gr.Dropdown(
+        choices=["English", "Tiếng Việt", "中文", "Español", "Français", "日本語"],
         value="English",
         label="Language / Ngôn ngữ",
     )
@@ -328,7 +418,10 @@ with gr.Blocks(css=CSS, theme=gr.themes.Base(), title="MediVision — AMD MI300X
         [None, "I have a red, itchy rash on my forearm for 3 days. It burns slightly.", "English"],
         [None, "Small wound on my hand after a cut, slightly swollen with some redness.", "English"],
         [None, "Vết thương nhỏ ở bàn tay, hơi sưng và có dấu hiệu đỏ xung quanh.", "Tiếng Việt"],
-        [None, "Phát ban đỏ trên cánh tay, ngứa nhiều, đã 2 ngày.", "Tiếng Việt"],
+        [None, "手臂上出现红色瘙痒皮疹,已持续3天,略有灼热感。", "中文"],
+        [None, "Tengo una erupción roja y con picazón en el antebrazo desde hace 3 días.", "Español"],
+        [None, "J'ai une éruption rouge et prurigineuse sur l'avant-bras depuis 3 jours.", "Français"],
+        [None, "3日前から前腕に赤くてかゆい発疹があり、少し灼熱感があります。", "日本語"],
     ],
     inputs=[input_img, symptoms_txt, lang_radio],
     label="Quick Examples / Ví dụ nhanh",
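The `_SEVERITY_COLOR` table in the diff above is keyed by localized severity strings and indexed directly. A defensive lookup with a neutral fallback can be sketched as follows (the `severity_colors` helper and the gray fallback pair are illustrative assumptions, not part of the commit):

```python
# Abridged severity -> (badge color, background color) table; the commit
# defines all four levels for each of six languages.
_SEVERITY_COLOR = {
    "Low": ("#22c55e", "#dcfce7"), "低": ("#22c55e", "#dcfce7"),
    "Urgent": ("#ef4444", "#fee2e2"), "緊急": ("#ef4444", "#fee2e2"),
}

def severity_colors(severity: str) -> tuple[str, str]:
    # Fall back to a neutral gray pair for unrecognized labels (assumption —
    # the committed code indexes the dict directly).
    return _SEVERITY_COLOR.get(severity, ("#6b7280", "#e5e7eb"))

print(severity_colors("緊急"))  # -> ('#ef4444', '#fee2e2')
```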
src/agent.py CHANGED
@@ -1,207 +1,21 @@
1
- import random
2
  import re
3
- import src.config as config
4
  from src.model_loader import generate_response
5
 
6
- # ---------------------------------------------------------------------------
7
- # Mock data pools — realistic enough for a hackathon demo
8
- # ---------------------------------------------------------------------------
9
 
10
- _MOCK_EN = [
11
- {
12
- "diagnosis": "Contact Dermatitis",
13
- "severity": "Low",
14
- "confidence_score": 82,
15
- "recommended_actions": [
16
- "Avoid contact with suspected allergen or irritant.",
17
- "Apply a mild hydrocortisone cream (1%) to the affected area twice daily.",
18
- "Use fragrance-free moisturiser after bathing to restore the skin barrier.",
19
- "Take an over-the-counter antihistamine (e.g., cetirizine) if itching is severe.",
20
- "Monitor for spreading redness or signs of secondary infection; see a doctor if worsening.",
21
- ],
22
- },
23
- {
24
- "diagnosis": "Mild Abrasion / Superficial Wound",
25
- "severity": "Low",
26
- "confidence_score": 88,
27
- "recommended_actions": [
28
- "Gently clean the wound with sterile saline or clean running water for 5–10 minutes.",
29
- "Apply a thin layer of antibiotic ointment (e.g., bacitracin) to prevent infection.",
30
- "Cover with a non-stick sterile dressing; change daily or when soiled.",
31
- "Watch for signs of infection: increased redness, warmth, purulent discharge, or fever.",
32
- "Tetanus prophylaxis: verify vaccination is up to date (within 5 years for dirty wounds).",
33
- ],
34
- },
35
- {
36
- "diagnosis": "Atopic Eczema (Eczema Flare)",
37
- "severity": "Medium",
38
- "confidence_score": 75,
39
- "recommended_actions": [
40
- "Apply a prescription-strength topical corticosteroid (e.g., triamcinolone 0.1%) to inflamed areas.",
41
- "Use an emollient at least twice daily on the entire body, not just affected skin.",
42
- "Identify and eliminate triggers: dust mites, pet dander, harsh soaps, or stress.",
43
- "Wear loose-fitting, breathable cotton clothing to minimise irritation.",
44
- "Consult a dermatologist if the flare does not improve within 1–2 weeks.",
45
- ],
46
- },
47
- {
48
- "diagnosis": "Partial-Thickness Burn (Second-Degree)",
49
- "severity": "High",
50
- "confidence_score": 79,
51
- "recommended_actions": [
52
- "Cool the burn immediately under cool (not ice-cold) running water for at least 20 minutes.",
53
- "Do NOT apply butter, toothpaste, or home remedies — these increase infection risk.",
54
- "Cover loosely with a clean non-fluffy material (e.g., cling film or sterile dressing).",
55
- "Seek prompt medical evaluation; burns larger than a palm or on the face/hands require ER care.",
56
- "Pain management: ibuprofen or paracetamol as directed.",
57
- ],
58
- },
59
- {
60
- "diagnosis": "Suspected Cellulitis",
61
- "severity": "Urgent",
62
- "confidence_score": 71,
63
- "recommended_actions": [
64
- "Seek immediate medical attention — cellulitis can spread rapidly and become systemic.",
65
- "Do not massage or heat the affected area.",
66
- "Elevate the limb above heart level to reduce oedema.",
67
- "A course of oral antibiotics (e.g., cephalexin or amoxicillin-clavulanate) is typically required.",
68
- "Return to ER if red streaking, fever > 38.5 °C, or rapid area expansion occurs.",
69
- ],
70
- },
71
- {
72
- "diagnosis": "Tinea Corporis (Ringworm)",
73
- "severity": "Low",
74
- "confidence_score": 85,
75
- "recommended_actions": [
76
- "Apply an antifungal cream (e.g., clotrimazole 1% or terbinafine) twice daily for 2–4 weeks.",
77
- "Keep the area clean and dry; fungi thrive in moist environments.",
78
- "Avoid sharing towels, clothing, or bedding during active infection.",
79
- "Wash clothing and bed linen at 60 °C to eliminate spores.",
80
- "See a GP if there is no improvement after 4 weeks — oral antifungal may be needed.",
81
- ],
82
- },
83
- {
84
- "diagnosis": "Psoriasis Plaque",
85
- "severity": "Medium",
86
- "confidence_score": 73,
87
- "recommended_actions": [
88
- "Moisturise heavily with thick emollients (e.g., petroleum jelly) after bathing.",
89
- "A topical vitamin D analogue (e.g., calcipotriol) or mild steroid may be prescribed.",
90
- "Avoid known triggers: stress, alcohol, smoking, and certain medications (e.g., beta-blockers).",
91
- "Phototherapy (UVB) is effective for extensive plaques — discuss with a dermatologist.",
92
- "Biologic therapy may be appropriate for moderate-to-severe disease unresponsive to topicals.",
93
- ],
94
- },
95
- ]
96
-
97
- _MOCK_VN = [
98
- {
99
- "diagnosis": "Viêm da tiếp xúc",
100
- "severity": "Thấp",
101
- "confidence_score": 82,
102
- "recommended_actions": [
103
- "Tránh tiếp xúc với chất gây dị ứng hoặc kích ứng nghi ngờ.",
104
- "Bôi kem hydrocortisone nhẹ (1%) lên vùng bị ảnh hưởng hai lần mỗi ngày.",
105
- "Dùng kem dưỡng ẩm không hương liệu sau khi tắm để phục hồi hàng rào da.",
106
- "Uống thuốc kháng histamine (ví dụ: cetirizine) nếu ngứa nghiêm trọng.",
107
- "Theo dõi xem có lan rộng đỏ hoặc dấu hiệu nhiễm trùng thứ phát không; gặp bác sĩ nếu nặng hơn.",
108
- ],
109
- },
110
- {
111
- "diagnosis": "Trầy xước nhẹ / Vết thương nông",
112
- "severity": "Thấp",
113
- "confidence_score": 88,
114
- "recommended_actions": [
115
- "Nhẹ nhàng làm sạch vết thương bằng nước muối sinh lý hoặc nước sạch trong 5–10 phút.",
116
- "Bôi một lớp mỏng thuốc mỡ kháng sinh (ví dụ: bacitracin) để ngăn ngừa nhiễm trùng.",
117
- "Băng bó bằng gạc vô khuẩn không dính; thay hàng ngày hoặc khi bẩn.",
118
- "Theo dõi dấu hiệu nhiễm trùng: đỏ tăng, nóng, mủ chảy ra hoặc sốt.",
119
- "Phòng uốn ván: kiểm tra lịch tiêm chủng có còn hiệu lực (trong vòng 5 năm với vết thương bẩn).",
120
- ],
121
- },
122
- {
123
- "diagnosis": "Chàm dị ứng (Đợt bùng phát)",
124
- "severity": "Trung bình",
125
- "confidence_score": 75,
126
- "recommended_actions": [
127
- "Bôi corticosteroid tại chỗ theo toa (ví dụ: triamcinolone 0.1%) lên vùng viêm.",
128
- "Dùng thuốc dưỡng ẩm ít nhất hai lần mỗi ngày trên toàn cơ thể, không chỉ vùng bị ảnh hưởng.",
129
- "Xác định và loại bỏ các tác nhân kích thích: bụi, lông thú cưng, xà phòng mạnh hoặc căng thẳng.",
130
- "Mặc quần áo cotton rộng rãi, thoáng khí để giảm kích ứng.",
131
- "Tham khảo bác sĩ da liễu nếu đợt bùng phát không cải thiện trong vòng 1–2 tuần.",
132
- ],
133
- },
134
- {
135
- "diagnosis": "Bỏng độ hai (Bỏng lớp bì)",
136
- "severity": "Cao",
137
- "confidence_score": 79,
138
- "recommended_actions": [
139
- "Làm mát vết bỏng ngay bằng nước mát (không phải nước đá) trong ít nhất 20 phút.",
140
- "KHÔNG bôi bơ, kem đánh răng hoặc các biện pháp dân gian — sẽ tăng nguy cơ nhiễm trùng.",
141
- "Che phủ nhẹ nhàng bằng vải sạch không xơ (ví dụ: màng bọc thực phẩm hoặc gạc vô khuẩn).",
142
- "Tìm kiếm đánh giá y tế ngay; bỏng lớn hơn lòng bàn tay hoặc ở mặt/tay cần đến cấp cứu.",
143
- "Giảm đau: ibuprofen hoặc paracetamol theo chỉ định.",
144
- ],
145
- },
146
- {
147
- "diagnosis": "Nghi ngờ viêm mô tế bào (Cellulitis)",
148
- "severity": "Khẩn cấp",
149
- "confidence_score": 71,
150
- "recommended_actions": [
151
- "Tìm kiếm sự chú ý y tế ngay lập tức — viêm mô tế bào có thể lan rộng nhanh chóng.",
152
- "Không xoa bóp hoặc chườm nóng vùng bị ảnh hưởng.",
153
- "Nâng cao chi trên mức tim để giảm phù nề.",
154
- "Thường cần một đợt kháng sinh uống (ví dụ: cephalexin hoặc amoxicillin-clavulanate).",
155
- "Quay lại cấp cứu nếu có vết đỏ lan rộng, sốt > 38.5 °C hoặc vùng bị ảnh hưởng mở rộng nhanh.",
156
- ],
157
- },
158
- {
159
- "diagnosis": "Nấm da (Ringworm / Tinea corporis)",
160
- "severity": "Thấp",
161
- "confidence_score": 85,
162
- "recommended_actions": [
163
- "Bôi kem chống nấm (ví dụ: clotrimazole 1% hoặc terbinafine) hai lần mỗi ngày trong 2–4 tuần.",
164
- "Giữ vùng da sạch và khô; nấm phát triển mạnh trong môi trường ẩm ướt.",
165
- "Tránh dùng chung khăn, quần áo hoặc chăn ga trong thời gian nhiễm trùng.",
166
- "Giặt quần áo và ga trải giường ở nhiệt độ 60 °C để tiêu diệt bào tử.",
167
- "Gặp bác sĩ nếu không cải thiện sau 4 tuần — có thể cần thuốc chống nấm uống.",
168
- ],
169
- },
170
- {
171
- "diagnosis": "Mảng vảy nến (Psoriasis Plaque)",
172
- "severity": "Trung bình",
173
- "confidence_score": 73,
174
- "recommended_actions": [
175
- "Dưỡng ẩm nhiều bằng các chất nhũ hóa dày (ví dụ: vaseline) sau khi tắm.",
176
- "Có thể kê đơn thuốc tương tự vitamin D tại chỗ (ví dụ: calcipotriol) hoặc steroid nhẹ.",
177
- "Tránh các yếu tố kích thích đã biết: căng thẳng, rượu, hút thuốc và một số thuốc.",
178
- "Quang trị liệu (UVB) có hiệu quả với các mảng rộng — thảo luận với bác sĩ da liễu.",
179
- "Liệu pháp sinh học có thể phù hợp với bệnh từ vừa đến nặng không đáp ứng với thuốc tại chỗ.",
180
- ],
181
- },
182
- ]
183
-
184
- _SEVERITY_ORDER = {"Low": 0, "Thấp": 0, "Medium": 1, "Trung bình": 1,
185
- "High": 2, "Cao": 2, "Urgent": 3, "Khẩn cấp": 3}
186
-
187
-
188
- def _add_variance(base_score: int) -> int:
189
- """Vary the confidence score ±5 points within [65, 95]."""
190
- return max(65, min(95, base_score + random.randint(-5, 5)))
191
-
192
-
193
- def _mock_response(lang: str) -> dict:
194
- pool = _MOCK_VN if lang == "vn" else _MOCK_EN
195
- entry = random.choice(pool).copy()
196
- entry["confidence_score"] = _add_variance(entry["confidence_score"])
197
- return entry
198
 
199
 
200
  def _build_prompt(image_path: str | None, text_description: str, lang: str) -> str:
201
- lang_instruction = (
202
- "Respond in Vietnamese." if lang == "vn"
203
- else "Respond in English."
204
- )
205
  has_image = bool(image_path)
206
  return (
207
  "You are MediVision, a professional dermatology and wound-care assistant.\n"
@@ -216,14 +30,7 @@ def _build_prompt(image_path: str | None, text_description: str, lang: str) -> s
216
  )
217
 
218
 
219
- def _parse_real_response(raw: str, lang: str) -> dict:
220
- """
221
- Try to extract JSON from the model's raw output.
222
- Falls back to a structured mock on parse failure.
223
- """
224
- import json
225
-
226
- # Find the first JSON object in the output
227
  match = re.search(r'\{.*\}', raw, re.DOTALL)
228
  if match:
229
  try:
@@ -236,11 +43,7 @@ def _parse_real_response(raw: str, lang: str) -> dict:
236
  }
237
  except (json.JSONDecodeError, ValueError):
238
  pass
239
-
240
- # Couldn't parse — use mock with a note
241
- result = _mock_response(lang)
242
- result["diagnosis"] = f"[Parse error — showing representative result] {result['diagnosis']}"
243
- return result
244
 
245
 
246
  def analyze_image_and_text(
@@ -249,26 +52,11 @@ def analyze_image_and_text(
249
  language: str = "en",
250
  ) -> dict:
251
  """
252
- Main analysis entry point.
253
-
254
- Returns a dict:
255
- {
256
- "diagnosis": str,
257
- "severity": str, # Low | Medium | High | Urgent
258
- "recommended_actions": list[str],
259
- "confidence_score": int, # 0-100
260
- }
261
  """
262
  lang = language.lower()
263
-
264
- if config.MOCK_MODE:
265
- return _mock_response(lang)
266
-
267
  prompt = _build_prompt(image_path, text_description, lang)
268
  raw = generate_response(prompt, image_path=image_path)
269
-
270
- if raw is None:
271
- # model_loader set MOCK_MODE=True due to load failure
272
- return _mock_response(lang)
273
-
274
- return _parse_real_response(raw, lang)
 
1
+ import json
2
  import re
 
3
  from src.model_loader import generate_response
4
 
5
+ # Supported language codes: en, vn, zh, es, fr, ja
 
 
6
 
7
+ _LANG_INSTRUCTIONS = {
8
+ "en": "Respond in English.",
9
+ "vn": "Respond in Vietnamese (Tiếng Việt).",
10
+ "zh": "Respond in Simplified Chinese (简体中文).",
11
+ "es": "Respond in Spanish (Español).",
12
+ "fr": "Respond in French (Français).",
13
+ "ja": "Respond in Japanese (日本語).",
14
+ }
 def _build_prompt(image_path: str | None, text_description: str, lang: str) -> str:
+    lang_instruction = _LANG_INSTRUCTIONS.get(lang, _LANG_INSTRUCTIONS["en"])
     has_image = bool(image_path)
     return (
         "You are MediVision, a professional dermatology and wound-care assistant.\n"
         ...
     )
 
 
+def _parse_response(raw: str) -> dict:
     match = re.search(r'\{.*\}', raw, re.DOTALL)
     if match:
         try:
             ...
             }
         except (json.JSONDecodeError, ValueError):
             pass
+    raise ValueError(f"Could not parse model response as JSON: {raw[:200]}")
 
 
 def analyze_image_and_text(
     ...
     language: str = "en",
 ) -> dict:
     """
+    Run analysis via AMD Cloud backend.
+    Raises RuntimeError if the backend is unreachable.
+    Raises ValueError if the model response cannot be parsed.
     """
     lang = language.lower()
     prompt = _build_prompt(image_path, text_description, lang)
     raw = generate_response(prompt, image_path=image_path)
+    return _parse_response(raw)
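`_parse_response` keeps the file's existing strategy: grab the first `{...}` span with a greedy DOTALL regex, `json.loads` it, and now raise instead of falling back. A minimal standalone sketch of that extraction (the sample reply text is illustrative):

```python
import json
import re


def extract_json_block(raw: str) -> dict:
    """Pull the first {...} span out of free-form model text and parse it.

    Raises ValueError when nothing parseable is found, mirroring the
    fail-hard behaviour this commit introduces in _parse_response.
    """
    match = re.search(r"\{.*\}", raw, re.DOTALL)
    if match:
        try:
            return json.loads(match.group(0))
        except json.JSONDecodeError:
            pass
    raise ValueError(f"Could not parse model response as JSON: {raw[:200]}")


reply = 'Here is my assessment:\n{"diagnosis": "contact dermatitis", "severity": "Low"}'
print(extract_json_block(reply)["severity"])  # → Low
```

Note the greedy `.*` spans from the first `{` to the last `}`, which is fine for single-object replies but would swallow everything between two separate JSON objects in one response.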
 
 
 
 
 
src/config.py CHANGED
@@ -5,9 +5,6 @@ VLLM_API_URL = os.environ.get("VLLM_API_URL", "http://localhost:8000")
 
 MODEL_NAME = os.environ.get("MODEL_NAME", "Qwen/Qwen2.5-VL-7B-Instruct")
 
-# Set to True to force mock mode (skip vLLM calls entirely)
-MOCK_MODE = os.environ.get("MOCK_MODE", "false").lower() == "true"
-
 # Generation settings
 MAX_NEW_TOKENS = int(os.environ.get("MAX_NEW_TOKENS", "512"))
 TEMPERATURE = float(os.environ.get("TEMPERATURE", "0.2"))
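Every setting left in `src/config.py` follows the same read-with-default-then-cast pattern. A small sketch of that pattern as a helper (the `int_env` name is an illustration, not something the file defines):

```python
import os


def int_env(name: str, default: str) -> int:
    """Read an integer setting from the environment, falling back to a default."""
    return int(os.environ.get(name, default))


os.environ.pop("MAX_NEW_TOKENS", None)   # ensure the default path for this demo
print(int_env("MAX_NEW_TOKENS", "512"))  # → 512

os.environ["MAX_NEW_TOKENS"] = "256"
print(int_env("MAX_NEW_TOKENS", "512"))  # → 256
```

One caveat of the pattern: a malformed value (e.g. `MAX_NEW_TOKENS=abc`) raises `ValueError` at import time, which fits this commit's preference for failing loudly over silent fallbacks.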
src/inference.py CHANGED
@@ -5,23 +5,11 @@ class MediVisionPipeline:
     def process(self, image_path, symptoms: str, lang: str = "en") -> dict:
         """
         Run the full analysis pipeline.
+        Raises RuntimeError if the AMD Cloud backend is unreachable.
 
         Returns:
             dict with keys: diagnosis, severity, recommended_actions, confidence_score
         """
-        if not image_path and not symptoms.strip():
-            placeholder = (
-                "Please upload an image or describe your symptoms."
-                if lang == "en"
-                else "Vui lòng tải lên hình ảnh hoặc mô tả triệu chứng của bạn."
-            )
-            return {
-                "diagnosis": placeholder,
-                "severity": "Low",
-                "recommended_actions": [],
-                "confidence_score": 0,
-            }
-
         return analyze_image_and_text(
             image_path=image_path,
             text_description=symptoms,
src/model_loader.py CHANGED
@@ -36,10 +36,6 @@ def check_connection() -> tuple[bool, str]:
     Ping the vLLM server's /v1/models endpoint.
     Returns (is_connected: bool, status_message: str).
     """
-    if config.MOCK_MODE:
-        print("[Connection] MOCK_MODE=true — skipping connection check.")
-        return False, "Mock mode enabled"
-
     import requests as req
 
     url = f"{config.VLLM_API_URL}/v1/models"
@@ -58,21 +54,18 @@ def check_connection() -> tuple[bool, str]:
         print(f"[Connection] FAILED — ConnectionError: {exc}")
         return False, f"ConnectionError: {exc}"
     except req.exceptions.Timeout:
-        print(f"[Connection] FAILED — Timeout after 5s (server may be down or IP wrong)")
+        print("[Connection] FAILED — Timeout after 5s")
         return False, "Timeout (5s)"
     except Exception as exc:
-        print(f"[Connection] FAILED — Unexpected error: {type(exc).__name__}: {exc}")
+        print(f"[Connection] FAILED — {type(exc).__name__}: {exc}")
         return False, f"{type(exc).__name__}: {exc}"
 
 
-def generate_response(prompt: str, image_path: str = None) -> str | None:
+def generate_response(prompt: str, image_path: str = None) -> str:
     """
     Send a request to the vLLM endpoint and return the model's text output.
-    Returns None on failure so callers fall back to mock logic.
+    Raises RuntimeError if the backend is unreachable or returns an error.
     """
-    if config.MOCK_MODE:
-        return None
-
     try:
         client = _get_client()
 
@@ -84,9 +77,7 @@ def generate_response(prompt: str, image_path: str = None) -> str:
             "content": [
                 {
                     "type": "image_url",
-                    "image_url": {
-                        "url": f"data:{mime};base64,{b64}",
-                    },
+                    "image_url": {"url": f"data:{mime};base64,{b64}"},
                 },
                 {"type": "text", "text": prompt},
             ],
@@ -104,6 +95,4 @@ def generate_response(prompt: str, image_path: str = None) -> str:
         return response.choices[0].message.content
 
     except Exception as exc:
-        print(f"[ModelLoader] vLLM call failed ({exc}). Falling back to mock mode.")
-        config.MOCK_MODE = True
-        return None
+        raise RuntimeError(f"AMD Cloud backend unreachable: {exc}") from exc
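`raise RuntimeError(...) from exc` preserves the original failure on `__cause__`, so logs and tracebacks still show whether the root cause was a timeout, a refused connection, or an API error. A minimal illustration of the chaining pattern (`call_backend` is a stand-in for the OpenAI-client call):

```python
def call_backend() -> str:
    """Stand-in for the vLLM/OpenAI client call; always fails here."""
    raise ConnectionError("connection refused")


def generate() -> str:
    """Wrap any backend failure in RuntimeError, keeping the cause chained."""
    try:
        return call_backend()
    except Exception as exc:
        raise RuntimeError(f"AMD Cloud backend unreachable: {exc}") from exc


try:
    generate()
except RuntimeError as err:
    print(type(err.__cause__).__name__)  # → ConnectionError
```

Callers only need to catch the single `RuntimeError` type, while the chained traceback keeps the low-level detail for debugging.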
 
 
test_mock_pipeline.py DELETED
@@ -1,20 +0,0 @@
-from src.inference import MediVisionPipeline
-import os
-
-def test_pipeline():
-    print("Initializing Pipeline...")
-    pipeline = MediVisionPipeline()
-    print("Pipeline Initialized.")
-
-    image_path = None  # Testing without image first
-    symptoms = "I have a red rash on my arm that is itchy."
-
-    print(f"Testing with symptoms: {symptoms}")
-    result = pipeline.process(image_path, symptoms, lang="en")
-    print(f"Result (EN): {result}")
-
-    result_vn = pipeline.process(image_path, symptoms, lang="vn")
-    print(f"Result (VN): {result_vn}")
-
-if __name__ == "__main__":
-    test_pipeline()