mekosotto + Claude Opus 4.7 (1M context) committed
Commit decc9ff · 1 Parent(s): dc31dba

feat(llm): user-question-driven prompt (language match + intent split)


The previous prompt forced every LLM call into "paper-style rationale"
mode, ignoring the user's actual question. Result: Turkish questions
got English paper-prose answers, casual greetings got SHAP rationales,
"what does 93% mean" got the same canned SHAP citation.

New prompt:
- Puts the user's question above the data, not below it.
- When a non-default question is supplied: instructs the model to
  match the question's language, answer it directly, and reply
  conversationally if the question is casual.
- When no question is supplied (default /explain caller): preserves
  the original 2-4 sentence paper-style rationale behavior.

11/11 LLM unit tests pass; template path untouched.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
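The question/no-question split described above can be sketched in isolation. Note that `build_prompt` below is a simplified stand-in, not the repo's `_build_llm_prompt`, and the instruction strings are illustrative:

```python
# Standalone sketch of the branching added by this commit; a stand-in for
# _build_llm_prompt in src/llm/explainer.py, not the actual repo code.
def build_prompt(payload: dict) -> str:
    raw_q = (payload.get("user_question") or "").strip()
    user_q = raw_q or "Explain the result in 2-4 sentences."
    if raw_q:
        # Explicit question: answer it directly, in its own language.
        instructions = "Answer the question in the same language it was asked."
    else:
        # Default /explain caller: keep the paper-style rationale behavior.
        instructions = "Write a 2-4 sentence paper-style rationale."
    return f"User question: {user_q}\n\n{instructions}"

print(build_prompt({}))                                  # paper-style branch
print(build_prompt({"user_question": "93% ne demek?"}))  # direct-answer branch
```

An empty or missing `user_question` falls through to the old one-shot summary, so existing `/explain` callers see no change.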

Files changed (1)
  1. src/llm/explainer.py +29 -6
src/llm/explainer.py CHANGED
@@ -212,7 +212,12 @@ def _build_llm_prompt(payload: ExplainPayload, modality: str = "bbb") -> str:
         ),
     }
     header = headers.get(modality, headers["bbb"])
-    user_q = payload.get("user_question") or "Explain the result in 2-4 sentences."
+    raw_q = (payload.get("user_question") or "").strip()
+    # When the caller did not supply a question, default to the paper-style
+    # rationale prompt; this preserves the original behavior for /explain
+    # callers that just want a one-shot summary.
+    user_q = raw_q or "Explain the result in 2-4 sentences."
+    has_explicit_question = bool(raw_q)
     body_lines: list[str] = []
     if modality == "bbb":
         top_features = payload.get("top_features") or []
@@ -251,13 +256,31 @@ def _build_llm_prompt(payload: ExplainPayload, modality: str = "bbb") -> str:
         # fallback uses BBB-shape prompt
         body_lines.append(f"Payload: {payload!r}")
 
+    if has_explicit_question:
+        instructions = (
+            "Instructions:\n"
+            "- Respond in the SAME LANGUAGE as the user's question above "
+            "(Turkish question → Turkish answer, English → English, etc.).\n"
+            "- Directly answer the user's question using the data below; "
+            "do not default to a generic paper-style summary unless they "
+            "asked for one.\n"
+            "- If the question is conversational or off-topic (e.g. a "
+            "greeting), reply briefly and conversationally — do not force "
+            "a clinical rationale.\n"
+            "- Cite specific numbers from the data when relevant.\n"
+            "- No preamble, no apologies, just the answer."
+        )
+    else:
+        instructions = (
+            "Write a 2-4 sentence rationale a researcher could paste into "
+            "a paper. Avoid hedging; be specific about the numbers. "
+            "Respond with the rationale only, no preamble."
+        )
     return (
-        f"{header} Given the details below, write a 2-4 sentence rationale a "
-        f"researcher could paste into a paper. Avoid hedging; be specific "
-        f"about the numbers.\n\n"
-        f"{body_lines[0]}\n\n"
+        f"{header}\n\n"
         f"User question: {user_q}\n\n"
-        f"Respond with the rationale only, no preamble."
+        f"Data for this prediction:\n{body_lines[0]}\n\n"
+        f"{instructions}"
     )
 
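As a quick check of the new return statement's layout (header, then question, then data, then instructions), the values below are made up, only the ordering mirrors the diff:

```python
# Hypothetical stand-ins for header / user_q / body_lines[0] / instructions;
# only the f-string layout mirrors the new return statement above.
header = "You are an assistant explaining a BBB-permeability prediction."
user_q = "What does 93% mean here?"
data = "probability=0.93; top feature: TPSA (SHAP -0.41)"
instructions = "Answer the question directly and cite the numbers."

prompt = (
    f"{header}\n\n"
    f"User question: {user_q}\n\n"
    f"Data for this prediction:\n{data}\n\n"
    f"{instructions}"
)

# The user's question now sits above the data block, as the commit message says.
assert prompt.index("User question:") < prompt.index("Data for this prediction:")
print(prompt)
```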