akseljoonas HF Staff committed on
Commit a2e8010 · 1 Parent(s): 0a9e96d

Strip provider_specific_fields from research sub-agent messages


LiteLLM's response Message carries internal fields (provider_specific_fields,
reasoning_content) that the HF router's strict OpenAI schema rejects when they
are echoed back in a follow-up request. Rebuild the assistant Message with only
role / content / tool_calls before appending it to the research history.

Reproduced on MiniMaxAI/MiniMax-M2.7 (the reported failure) and verified the
fix across MiniMax, Kimi, GLM, Claude Opus, and GPT-5 in a multi-turn
tool-calling loop.
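
The fix above can be sketched in isolation. This is a minimal, self-contained version using plain dicts rather than LiteLLM's Message class (the dict shape mirrors the OpenAI chat schema; the field names `provider_specific_fields` and `reasoning_content` come from the commit message, everything else here is illustrative):

```python
def sanitize_assistant_message(msg: dict) -> dict:
    """Rebuild an assistant turn with only the wire-safe fields,
    dropping provider-internal keys (provider_specific_fields,
    reasoning_content, ...) that a strict OpenAI schema rejects."""
    clean = {"role": "assistant", "content": msg.get("content")}
    if msg.get("tool_calls"):
        clean["tool_calls"] = msg["tool_calls"]
    return clean

# Example: a raw response message as a provider might return it.
raw = {
    "role": "assistant",
    "content": None,
    "tool_calls": [{
        "id": "call_1",
        "type": "function",
        "function": {"name": "web_search", "arguments": "{}"},
    }],
    "provider_specific_fields": {"internal": True},  # rejected if echoed back
    "reasoning_content": "model-internal reasoning",  # rejected if echoed back
}

messages = []
messages.append(sanitize_assistant_message(raw))
```

Only `role`, `content`, and `tool_calls` survive into the history, so the follow-up request validates against the router's schema.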

Files changed (1)
  1. agent/tools/research_tool.py +10 -2
agent/tools/research_tool.py

@@ -351,8 +351,16 @@ async def research_handler(
         content = msg.content or "Research completed but no summary generated."
         return content, True
 
-        # Execute tool calls and add results
-        messages.append(msg)
+        # Execute tool calls and add results.
+        # Rebuild the assistant message with only the wire-safe fields —
+        # LiteLLM's raw Message carries `provider_specific_fields` and
+        # `reasoning_content`, which the HF router's OpenAI schema rejects
+        # if we echo them back in the next request.
+        messages.append(Message(
+            role="assistant",
+            content=msg.content,
+            tool_calls=msg.tool_calls,
+        ))
         for tc in msg.tool_calls:
             try:
                 tool_args = json.loads(tc.function.arguments)