BA Agent Post-Training Experiment
This directory is a standalone experiment workspace copied out of the main training path.
Training objectives:
- Keep `data_insight` isolated from long-chain planning and report writing.
- Reuse one shared base model with role-specific LoRA adapters.
- Optimize evidence-grounded analysis and conclusion reliability before attempting long-report end-to-end RL.
Recommended order:
- Train `data_insight` with SFT.
- Train `writer` with SFT.
- Run `writer` DPO and reward modeling on BA preference data.
- Optionally run PPO with the reward adapter on prompt-only tasks.
Supported SFT schema:
```json
{"system": "optional role prompt", "prompt": "question or analysis context", "response": "target answer"}
```
Supported automatically:
- `instruction` + `input` + `output`
- `question` + `answer`
- `messages` / `conversations` where the final assistant turn is the target
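As a sketch of what this normalization looks like, the alternate SFT schemas can be mapped onto the canonical form as below. The function name and mapping details here are illustrative assumptions, not the actual `data_adapter.py` API:

```python
def to_sft_record(row: dict) -> dict:
    """Map any supported SFT schema onto {"system", "prompt", "response"}.

    Illustrative sketch only; the real normalizer lives in data_adapter.py.
    """
    if "response" in row:  # already canonical
        return {"system": row.get("system", ""), "prompt": row["prompt"],
                "response": row["response"]}
    if "instruction" in row:  # instruction + input + output
        prompt = row["instruction"]
        if row.get("input"):  # fold the optional input below the instruction
            prompt += "\n" + row["input"]
        return {"system": row.get("system", ""), "prompt": prompt,
                "response": row["output"]}
    if "question" in row:  # question + answer
        return {"system": row.get("system", ""), "prompt": row["question"],
                "response": row["answer"]}
    msgs = row.get("messages") or row.get("conversations")
    if msgs:  # chat format: the final assistant turn is the target
        system = next((m["content"] for m in msgs if m["role"] == "system"), "")
        prompt = [m["content"] for m in msgs if m["role"] == "user"][-1]
        response = [m["content"] for m in msgs if m["role"] == "assistant"][-1]
        return {"system": system, "prompt": prompt, "response": response}
    raise ValueError(f"unrecognized SFT schema: {sorted(row)}")
```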
Supported DPO / reward schema:
```json
{"system": "optional role prompt", "prompt": "same prompt shown to both candidates", "chosen": "preferred answer", "rejected": "dispreferred answer"}
```
Supported automatically:
- `question` + `response_chosen` + `response_rejected`
- Anthropic/hh-rlhf-style `chosen` / `rejected`
- PKU-Alignment/PKU-SafeRLHF-* style pairwise columns
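The pairwise variants can likewise be sketched as a mapping onto the canonical preference record. Again the function name is illustrative, and the PKU-SafeRLHF handling assumes its public column names (`response_0`, `response_1`, `better_response_id`):

```python
def to_preference_record(row: dict) -> dict:
    """Map supported pairwise schemas onto {"system", "prompt", "chosen", "rejected"}.

    Illustrative sketch only; the real normalizer lives in data_adapter.py.
    """
    if "chosen" in row and "prompt" in row:  # already canonical
        return {"system": row.get("system", ""), "prompt": row["prompt"],
                "chosen": row["chosen"], "rejected": row["rejected"]}
    if "response_chosen" in row:  # question + response_chosen + response_rejected
        return {"system": row.get("system", ""), "prompt": row["question"],
                "chosen": row["response_chosen"], "rejected": row["response_rejected"]}
    if "response_0" in row:  # PKU-SafeRLHF style: better_response_id picks the winner
        better = int(row["better_response_id"])
        return {"system": row.get("system", ""), "prompt": row["prompt"],
                "chosen": row[f"response_{better}"],
                "rejected": row[f"response_{1 - better}"]}
    raise ValueError(f"unrecognized preference schema: {sorted(row)}")
```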
Supported PPO prompt schema:
```json
{"system": "optional role prompt", "prompt": "generation prompt only"}
```
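Because the PPO input is prompt-only, a dataset file can be assembled with nothing but the standard library. This helper is a standalone sketch (not part of the repo), and the path and prompts in the usage are made up:

```python
import json
from pathlib import Path

def write_ppo_prompts(prompts, path, system=""):
    """Write prompt-only records in the PPO schema, one JSON object per line."""
    path = Path(path)
    with path.open("w", encoding="utf-8") as f:
        for p in prompts:
            f.write(json.dumps({"system": system, "prompt": p},
                               ensure_ascii=False) + "\n")
    return path
```

For example, `write_ppo_prompts(["Draft a brief on Q3 churn."], "writer_prompts.jsonl", system="You are the writer role.")` produces one conforming JSONL line.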
Suggested role split:
- `data_insight`: facts, supported insights, evidence refs, and uncertainty only
- `writer`: briefs, chatbot answers, and section drafts that consume structured evidence
Files in this experiment:
- `utils.py`: local copy of shared training helpers and arguments
- `data_adapter.py`: local schema normalizer for SFT / DPO / reward / PPO
- `sft.py`, `dpo.py`, `reward_model.py`, `ppo_multi_adapter.py`: experiment training entrypoints
- `merge_adapter.py`, `ma_ppo_config.py`, `ma_ppo_trainer.py`: copied dependencies needed by this experiment
- `data/*.sample.jsonl`: schema examples and smoke-test inputs
- `scripts/run_ba_role_*.sh`: standalone run scripts
Quick start:
```bash
bash ./experiments/ba_agent_posttrain/scripts/run_ba_role_sft.sh \
  ROLE_NAME=data_insight \
  SFT_DATASET_NAME=./experiments/ba_agent_posttrain/data/data_insight_sft.sample.jsonl

bash ./experiments/ba_agent_posttrain/scripts/run_ba_role_sft.sh \
  ROLE_NAME=writer \
  SFT_DATASET_NAME=./experiments/ba_agent_posttrain/data/writer_sft.sample.jsonl

bash ./experiments/ba_agent_posttrain/scripts/run_ba_role_dpo.sh \
  ROLE_NAME=writer \
  PREFERENCE_DATASET_NAME=./experiments/ba_agent_posttrain/data/writer_preference.sample.jsonl
```
For real runs, replace the sample files with your own:
- `data_insight_sft.jsonl`
- `writer_sft.jsonl`
- `writer_preference.jsonl`
- `writer_prompts.jsonl`
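Before a real run it is worth smoke-checking that every line of your files parses as JSON and carries the keys its trainer expects. The checker below is a standalone sketch (not part of the repo), and the required-key sets assume the canonical schemas described above:

```python
import json

# Minimal required keys per file, assuming the canonical schemas; "system" is optional.
REQUIRED_KEYS = {
    "data_insight_sft.jsonl": {"prompt", "response"},
    "writer_sft.jsonl": {"prompt", "response"},
    "writer_preference.jsonl": {"prompt", "chosen", "rejected"},
    "writer_prompts.jsonl": {"prompt"},
}

def check_jsonl(path, required):
    """Return a list of 'line: problem' strings; an empty list means the file looks sane."""
    problems = []
    with open(path, encoding="utf-8") as f:
        for i, line in enumerate(f, 1):
            if not line.strip():
                continue  # tolerate blank lines
            try:
                row = json.loads(line)
            except json.JSONDecodeError as e:
                problems.append(f"{i}: invalid JSON ({e.msg})")
                continue
            missing = required - row.keys()
            if missing:
                problems.append(f"{i}: missing keys {sorted(missing)}")
    return problems
```

Running `check_jsonl(path, REQUIRED_KEYS[name])` over all four files before launching a run catches most formatting mistakes cheaply.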