
BA Agent Post-Training Experiment

This directory is a standalone experiment workspace copied out of the main training path.

Training objectives:

  • Keep data_insight isolated from long-chain planning and report writing.
  • Reuse one shared base model with role-specific LoRA adapters.
  • Optimize evidence-grounded analysis and conclusion reliability before attempting long-report end-to-end RL.

Recommended order:

  1. Train data_insight with SFT.
  2. Train writer with SFT.
  3. Run writer DPO and reward modeling on BA preference data.
  4. Optionally run PPO with the reward adapter on prompt-only tasks.

Supported SFT schema:

{"system":"optional role prompt","prompt":"question or analysis context","response":"target answer"}

Also accepted and converted automatically:

  • instruction + input + output
  • question + answer
  • messages / conversations where the final assistant turn is the target
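The conversion above is handled by data_adapter.py. As a rough illustration of the mapping (field handling here is a sketch with assumed behavior, not the actual implementation), each supported variant can be normalized like this:

```python
def normalize_sft(example):
    """Map any supported SFT record to {"system", "prompt", "response"} (illustrative sketch)."""
    if "prompt" in example and "response" in example:
        out = dict(example)
    elif "instruction" in example:
        # instruction + input + output: fold the optional input into the prompt
        prompt = example["instruction"]
        if example.get("input"):
            prompt = f"{prompt}\n\n{example['input']}"
        out = {"prompt": prompt, "response": example["output"]}
    elif "question" in example:
        # question + answer
        out = {"prompt": example["question"], "response": example["answer"]}
    elif "messages" in example or "conversations" in example:
        # final assistant turn is the target; earlier turns form the prompt
        turns = example.get("messages") or example.get("conversations")
        out = {
            "prompt": "\n".join(t["content"] for t in turns[:-1]),
            "response": turns[-1]["content"],
        }
    else:
        raise ValueError(f"unrecognized SFT schema: {sorted(example)}")
    out.setdefault("system", "")
    return out
```

All variants collapse to the same three-field record, so sft.py only ever sees one schema.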

Supported DPO / reward schema:

{"system":"optional role prompt","prompt":"same prompt shown to both candidates","chosen":"preferred answer","rejected":"dispreferred answer"}

Also accepted and converted automatically:

  • question + response_chosen + response_rejected
  • Anthropic/hh-rlhf style chosen / rejected
  • PKU-Alignment/PKU-SafeRLHF-* style pairwise columns
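Pairwise records normalize the same way. The sketch below shows the idea for the explicit-column variants (field names and error handling are assumptions; hh-rlhf-style transcripts additionally need the shared prompt split out of the chosen/rejected texts, which is omitted here):

```python
def normalize_preference(example):
    """Map a supported pairwise record to {"system", "prompt", "chosen", "rejected"} (illustrative sketch)."""
    if {"prompt", "chosen", "rejected"} <= example.keys():
        out = dict(example)
    elif "response_chosen" in example:
        # question + response_chosen + response_rejected
        out = {
            "prompt": example["question"],
            "chosen": example["response_chosen"],
            "rejected": example["response_rejected"],
        }
    else:
        raise ValueError(f"unrecognized preference schema: {sorted(example)}")
    out.setdefault("system", "")
    return out
```

Both dpo.py and reward_model.py consume the same normalized pair, so one preference file serves both stages.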

Supported PPO prompt schema:

{"system":"optional role prompt","prompt":"generation prompt only"}

Suggested role split:

  • data_insight: facts, supported insights, evidence references, and uncertainty notes only
  • writer: briefs, chatbot answers, and section drafts that consume structured evidence

Files in this experiment:

  • utils.py: local copy of shared training helpers and arguments
  • data_adapter.py: local schema normalizer for SFT / DPO / reward / PPO
  • sft.py, dpo.py, reward_model.py, ppo_multi_adapter.py: experiment training entrypoints
  • merge_adapter.py, ma_ppo_config.py, ma_ppo_trainer.py: copied dependencies needed by this experiment
  • data/*.sample.jsonl: schema examples and smoke-test inputs
  • scripts/run_ba_role_*.sh: standalone run scripts

Quick start:

bash ./experiments/ba_agent_posttrain/scripts/run_ba_role_sft.sh \
  ROLE_NAME=data_insight \
  SFT_DATASET_NAME=./experiments/ba_agent_posttrain/data/data_insight_sft.sample.jsonl

bash ./experiments/ba_agent_posttrain/scripts/run_ba_role_sft.sh \
  ROLE_NAME=writer \
  SFT_DATASET_NAME=./experiments/ba_agent_posttrain/data/writer_sft.sample.jsonl

bash ./experiments/ba_agent_posttrain/scripts/run_ba_role_dpo.sh \
  ROLE_NAME=writer \
  PREFERENCE_DATASET_NAME=./experiments/ba_agent_posttrain/data/writer_preference.sample.jsonl

For real runs, replace the sample files with your own:

  • data_insight_sft.jsonl
  • writer_sft.jsonl
  • writer_preference.jsonl
  • writer_prompts.jsonl