Update README for v13b
README.md
CHANGED
PPO-trained agent for [OpenFront.io](https://openfront.io), a multiplayer territory control game.

## Model Version: v13b

Current best model, trained with a normalized elimination reward and a winner bonus.

## Training Details

- **Algorithm:** PPO (Proximal Policy Optimization)
- **Architecture:** Actor-Critic with shared backbone (512→512→256); a sketch follows this list
- **Observation dim:** 80 (16 player stats + 16 neighbors × 4 features)
- **Action space:** MultiDiscrete [17 action types, 16 targets, 5 troop fractions]
- **Maps:** plains, big_plains, world, giantworldmap, ocean_and_land, half_land_half_ocean (random per episode)
- **Parallel envs:** 16
- **Learning rate:** 1.5e-4 (constant)
- **Rollout steps:** 1024
- **Batch size:** 16,384
- **Value function coefficient:** 0.5
- **Updates trained:** 1550 (ongoing)
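
For orientation, here is a minimal sketch of an actor-critic with these dimensions. It is an assumption about the shape of the network, not the repository's actual `train.ActorCritic` (whose layer names, activations, and `max_neighbors` handling may differ):

```python
import torch
import torch.nn as nn

# Hypothetical sketch only: a shared 512->512->256 MLP backbone feeding three
# MultiDiscrete logit heads (17 action types, 16 targets, 5 troop fractions)
# and a scalar value head. The real train.ActorCritic may be structured differently.
class ActorCriticSketch(nn.Module):
    def __init__(self, obs_dim: int = 80, hidden_sizes=(512, 512, 256)):
        super().__init__()
        layers, last = [], obs_dim
        for h in hidden_sizes:
            layers += [nn.Linear(last, h), nn.Tanh()]
            last = h
        self.backbone = nn.Sequential(*layers)  # shared by actor and critic
        self.heads = nn.ModuleList([nn.Linear(last, n) for n in (17, 16, 5)])
        self.value_head = nn.Linear(last, 1)    # critic output

    def forward(self, obs: torch.Tensor):
        z = self.backbone(obs)
        return [head(z) for head in self.heads], self.value_head(z)
```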
## Reward Design (v13)

Normalized elimination reward: the total reward sums to +1.0 on a full win, regardless of opponent count (a concrete sketch follows this list):

- **Per-kill:** `+1/N` for each opponent eliminated (N = number of starting opponents)
- **Winner bonus:** opponents still alive when `game.getWinner()` fires are credited as `aliveCount/N`
- **Death penalty:** -1.0
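
Read concretely, the reward terms could be combined like this; the function and variable names below are illustrative, not taken from the training code:

```python
def v13_reward(kills: int, n_opponents: int, died: bool, won: bool, alive: int) -> float:
    """Illustrative sketch of the v13 reward terms (names are hypothetical).

    Over a full episode that ends in a win, the kill terms and the winner
    bonus sum to kills/N + alive/N = 1.0, independent of N.
    """
    r = kills / n_opponents          # +1/N per opponent eliminated this step
    if won:
        r += alive / n_opponents     # credit opponents still alive at game.getWinner()
    if died:
        r -= 1.0                     # death penalty
    return r
```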
## Curriculum

A win-rate-gated, 12-stage curriculum advances through Easy → Medium → Hard difficulty and from 2 to 15 opponents. A stage advances only when the rolling win rate over a 200-episode window exceeds that stage's threshold (75% for the earliest stages, tapering to 45%).
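
The gating rule could look roughly like the following; the 200-episode window comes from the description above, while everything else (names, where stage thresholds come from) is assumed:

```python
from collections import deque

WINDOW = 200  # rolling win-rate window from the curriculum description

def should_advance(results: deque, threshold: float) -> bool:
    """True once the win rate over the last WINDOW episodes exceeds the stage threshold."""
    return len(results) == WINDOW and sum(results) / WINDOW > threshold

# Hypothetical usage at the end of each episode:
#   results = deque(maxlen=WINDOW)
#   results.append(1 if won else 0)
#   if should_advance(results, stage_threshold):  # e.g. 0.75 early, down to 0.45
#       stage += 1
```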
## Eval Results
- **Easy/2 opponents:** 100% win rate (20/20 games)
## Usage

```python
from train import ActorCritic
import torch

model = ActorCritic(obs_dim=80, max_neighbors=16, hidden_sizes=[512, 512, 256])
checkpoint = torch.load("best_model.pt", map_location="cpu", weights_only=False)
model.load_state_dict(checkpoint["model_state_dict"])
model.eval()
```
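
Continuing from the snippet above, one way to run inference could be the following. It assumes the model returns a list of per-component logits for the MultiDiscrete action space together with a value estimate; the actual `forward()` signature in `train.py` may differ:

```python
# Hypothetical inference sketch; the real forward() signature may differ.
obs = torch.zeros(1, 80)  # placeholder 80-dim observation from the environment
with torch.no_grad():
    logits, value = model(obs)  # assumed: list of 3 logit tensors + value
    action = [torch.distributions.Categorical(logits=l).sample().item() for l in logits]
# action -> [action_type, target_index, troop_fraction_bucket]
```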