---
title: Container Port
colorFrom: gray
colorTo: blue
sdk: docker
pinned: false
license: mit
short_description: A simulation-based RL environment for container management
---
# Container Port Environment
An OpenEnv environment for container-yard stack planning at a shipping terminal.
## Task
Incoming containers have priority `1`, `2`, or `3`. The agent places each one into a bounded stack. During retrieval, every container sitting above the target counts as a rehandle and adds cost.
Goal: minimize total rehandles across the episode.
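The rehandle cost can be sketched in a few lines. This is an illustrative model, not the environment's actual API: the function name `count_rehandles` and the bottom-to-top list layout are assumptions.

```python
def count_rehandles(stack: list[int], target: int) -> int:
    """Count containers sitting above `target` in a single stack.

    Stacks are modeled bottom-to-top, so every container stored after
    the target's index must be moved (rehandled) to retrieve it.
    """
    if target not in stack:
        return 0
    return len(stack) - 1 - stack.index(target)

# Retrieving container 7 from the stack [7, 2, 5] (bottom first)
# requires moving the two containers on top of it.
print(count_rehandles([7, 2, 5], 7))  # → 2
```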
## Difficulty Levels
| Parameter | Easy | Medium | Hard |
|---|---|---|---|
| Stacks | 6 | 8 | 10 |
| Max height | 4 | 5 | 6 |
| Containers | 20 | 35 | 50 |
| Retrieval interval | 5 | 5 | 4 |
| Lookahead | 5 | 3 | 0 |
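The table above can be restated as a plain dict for quick reference. This is a convenience sketch; the environment's real configuration object and field names may differ.

```python
# Difficulty parameters, transcribed from the table above.
DIFFICULTY = {
    "easy":   {"stacks": 6,  "max_height": 4, "containers": 20,
               "retrieval_interval": 5, "lookahead": 5},
    "medium": {"stacks": 8,  "max_height": 5, "containers": 35,
               "retrieval_interval": 5, "lookahead": 3},
    "hard":   {"stacks": 10, "max_height": 6, "containers": 50,
               "retrieval_interval": 4, "lookahead": 0},
}

# Sanity check: total yard capacity must hold every container.
for name, cfg in DIFFICULTY.items():
    assert cfg["stacks"] * cfg["max_height"] >= cfg["containers"], name
```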
## Run Locally
```bash
pip install -e .
uvicorn server.app:app --host 0.0.0.0 --port 7860
```
Web UI: `http://127.0.0.1:7860/web`
Interactive dashboard with difficulty dropdown: `http://127.0.0.1:7860/dashboard`
For manual stateful checks, use the web endpoints:
```bash
curl http://127.0.0.1:7860/health
curl -X POST http://127.0.0.1:7860/web/reset -H "Content-Type: application/json" -d "{\"difficulty\":\"easy\"}"
curl -X POST http://127.0.0.1:7860/web/step -H "Content-Type: application/json" -d "{\"action\":{\"stack_index\":0}}"
```
`/reset` and `/step` are stateless simulation endpoints in `openenv-core 0.2.3`. For browser-style interactive testing, use `/web`, `/web/reset`, `/web/step`, or the WebSocket flow used by `inference.py`.
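The same stateful flow can be driven from Python using only the standard library. The payload shapes below mirror the curl examples; treat this as a sketch rather than an official client (the helper names are ours, and the response schema is not assumed).

```python
import json
import urllib.request

BASE_URL = "http://127.0.0.1:7860"

def reset_payload(difficulty: str) -> dict:
    # Matches: -d "{\"difficulty\":\"easy\"}"
    return {"difficulty": difficulty}

def step_payload(stack_index: int) -> dict:
    # Matches: -d "{\"action\":{\"stack_index\":0}}"
    return {"action": {"stack_index": stack_index}}

def post(path: str, payload: dict) -> dict:
    """POST a JSON body to the running server and decode the JSON reply."""
    req = urllib.request.Request(
        BASE_URL + path,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def run_episode(difficulty: str = "easy", steps: int = 3) -> None:
    """Reset, then repeatedly place on stack 0 -- purely illustrative."""
    print(post("/web/reset", reset_payload(difficulty)))
    for _ in range(steps):
        print(post("/web/step", step_payload(0)))
```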
## Run Inference
```bash
python inference.py --difficulty all
python inference.py --difficulty easy
python inference.py --url http://127.0.0.1:7860 --difficulty all
```
LLM mode is enabled by default in `inference.py` and requires:
```bash
export API_BASE_URL="https://api.openai.com/v1" # or validator-provided proxy URL
export HF_TOKEN="your-validator-provided-token"
```
`MODEL_NAME` is optional and defaults to `meta-llama/Llama-3.1-8B-Instruct`.
To run greedy mode locally without LLM calls:
```bash
python inference.py --no-llm
```
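One plausible greedy placement heuristic looks like the following. This is only an illustration of the idea; the actual `--no-llm` policy in `inference.py` may differ, and it assumes low-numbered priorities are retrieved first.

```python
def greedy_place(stacks: list[list[int]], priority: int, max_height: int) -> int:
    """Pick a stack index for an incoming container of the given priority.

    Prefer a non-full stack whose top container has priority >= the
    incoming one, so urgent containers are never buried under less
    urgent ones; otherwise fall back to the shortest non-full stack.
    """
    open_stacks = [i for i, s in enumerate(stacks) if len(s) < max_height]
    good = [i for i in open_stacks if not stacks[i] or stacks[i][-1] >= priority]
    candidates = good or open_stacks
    return min(candidates, key=lambda i: len(stacks[i]))
```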
## Docker
```bash
docker build -t container-port-env .
docker run -p 7860:7860 container-port-env
```
## Tests
Run the full test suite:
```bash
pytest tests/test_openenv_env.py -v
```
| Test | What it covers |
|---|---|
| `test_reset_returns_valid_obs` | Reset returns correct stack count, `step=0`, no rehandles |
| `test_step_valid_action` | Valid placement increments step and fills stack |
| `test_step_invalid_action_penalized` | Out-of-range stack index returns `-2.0` reward |
| `test_score_in_range` | Full episode score stays in `[0.0, 1.0]` |
| `test_full_episode_completes` | All 3 difficulties reach `done=True` within 500 steps |
| `test_lookahead_visibility` | Easy shows more upcoming retrievals than hard (hard=0) |
| `test_reward_is_dense` | At least 50% of steps have non-zero reward |
| `test_no_double_retrieval` | `retrieval_pointer` never exceeds queue length |
| `test_health_route` | GET `/health` returns 200 |
| `test_web_ui_route` | GET `/web` returns 200 (Gradio UI) |
| `test_http_reset_returns_observation` | POST `/reset` returns valid easy-mode observation |
| `test_http_reset_then_step_preserves_state` | Sequential reset+step operates on same episode |