h1manshu committed
Commit 09ec238 · verified · 1 Parent(s): 0f13ee5

Upload folder using huggingface_hub

Dockerfile ADDED
@@ -0,0 +1,81 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the BSD-style license found in the
# LICENSE file in the root directory of this source tree.

# Multi-stage build using openenv-base
# This Dockerfile is flexible and works for both:
# - In-repo environments (with local OpenEnv sources)
# - Standalone environments (with openenv from PyPI/Git)
# The build script (openenv build) handles context detection and sets appropriate build args.

ARG BASE_IMAGE=ghcr.io/meta-pytorch/openenv-base:latest
FROM ${BASE_IMAGE} AS builder

WORKDIR /app

# Ensure git is available (required for installing dependencies from VCS)
RUN apt-get update && \
    apt-get install -y --no-install-recommends git && \
    rm -rf /var/lib/apt/lists/*

# Build argument to control whether we're building standalone or in-repo
ARG BUILD_MODE=in-repo
ARG ENV_NAME=code_review

# Copy environment code (always at root of build context)
COPY . /app/env

# For in-repo builds, openenv is already vendored in the build context
# For standalone builds, openenv will be installed via pyproject.toml
WORKDIR /app/env

# Ensure uv is available (for local builds where base image lacks it)
RUN if ! command -v uv >/dev/null 2>&1; then \
        curl -LsSf https://astral.sh/uv/install.sh | sh && \
        mv /root/.local/bin/uv /usr/local/bin/uv && \
        mv /root/.local/bin/uvx /usr/local/bin/uvx; \
    fi

# Install dependencies using uv sync
# If uv.lock exists, use it; otherwise resolve on the fly
RUN --mount=type=cache,target=/root/.cache/uv \
    if [ -f uv.lock ]; then \
        uv sync --frozen --no-install-project --no-editable; \
    else \
        uv sync --no-install-project --no-editable; \
    fi

RUN --mount=type=cache,target=/root/.cache/uv \
    if [ -f uv.lock ]; then \
        uv sync --frozen --no-editable; \
    else \
        uv sync --no-editable; \
    fi

# Final runtime stage
FROM ${BASE_IMAGE}

WORKDIR /app

# Copy the virtual environment from builder
COPY --from=builder /app/env/.venv /app/.venv

# Copy the environment code
COPY --from=builder /app/env /app/env

# Set PATH to use the virtual environment
ENV PATH="/app/.venv/bin:$PATH"

# Set PYTHONPATH so imports work correctly
ENV PYTHONPATH="/app/env:$PYTHONPATH"

# Health check
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
    CMD curl -f http://localhost:8000/health || exit 1

# Run the FastAPI server
# The module path is constructed to work with the /app/env structure
ENV ENABLE_WEB_INTERFACE=true
CMD ["sh", "-c", "cd /app/env && uvicorn server.app:app --host 0.0.0.0 --port 8000"]
README.md CHANGED
@@ -1,10 +1,255 @@
 ---
- title: Code Review
- emoji: 🌖
- colorFrom: gray
- colorTo: indigo
 sdk: docker
 pinned: false
 ---

- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
---
title: Code Review Environment Server
emoji: 🎳
colorFrom: green
colorTo: gray
sdk: docker
pinned: false
app_port: 8000
base_path: /web
tags:
- openenv
---

# Code Review Environment

An environment where an agent reviews pull requests: it inspects code diffs, comments on issues, suggests fixes, and issues an approve/reject decision. Useful for testing the env APIs as well as demonstrating environment usage patterns.

## Quick Start

The simplest way to use the Code Review environment is through the `CodeReviewEnv` class:

```python
from code_review import CodeReviewAction, CodeReviewEnv

# Create environment from Docker image
env = CodeReviewEnv.from_docker_image("code_review-env:latest")

try:
    # Reset to receive the first pull request
    result = env.reset()
    print(f"Reviewing PR: {result.observation.pr.title}")

    # Step 1: comment on the issues found
    result = env.step(
        CodeReviewAction(action_type="comment", comment="Possible SQL injection in db.py")
    )

    # Step 2: suggest a fix
    result = env.step(
        CodeReviewAction(
            action_type="suggest_fix",
            comment="Use parameterized queries",
            suggested_code="cursor.execute(query, (user_id,))",
        )
    )

    # Step 3: make the final decision
    result = env.step(CodeReviewAction(action_type="final_decision", decision="reject"))
    print(f"Reward: {result.reward}")

finally:
    # Always clean up
    env.close()
```

That's it! The `CodeReviewEnv.from_docker_image()` method handles:
- Starting the Docker container
- Waiting for the server to be ready
- Connecting to the environment
- Container cleanup when you call `close()`

## Building the Docker Image

Before using the environment, you need to build the Docker image:

```bash
# From project root
docker build -t code_review-env:latest -f server/Dockerfile .
```

## Deploying to Hugging Face Spaces

You can easily deploy your OpenEnv environment to Hugging Face Spaces using the `openenv push` command:

```bash
# From the environment directory (where openenv.yaml is located)
openenv push

# Or specify options
openenv push --namespace my-org --private
```

The `openenv push` command will:
1. Validate that the directory is an OpenEnv environment (checks for `openenv.yaml`)
2. Prepare a custom build for a Hugging Face Docker Space (enables the web interface)
3. Upload to Hugging Face (ensuring you're logged in)

### Prerequisites

- Authenticate with Hugging Face: the command will prompt for login if not already authenticated

### Options

- `--directory`, `-d`: Directory containing the OpenEnv environment (defaults to the current directory)
- `--repo-id`, `-r`: Repository ID in the format `username/repo-name` (defaults to `username/env-name` from `openenv.yaml`)
- `--base-image`, `-b`: Base Docker image to use (overrides the Dockerfile `FROM`)
- `--private`: Deploy the space as private (default: public)

### Examples

```bash
# Push to your personal namespace (defaults to username/env-name from openenv.yaml)
openenv push

# Push to a specific repository
openenv push --repo-id my-org/my-env

# Push with a custom base image
openenv push --base-image ghcr.io/meta-pytorch/openenv-base:latest

# Push as a private space
openenv push --private

# Combine options
openenv push --repo-id my-org/my-env --base-image custom-base:latest --private
```

After deployment, your space will be available at:
`https://huggingface.co/spaces/<repo-id>`

The deployed space includes:
- **Web Interface** at `/web` - Interactive UI for exploring the environment
- **API Documentation** at `/docs` - Full OpenAPI/Swagger interface
- **Health Check** at `/health` - Container health monitoring
- **WebSocket** at `/ws` - Persistent session endpoint for low-latency interactions

## Environment Details

### Action
**CodeReviewAction**: Describes a single review step
- `action_type` (str) - One of `comment`, `suggest_fix`, or `final_decision`
- `comment` (Optional[str]) - The review comment text
- `suggested_code` (Optional[str]) - Corrected code, for `suggest_fix` actions
- `decision` (Optional[str]) - `approve` or `reject`, for `final_decision` actions

### Observation
**CodeReviewObservation**: Contains the pull request under review and episode metadata
- `pr` (CodeReviewPullRequest) - The PR being reviewed (id, title, description, language, diffs)
- `previous_comments` (List[str]) - Comments made earlier in the episode
- `step_count` (int) - Steps taken so far
- `max_steps` (int) - Maximum steps per episode

### Reward
**CodeReviewReward**: Returned by each step once the review can be scored
- `score` (float) - Quality score for the review
- `feedback` (str) - Explanation of the score

On reset, no reward object is returned.
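As a sketch of how reward objects flow through an episode (using a local dataclass stand-in for `CodeReviewReward`, since the server-side scoring logic is not shown in this commit), a client can track the best score seen so far the same way `inference.py` does:

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class CodeReviewReward:
    """Stand-in mirroring models.CodeReviewReward."""
    score: float
    feedback: str


def best_score(rewards: List[Optional[CodeReviewReward]]) -> float:
    """Highest score across an episode; reset steps may yield None."""
    final = 0.0
    for reward in rewards:
        final = max(final, reward.score if reward else 0.0)
    return final


rewards = [
    None,  # reset returns no reward
    CodeReviewReward(score=0.4, feedback="issues listed"),
    CodeReviewReward(score=0.8, feedback="fix suggested"),
]
print(best_score(rewards))  # 0.8
```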

## Advanced Usage

### Connecting to an Existing Server

If you already have a Code Review environment server running, you can connect directly:

```python
from code_review import CodeReviewAction, CodeReviewEnv

# Connect to an existing server
env = CodeReviewEnv(base_url="<ENV_HTTP_URL_HERE>")

# Use as normal
result = env.reset()
result = env.step(CodeReviewAction(action_type="comment", comment="Looks good overall."))
```

Note: When connecting to an existing server, `env.close()` will NOT stop the server.

### Using the Context Manager

The client supports context manager usage for automatic connection management:

```python
from code_review import CodeReviewAction, CodeReviewEnv

# Connect with a context manager (auto-connects and closes)
with CodeReviewEnv(base_url="http://localhost:8000") as env:
    result = env.reset()
    print(f"Reviewing PR: {result.observation.pr.title}")
    # Multiple steps with low latency
    for comment in ["Missing input validation", "Add error handling", "Fix naming"]:
        result = env.step(CodeReviewAction(action_type="comment", comment=comment))
        print(f"Step {result.observation.step_count}/{result.observation.max_steps}")
```

The client uses WebSocket connections for:
- **Lower latency**: No HTTP connection overhead per request
- **Persistent session**: The server maintains your environment state
- **Efficient for episodes**: Better for many sequential steps

### Concurrent WebSocket Sessions

The server supports multiple concurrent WebSocket connections. To enable this,
modify `server/app.py` to use factory mode:

```python
# In server/app.py - use factory mode for concurrent sessions
app = create_app(
    CodeReviewEnvironment,  # Pass the class, not an instance
    CodeReviewAction,
    CodeReviewObservation,
    max_concurrent_envs=4,  # Allow 4 concurrent sessions
)
```

Then multiple clients can connect simultaneously:

```python
from concurrent.futures import ThreadPoolExecutor

from code_review import CodeReviewAction, CodeReviewEnv

def run_episode(client_id: int):
    with CodeReviewEnv(base_url="http://localhost:8000") as env:
        result = env.reset()
        for i in range(10):
            result = env.step(
                CodeReviewAction(action_type="comment", comment=f"Client {client_id}, step {i}")
            )
        return client_id, result.observation.step_count

# Run 4 episodes concurrently
with ThreadPoolExecutor(max_workers=4) as executor:
    results = list(executor.map(run_episode, range(4)))
```

## Development & Testing

### Direct Environment Testing

Test the environment logic directly without starting the HTTP server:

```bash
# From the environment root directory
python3 server/code_review_environment.py
```

This verifies that:
- The environment resets correctly
- Step executes actions properly
- State tracking works
- Rewards are calculated correctly
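A toy sketch of the reset/step/state-tracking checks listed above, using a stand-in class (the real logic lives in `server/code_review_environment.py`, which is not shown in this commit chunk):

```python
class ToyReviewEnvironment:
    """Stand-in with the reset/step contract described above."""

    def __init__(self, max_steps: int = 3):
        self.max_steps = max_steps
        self.step_count = 0

    def reset(self) -> dict:
        # Reset state tracking for a new episode
        self.step_count = 0
        return {"step_count": self.step_count, "done": False}

    def step(self, action_type: str) -> dict:
        # Episode ends at max_steps or on a final decision
        self.step_count += 1
        done = self.step_count >= self.max_steps or action_type == "final_decision"
        return {"step_count": self.step_count, "done": done}


env = ToyReviewEnvironment()
assert env.reset()["step_count"] == 0
assert env.step("comment")["done"] is False
assert env.step("final_decision")["done"] is True
```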

### Running Locally

Run the server locally for development:

```bash
uvicorn server.app:app --reload
```

## Project Structure

```
code_review/
├── .dockerignore                    # Docker build exclusions
├── __init__.py                      # Module exports
├── README.md                        # This file
├── openenv.yaml                     # OpenEnv manifest
├── pyproject.toml                   # Project metadata and dependencies
├── uv.lock                          # Locked dependencies (generated)
├── client.py                        # CodeReviewEnv client
├── models.py                        # Action and Observation models
└── server/
    ├── __init__.py                  # Server module exports
    ├── code_review_environment.py   # Core environment logic
    ├── app.py                       # FastAPI application (HTTP + WebSocket endpoints)
    └── Dockerfile                   # Container image definition
```
__init__.py ADDED
@@ -0,0 +1,17 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the BSD-style license found in the
# LICENSE file in the root directory of this source tree.

"""Code Review Environment."""

from .client import CodeReviewEnv
from .models import CodeReviewAction, CodeReviewObservation
from .server.code_review_environment import CodeReviewEnvironment

__all__ = [
    "CodeReviewAction",
    "CodeReviewEnv",
    "CodeReviewObservation",
    "CodeReviewEnvironment",
]
client.py ADDED
@@ -0,0 +1,142 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the BSD-style license found in the
# LICENSE file in the root directory of this source tree.

"""Code Review Environment Client."""

from typing import Dict

from openenv.core import EnvClient
from openenv.core.client_types import StepResult
from openenv.core.env_server.types import State

from .models import (
    CodeReviewAction,
    CodeReviewObservation,
    CodeReviewPullRequest,
    CodeReviewReward,
)


class CodeReviewEnv(EnvClient[CodeReviewAction, CodeReviewObservation, State]):
    """
    Client for the Code Review Environment.

    This client maintains a persistent WebSocket connection to the environment server,
    enabling efficient multi-step interactions with lower latency.
    Each client instance has its own dedicated environment session on the server.

    Example:
        >>> # Connect to a running server
        >>> with CodeReviewEnv(base_url="http://localhost:8000") as client:
        ...     result = client.reset()
        ...     print(result.observation.pr.title)
        ...
        ...     result = client.step(CodeReviewAction(action_type="comment", comment="Needs tests"))
        ...     print(result.observation.step_count)

    Example with Docker:
        >>> # Automatically start container and connect
        >>> client = CodeReviewEnv.from_docker_image("code_review-env:latest")
        >>> try:
        ...     result = client.reset()
        ...     result = client.step(CodeReviewAction(action_type="final_decision", decision="approve"))
        ... finally:
        ...     client.close()
    """

    def _step_payload(self, action: CodeReviewAction) -> Dict:
        """Serialize an action (model instance or plain dict) into the JSON payload."""
        if isinstance(action, dict):
            return {
                "action_type": action.get("action_type"),
                "comment": action.get("comment"),
                "suggested_code": action.get("suggested_code"),
                "decision": action.get("decision"),
            }
        return {
            "action_type": action.action_type,
            "comment": action.comment,
            "suggested_code": action.suggested_code,
            "decision": action.decision,
        }

    def _parse_result(self, payload: Dict) -> StepResult[CodeReviewObservation]:
        """
        Parse server response into StepResult[CodeReviewObservation].

        Args:
            payload: JSON response data from server

        Returns:
            StepResult with CodeReviewObservation
        """
        obs_data = payload.get("observation") or {}
        if "observation" in obs_data:  # unwrap the nested case
            obs_data = obs_data["observation"]

        if not obs_data or "pr" not in obs_data:
            raise ValueError(f"Invalid observation payload: {payload}")

        observation = CodeReviewObservation(
            pr=CodeReviewPullRequest(**obs_data["pr"]),
            previous_comments=obs_data.get("previous_comments") or [],
            step_count=obs_data.get("step_count", 0),
            max_steps=obs_data.get("max_steps", 3),
        )

        # Handle reward: step responses carry a dict, reset responses a float/None
        reward_data = payload.get("reward")
        reward = CodeReviewReward(**reward_data) if isinstance(reward_data, dict) else None

        return StepResult(
            observation=observation,
            reward=reward,
            done=payload.get("done", False),
        )

    def _parse_state(self, payload: Dict) -> State:
        """
        Parse server response into State object.

        Args:
            payload: JSON response from state request

        Returns:
            State object with episode_id and step_count
        """
        return State(
            episode_id=payload.get("episode_id"),
            step_count=payload.get("step_count", 0),
        )
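The nested-`observation` handling in `_parse_result` can be exercised on its own; a minimal sketch of just the unwrapping logic, with hypothetical payloads:

```python
def unwrap_observation(payload: dict) -> dict:
    """Return the innermost observation dict, tolerating one level of nesting."""
    obs = payload.get("observation") or {}
    if "observation" in obs:  # nested case: {"observation": {"observation": {...}}}
        obs = obs["observation"]
    return obs


flat = {"observation": {"pr": {"id": "1"}, "step_count": 1}}
nested = {"observation": {"observation": {"pr": {"id": "1"}, "step_count": 1}}}

# Both payload shapes resolve to the same observation dict
assert unwrap_observation(flat) == unwrap_observation(nested)
print(unwrap_observation(nested)["pr"]["id"])  # 1
```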
dataset/dataset.json ADDED
@@ -0,0 +1,129 @@
[
  {
    "task_type": "easy",
    "pr": {
      "id": "2",
      "title": "Missing import",
      "description": "Forgot to import module",
      "language": "python",
      "diffs": [
        {
          "file_name": "main.py",
          "diff": "print(datetime.now())"
        }
      ]
    },
    "ground_truth": {
      "issues": ["missing import datetime"],
      "decision": "reject",
      "fix": "from datetime import datetime"
    }
  },
  {
    "task_type": "medium",
    "pr": {
      "id": "3",
      "title": "Division function",
      "description": "Handles division",
      "language": "python",
      "diffs": [
        {
          "file_name": "math.py",
          "diff": "def divide(a,b): return a/b"
        }
      ]
    },
    "ground_truth": {
      "issues": ["division by zero"],
      "decision": "reject",
      "fix": "if b == 0: return None"
    }
  },
  {
    "task_type": "medium",
    "pr": {
      "id": "4",
      "title": "Inefficient loop",
      "description": "Optimizing search",
      "language": "python",
      "diffs": [
        {
          "file_name": "search.py",
          "diff": "for i in range(len(arr)):\n    if arr[i] == target:\n        return True"
        }
      ]
    },
    "ground_truth": {
      "issues": ["inefficient loop"],
      "decision": "approve",
      "fix": "use 'if target in arr'"
    }
  },
  {
    "task_type": "hard",
    "pr": {
      "id": "6",
      "title": "Authentication logic",
      "description": "Adds login system",
      "language": "python",
      "diffs": [
        {
          "file_name": "auth.py",
          "diff": "def login(password):\n    if password == 'admin123':\n        return True"
        }
      ]
    },
    "ground_truth": {
      "issues": ["hardcoded password", "security vulnerability"],
      "decision": "reject",
      "fix": "use hashed password comparison"
    }
  },
  {
    "task_type": "hard",
    "pr": {
      "id": "7",
      "title": "SQL query issue",
      "description": "Fetch user data",
      "language": "python",
      "diffs": [
        {
          "file_name": "db.py",
          "diff": "query = \"SELECT * FROM users WHERE id = \" + user_id"
        }
      ]
    },
    "ground_truth": {
      "issues": ["sql injection"],
      "decision": "reject",
      "fix": "use parameterized queries"
    }
  },
  {
    "task_type": "hard",
    "pr": {
      "id": "8",
      "title": "Cross-file null bug",
      "description": "User fetch logic",
      "language": "python",
      "diffs": [
        {
          "file_name": "service.py",
          "diff": "def get_user(id):\n    return db[id]"
        },
        {
          "file_name": "controller.py",
          "diff": "user = get_user(None)"
        }
      ]
    },
    "ground_truth": {
      "issues": ["invalid input", "null handling"],
      "decision": "reject",
      "fix": "validate id before calling get_user"
    }
  }
]
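Entries in this dataset can be sanity-checked with the standard library; a small sketch that parses one entry in the same shape and validates the keys the environment relies on:

```python
import json

# One entry in the shape used by dataset/dataset.json
sample = json.loads("""
{
  "task_type": "hard",
  "pr": {
    "id": "7",
    "title": "SQL query issue",
    "description": "Fetch user data",
    "language": "python",
    "diffs": [
      {"file_name": "db.py",
       "diff": "query = \\"SELECT * FROM users WHERE id = \\" + user_id"}
    ]
  },
  "ground_truth": {
    "issues": ["sql injection"],
    "decision": "reject",
    "fix": "use parameterized queries"
  }
}
""")

# Validate the fields CodeReviewPullRequest and the scorer expect
assert sample["pr"]["diffs"][0]["file_name"] == "db.py"
assert sample["ground_truth"]["decision"] in {"approve", "reject"}
print(sample["pr"]["title"])  # SQL query issue
```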
inference.py ADDED
@@ -0,0 +1,271 @@
"""
Inference Script Example
===================================
MANDATORY
- Before submitting, ensure the following variables are defined in your environment configuration:
    API_BASE_URL    The API endpoint for the LLM.
    MODEL_NAME      The model identifier to use for inference.
    HF_TOKEN        Your Hugging Face / API key.

- The inference script must be named `inference.py` and placed in the root directory of the project
- Participants must use the OpenAI client for all LLM calls using the above variables
"""

import asyncio
import json
import os
import re
import textwrap
from typing import Any, Dict, List, Optional

from openai import OpenAI

from code_review import CodeReviewAction, CodeReviewObservation
from code_review.client import CodeReviewEnv

API_BASE_URL = "https://router.huggingface.co/v1"
API_KEY = os.getenv("HF_TOKEN")
MODEL_NAME = os.getenv("MODEL_NAME")
MAX_STEPS = 3
TEMPERATURE = 0.2
MAX_TOKENS = 512

DEBUG = True
ACTION_PREFIX_RE = re.compile(
    r"^(action|next action)\s*[:\-]\s*",
    re.IGNORECASE,
)
ACTION_PATTERN = re.compile(r"[A-Za-z_]+\s*\(.*\)", re.DOTALL)

SYSTEM_PROMPT = textwrap.dedent(
    """
    You are a senior software engineer reviewing a pull request.

    You MUST follow this workflow:

    Step 1:
    Identify all issues in the code.
    List them clearly in the comment.

    Step 2:
    Provide a suggested fix with corrected code.

    Step 3:
    Make a final decision:
    - reject if any bug, security risk, or incorrect logic exists
    - approve only if the code is safe and correct

    Rules:
    - Mention every issue explicitly
    - Use precise technical language
    - Write detailed comments (>30 characters)

    Return ONLY JSON:

    {
      "action_type": "comment | suggest_fix | final_decision",
      "comment": "...",
      "suggested_code": "...",
      "decision": "approve | reject | null"
    }
    """
).strip()


def log_start(task: str, env: str, model: str) -> None:
    print(f"[START] task={task} env={env} model={model}", flush=True)


def log_step(
    step: int, action: str, reward: float, done: bool, error: Optional[str]
) -> None:
    error_val = error if error else "null"
    done_val = str(done).lower()
    print(
        f"[STEP] step={step} action={action} reward={reward:.2f} done={done_val} error={error_val}",
        flush=True,
    )


def log_end(success: bool, steps: int, score: float, rewards: List[float]) -> None:
    rewards_str = ",".join(f"{r:.2f}" for r in rewards)
    print(
        f"[END] success={str(success).lower()} steps={steps} score={score:.3f} rewards={rewards_str}",
        flush=True,
    )


def build_history_lines(history: List[str]) -> str:
    if not history:
        return "None"
    return "\n".join(history[-4:])


def safe_completion(client, messages):
    """Call the chat completion endpoint with up to three retries."""
    for _ in range(3):
        try:
            return client.chat.completions.create(
                model=MODEL_NAME,
                messages=messages,
                temperature=TEMPERATURE,
                max_tokens=MAX_TOKENS,
            )
        except Exception as e:
            print("Error during completion, retrying...")
            print(e)
            continue
    return None


def build_prompt(step: int, max_steps: int, observation) -> str:
    if step == 1:
        instruction = (
            "Carefully analyze the diff. List EVERY issue you find in the comment field. "
            "Use exact technical terms (e.g. 'sql injection', 'null handling', 'hardcoded password'). "
            "Set action_type to 'comment'. "
            "If the code looks correct with no issues, still output a comment like: "
            "'No issues found. Code is clean.' and prepare to approve."
        )
    elif step == 2:
        instruction = (
            "Now provide the fix. Set action_type to 'suggest_fix'. "
            "Write the corrected code in suggested_code. "
            "Also repeat the issues in the comment field."
        )
    else:
        instruction = (
            "Make your final decision. Set action_type to 'final_decision'. "
            "Set decision to 'reject' if any bug, security issue, or bad logic exists. "
            "Set decision to 'approve' only if the code is clean and correct."
        )

    diff_text = "\n\n".join(
        f"File: {d.file_name}\n{d.diff}" for d in observation.pr.diffs
    )

    return textwrap.dedent(
        f"""
        Step {step}/{max_steps}

        Title: {observation.pr.title}
        Description: {observation.pr.description}

        Code Diffs:
        {diff_text}

        Previous Comments:
        {build_history_lines(observation.previous_comments)}

        Your task: {instruction}

        Return ONLY valid JSON:
        {{
          "action_type": "...",
          "comment": "...",
          "suggested_code": "...",
          "decision": "approve | reject | null"
        }}
        """
    ).strip()


def fallback_action():
    return {
        "action_type": "comment",
        "comment": "fallback: invalid response",
        "suggested_code": None,
        "decision": None,
    }


def parse_action(text: str) -> Dict[str, Any]:
    """Strip markdown fences the model may add, then parse the JSON action."""
    if not text:
        return fallback_action()

    text = text.strip().replace("```json", "").replace("```", "")

    try:
        return json.loads(text)
    except Exception as e:
        print(e)
        return fallback_action()


async def run_episode(client, env):
    result = await env.reset()
    obs = result.observation
    final_score = 0.0

    for step in range(1, MAX_STEPS + 1):
        prompt = build_prompt(step, MAX_STEPS, obs)
        messages = [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": prompt},
        ]

        completion = safe_completion(client, messages)  # still sync
        if completion is None:
            action = fallback_action()
        else:
            response_text = completion.choices[0].message.content or ""
            action_dict = parse_action(response_text)
            action = CodeReviewAction(
                action_type=action_dict.get("action_type"),
                comment=action_dict.get("comment"),
                suggested_code=action_dict.get("suggested_code"),
                decision=action_dict.get("decision"),
            )

        result = await env.step(action)
        obs = result.observation
        reward = result.reward
        done = result.done

        final_score = max(final_score, reward.score if reward else 0.0)
        print(f"Step {step} | Action: {action} | Reward: {reward}")

        if done:
            print(f"Done in {step} steps")
            break

    return final_score


async def main():
    client = OpenAI(base_url=API_BASE_URL, api_key=API_KEY)

    scores = []
    # log_start(task=TASK_NAME, env=BENCHMARK, model=MODEL_NAME)

    async with CodeReviewEnv(base_url="http://localhost:8000") as env:
        NUM_EPISODES = 6

        for i in range(NUM_EPISODES):
            print(f"\n=== Episode {i+1} ===")
            env.task_index = i

            score = await run_episode(client, env)
            scores.append(score)
            print(f"Scores so far: {scores}")

    print("\nFinished all episodes")
    print(f"Final Scores: {scores}")


if __name__ == "__main__":
    asyncio.run(main())
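The fence-stripping in `parse_action` is easy to verify in isolation; a minimal sketch with the same fallback behavior, exercised on a well-formed and a malformed model response:

```python
import json
from typing import Any, Dict


def fallback_action() -> Dict[str, Any]:
    """Safe default when the model output cannot be parsed."""
    return {"action_type": "comment", "comment": "fallback: invalid response",
            "suggested_code": None, "decision": None}


def parse_action(text: str) -> Dict[str, Any]:
    """Strip markdown fences the model may add, then parse JSON; fall back on failure."""
    if not text:
        return fallback_action()
    text = text.strip().replace("```json", "").replace("```", "")
    try:
        return json.loads(text)
    except Exception:
        return fallback_action()


raw = "```json\n{\"action_type\": \"final_decision\", \"decision\": \"reject\"}\n```"
good = parse_action(raw)
bad = parse_action("not json at all")
print(good["decision"])    # reject
print(bad["action_type"])  # comment
```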
models.py ADDED
@@ -0,0 +1,56 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the BSD-style license found in the
# LICENSE file in the root directory of this source tree.

"""
Data models for the Code Review Environment.

The code_review environment presents pull requests for an agent to review:
comment on issues, suggest fixes, and make an approve/reject decision.
"""

from typing import Any, Dict, List, Optional

from openenv.core.env_server.types import Action, Observation
from pydantic import BaseModel


class CodeReviewAction(Action):
    """Action for the Code Review environment: a single review step."""

    action_type: str  # comment / suggest_fix / final_decision
    comment: Optional[str] = None
    suggested_code: Optional[str] = None
    decision: Optional[str] = None


class CodeDiff(BaseModel):
    file_name: str
    diff: str


class CodeReviewPullRequest(BaseModel):
    id: str
    title: str
    description: str
    diffs: List[CodeDiff]
    language: str


class CodeReviewObservation(Observation):
    """Observation from the Code Review environment: the PR under review."""

    pr: CodeReviewPullRequest
    previous_comments: List[str]
    step_count: int
    max_steps: int


class CodeReviewReward(BaseModel):
    score: float
    feedback: str


class CodeReviewStepResponse(BaseModel):
    observation: CodeReviewObservation
    reward: CodeReviewReward
    done: bool
    info: Dict[str, Any] = {}
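The three-step review workflow maps directly onto `CodeReviewAction`; a sketch using a dataclass stand-in (the real class is a pydantic `Action` subclass, and the example comments and code are illustrative):

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class CodeReviewAction:
    """Stand-in mirroring the pydantic model above."""
    action_type: str  # comment / suggest_fix / final_decision
    comment: Optional[str] = None
    suggested_code: Optional[str] = None
    decision: Optional[str] = None


# One full episode: comment, then suggest a fix, then decide
episode = [
    CodeReviewAction("comment", comment="Hardcoded password in auth.py"),
    CodeReviewAction(
        "suggest_fix",
        comment="Use a hashed comparison",
        suggested_code="hmac.compare_digest(hash_pw(password), stored_hash)",
    ),
    CodeReviewAction("final_decision", decision="reject"),
]
assert [a.action_type for a in episode] == ["comment", "suggest_fix", "final_decision"]
print(episode[-1].decision)  # reject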
openenv.yaml ADDED
@@ -0,0 +1,7 @@
spec_version: 1
name: code_review
type: space
runtime: fastapi
app: server.app:app
port: 8000
pyproject.toml ADDED
@@ -0,0 +1,45 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the BSD-style license found in the
# LICENSE file in the root directory of this source tree.

[build-system]
requires = ["setuptools>=45", "wheel"]
build-backend = "setuptools.build_meta"

[project]
name = "openenv-code_review"
version = "0.1.0"
description = "Code Review environment for OpenEnv"
requires-python = ">=3.10"
dependencies = [
    # Core OpenEnv runtime (provides FastAPI server + HTTP client types)
    # install from github
    # "openenv-core[core] @ git+https://github.com/meta-pytorch/OpenEnv.git",
    "openenv-core[core]>=0.2.1",
    # Environment-specific dependencies
    # Add all dependencies needed for your environment here
    # Examples:
    # "numpy>=1.19.0",
    # "torch>=2.0.0",
    # "gymnasium>=0.29.0",
    # "openspiel>=1.0.0",
    # "smolagents>=1.22.0,<2",
]

[project.optional-dependencies]
dev = [
    "pytest>=8.0.0",
    "pytest-cov>=4.0.0",
]

[project.scripts]
# Server entry point - enables running via: uv run --project . server
# or: python -m code_review.server.app
server = "code_review.server.app:main"

[tool.setuptools]
include-package-data = true
packages = ["code_review", "code_review.server"]
package-dir = { "code_review" = ".", "code_review.server" = "server" }
server/__init__.py ADDED
@@ -0,0 +1,11 @@
+ # Copyright (c) Meta Platforms, Inc. and affiliates.
+ # All rights reserved.
+ #
+ # This source code is licensed under the BSD-style license found in the
+ # LICENSE file in the root directory of this source tree.
+
+ """Code Review environment server components."""
+
+ from .code_review_environment import CodeReviewEnvironment
+
+ __all__ = ["CodeReviewEnvironment"]
server/app.py ADDED
@@ -0,0 +1,84 @@
+ # Copyright (c) Meta Platforms, Inc. and affiliates.
+ # All rights reserved.
+ #
+ # This source code is licensed under the BSD-style license found in the
+ # LICENSE file in the root directory of this source tree.
+
+ """
+ FastAPI application for the Code Review Environment.
+
+ This module creates an HTTP server that exposes the CodeReviewEnvironment
+ over HTTP and WebSocket endpoints, compatible with EnvClient.
+
+ Endpoints:
+     - POST /reset: Reset the environment
+     - POST /step: Execute an action
+     - GET /state: Get current environment state
+     - GET /schema: Get action/observation schemas
+     - WS /ws: WebSocket endpoint for persistent sessions
+
+ Usage:
+     # Development (with auto-reload):
+     uvicorn server.app:app --reload --host 0.0.0.0 --port 8000
+
+     # Production:
+     uvicorn server.app:app --host 0.0.0.0 --port 8000 --workers 4
+
+     # Or run directly:
+     python -m server.app
+ """
+
+ try:
+     from openenv.core.env_server.http_server import create_app
+ except Exception as e:  # pragma: no cover
+     raise ImportError(
+         "openenv is required for the web interface. Install dependencies with `uv sync`."
+     ) from e
+
+ try:
+     from ..models import CodeReviewAction, CodeReviewObservation
+     from .code_review_environment import CodeReviewEnvironment
+ except ModuleNotFoundError:
+     from models import CodeReviewAction, CodeReviewObservation
+     from server.code_review_environment import CodeReviewEnvironment
+
+
+ # Create the app with web interface and README integration
+ app = create_app(
+     CodeReviewEnvironment,
+     CodeReviewAction,
+     CodeReviewObservation,
+     env_name="code_review",
+     max_concurrent_envs=1,  # increase this number to allow more concurrent WebSocket sessions
+ )
+
+
+ def main(host: str = "0.0.0.0", port: int = 8000):
+     """
+     Entry point for direct execution via uv run or python -m.
+
+     This function enables running the server without Docker:
+         uv run --project . server
+         uv run --project . server --port 8001
+         python -m code_review.server.app
+
+     Args:
+         host: Host address to bind to (default: "0.0.0.0")
+         port: Port number to listen on (default: 8000)
+
+     For production deployments, consider using uvicorn directly with
+     multiple workers:
+         uvicorn code_review.server.app:app --workers 4
+     """
+     import uvicorn
+
+     uvicorn.run(app, host=host, port=port)
+
+
+ if __name__ == "__main__":
+     import argparse
+
+     parser = argparse.ArgumentParser()
+     parser.add_argument("--port", type=int, default=8000)
+     args = parser.parse_args()
+     main(port=args.port)
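Assuming the endpoints listed in the module docstring and the default host/port from `main()`, a client-side step request can be prepared like this. This is a sketch: the request is built but not sent, and the exact payload envelope is an assumption (the server may wrap the action under a key); the body here simply mirrors `CodeReviewAction`'s fields.

```python
import json
import urllib.request

BASE_URL = "http://localhost:8000"  # assumption: default host/port from main()


def build_step_request(action_type, comment=None, suggested_code=None, decision=None):
    """Build (but do not send) a POST /step request for the code_review server."""
    body = json.dumps({
        "action_type": action_type,
        "comment": comment,
        "suggested_code": suggested_code,
        "decision": decision,
    }).encode()
    return urllib.request.Request(
        f"{BASE_URL}/step",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


# A final-decision step, ready to pass to urllib.request.urlopen(...)
req = build_step_request("final_decision", decision="approve")
```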
server/code_review_environment.py ADDED
@@ -0,0 +1,290 @@
+ # Copyright (c) Meta Platforms, Inc. and affiliates.
+ # All rights reserved.
+ #
+ # This source code is licensed under the BSD-style license found in the
+ # LICENSE file in the root directory of this source tree.
+
+ """
+ Code Review Environment implementation.
+
+ Presents a pull request (title, description, and diffs) to an agent, which
+ reviews it over a bounded number of steps by commenting, suggesting fixes,
+ and issuing a final decision. Each step is graded against ground truth.
+ """
+
+ import json
+ from pathlib import Path
+ from uuid import uuid4
+
+ from openenv.core.env_server.interfaces import Environment
+ from openenv.core.env_server.types import State
+
+ try:
+     from ..models import (
+         CodeReviewAction,
+         CodeReviewObservation,
+         CodeReviewPullRequest,
+         CodeReviewReward,
+         CodeReviewStepResponse,
+     )
+ except ImportError:
+     from models import (
+         CodeReviewAction,
+         CodeReviewObservation,
+         CodeReviewPullRequest,
+         CodeReviewReward,
+         CodeReviewStepResponse,
+     )
+
+ dataset_path = Path(__file__).parent.parent / "dataset" / "dataset.json"
+
+
+ class CodeReviewEnvironment(Environment):
+     """
+     A code review environment.
+
+     Each episode samples a pull request from the bundled dataset. The agent
+     reviews it by sending CodeReviewActions ("comment", "suggest_fix", or
+     "final_decision"); every step is graded against the sample's ground
+     truth, and the episode ends on a final decision or after max_steps.
+
+     Example:
+         >>> env = CodeReviewEnvironment()
+         >>> obs = env.reset()
+         >>> print(obs.pr.title)
+         >>> resp = env.step(CodeReviewAction(action_type="comment", comment="Possible null dereference"))
+         >>> print(resp.reward.score)
+     """
+
+     # Enable concurrent WebSocket sessions.
+     # Set to True if your environment isolates state between instances.
+     # When True, multiple WebSocket clients can connect simultaneously, each
+     # getting their own environment instance (when using factory mode in app.py).
+     SUPPORTS_CONCURRENT_SESSIONS: bool = True
+
+     def __init__(self):
+         """Initialize the code_review environment."""
+         self._state = State(episode_id=str(uuid4()), step_count=0)
+         self._reset_count = 0
+         self.max_steps = 5
+         self.task_index = 0
+         with open(dataset_path) as f:
+             self.dataset = json.load(f)
+         self.reset()
+
+     def reset(self) -> CodeReviewObservation:
+         """
+         Reset the environment and load the next pull request from the dataset.
+
+         Returns:
+             CodeReviewObservation with the new PR and zeroed progress counters
+         """
+         self._state = State(episode_id=str(uuid4()), step_count=0)
+         self._reset_count += 1
+         self.task_index += 1
+
+         self.sample = self.dataset[self.task_index % len(self.dataset)]
+
+         self.pr = CodeReviewPullRequest(**self.sample["pr"])
+         self.gt = self.sample["ground_truth"]
+         self.task_type = self.sample.get("task_type", "unknown")
+
+         self.history = []
+         self.step_count = 0
+         self.done = False
+
+         # State evolution variables
+         self.issues_identified = []
+         self.fix_attempted = False
+
+         return CodeReviewObservation(
+             pr=self.pr,
+             previous_comments=self.history,
+             step_count=self.step_count,
+             max_steps=self.max_steps,
+             reward=0.0,
+             done=False,
+         )
+
+     def step(self, action: CodeReviewAction) -> CodeReviewStepResponse:  # type: ignore[override]
+         """
+         Execute one review step.
+
+         Args:
+             action: CodeReviewAction (or a dict/sequence coercible into one)
+
+         Returns:
+             CodeReviewStepResponse with the updated observation, reward, and done flag
+         """
+         self._state.step_count += 1
+
+         # Coerce loosely-typed payloads (dicts or sequences) into CodeReviewAction.
+         try:
+             if isinstance(action, dict):
+                 action = CodeReviewAction(**action)
+             elif isinstance(action, (list, tuple)):
+                 action = CodeReviewAction(
+                     action_type=action[0],
+                     comment=action[1] if len(action) > 1 else None,
+                     suggested_code=action[2] if len(action) > 2 else None,
+                     decision=action[3] if len(action) > 3 else None,
+                 )
+             elif isinstance(action, CodeReviewAction):
+                 pass
+             else:
+                 raise ValueError(f"Unsupported action type: {type(action)}")
+         except Exception as e:
+             print(f"Error occurred while processing action: {e}")
+             return self._invalid_step()
+
+         self.step_count += 1
+         self.history.append(action)
+
+         if action.action_type == "comment" and action.comment:
+             self.issues_identified.append(action.comment)
+
+         if action.action_type == "suggest_fix":
+             self.fix_attempted = True
+
+         score = self.grade_action(action, self.gt)
+
+         bonus = 0.0
+
+         # Encourage meaningful comments
+         if action.comment and len(action.comment) > 30:
+             bonus += 0.1
+
+         # Encourage early correct decisions
+         if action.action_type == "final_decision" and self.step_count <= 2:
+             bonus += 0.1
+
+         # Penalize useless steps
+         if not action.comment and action.action_type != "final_decision":
+             bonus -= 0.1
+
+         # Penalize long trajectories
+         if self.step_count > 3:
+             bonus -= 0.05
+
+         score = max(0.0, min(score + bonus, 1.0))
+
+         done = (
+             action.action_type == "final_decision" or self.step_count >= self.max_steps
+         )
+
+         # On episode end, report the best score achieved across the trajectory.
+         if done:
+             score = max([self.grade_action(a, self.gt) for a in self.history] or [0.0])
+
+         obs = CodeReviewObservation(
+             pr=self.pr,
+             previous_comments=[a.comment for a in self.history if a.comment],
+             step_count=self.step_count,
+             max_steps=self.max_steps,
+         )
+
+         rew = CodeReviewReward(score=score, feedback="graded")
+
+         return CodeReviewStepResponse(
+             observation=obs,
+             reward=rew,
+             done=done,
+             info={
+                 "task_type": self.task_type,
+                 "issues_identified": len(self.issues_identified),
+                 "fix_attempted": self.fix_attempted,
+             },
+         )
+
+     @property
+     def state(self) -> State:
+         """
+         Get the current environment state.
+
+         Returns:
+             Current State with episode_id and step_count
+         """
+         return self._state
+
+     def _invalid_step(self) -> CodeReviewStepResponse:
+         """Terminate the episode with zero reward when the action cannot be parsed."""
+         rew = CodeReviewReward(score=0.0, feedback="invalid action")
+         obs = CodeReviewObservation(
+             pr=self.pr,
+             previous_comments=[a.comment for a in self.history if a.comment],
+             step_count=self.step_count,
+             max_steps=self.max_steps,
+         )
+         return CodeReviewStepResponse(
+             observation=obs,
+             reward=rew,
+             done=True,
+             info={"error": "invalid_action"},
+         )
+
+     def grade_action(self, action, ground_truth):
+         """Score an action against ground truth: 40% issues, 30% fix, 30% decision."""
+         score = 0.0
+
+         # Issue detection (40%)
+         score += 0.4 * self.score_issues(action.comment, ground_truth)
+
+         # Fix quality (30%)
+         score += 0.3 * self.score_fix(action.suggested_code, ground_truth)
+
+         # Decision (30%)
+         score += 0.3 * self.score_decision(action, ground_truth)
+
+         return max(0.0, min(score, 1.0))
+
+     def normalize(self, text):
+         return (text or "").lower().strip()
+
+     def score_issues(self, comment, ground_truth):
+         """Partial credit: fraction of ground-truth issues mentioned in the comment."""
+         issues = ground_truth.get("issues", [])
+         if not comment or not issues:
+             return 0.0
+
+         comment = self.normalize(comment)
+         matches = sum(1 for issue in issues if self.normalize(issue) in comment)
+         return matches / len(issues)
+
+     def score_fix(self, suggested_code, ground_truth):
+         """Fuzzy match: full credit on a direct substring match, else keyword overlap."""
+         if not suggested_code:
+             return 0.0
+
+         expected_fix = self.normalize(ground_truth.get("fix", ""))
+         suggested_code = self.normalize(suggested_code)
+
+         # direct match
+         if expected_fix in suggested_code:
+             return 1.0
+
+         # partial keyword match
+         keywords = expected_fix.split()
+         if not keywords:
+             return 0.0
+
+         matches = sum(1 for word in keywords if word in suggested_code)
+         return matches / len(keywords)
+
+     def score_decision(self, action, ground_truth):
+         """Exact match on the final decision; wrong decisions get small partial credit."""
+         expected = ground_truth.get("decision")
+
+         # Not a decision step -> no contribution
+         if action.action_type != "final_decision":
+             return 0.0
+
+         # Missing decision -> no credit
+         if not action.decision:
+             return 0.0
+
+         # Correct decision
+         if action.decision == expected:
+             return 1.0
+
+         # Wrong decision -> small partial credit (never negative)
+         return 0.2
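The graders above reduce to substring and keyword matching. A standalone sketch of the same idea, outside the class for illustration (names mirror the methods above; a small guard is added for an empty expected fix, which the in-class version does not have):

```python
def normalize(text):
    """Lowercase and trim, treating None as empty (mirrors CodeReviewEnvironment.normalize)."""
    return (text or "").lower().strip()


def score_issues(comment, issues):
    """Fraction of ground-truth issues mentioned verbatim in the comment."""
    if not comment or not issues:
        return 0.0
    comment = normalize(comment)
    return sum(1 for issue in issues if normalize(issue) in comment) / len(issues)


def score_fix(suggested_code, expected_fix):
    """1.0 on a direct substring match, else the fraction of expected keywords present."""
    if not suggested_code:
        return 0.0
    expected_fix = normalize(expected_fix)
    suggested_code = normalize(suggested_code)
    # direct match (guarded so an empty expected fix is not a trivial match)
    if expected_fix and expected_fix in suggested_code:
        return 1.0
    # partial keyword match
    keywords = expected_fix.split()
    if not keywords:
        return 0.0
    return sum(1 for word in keywords if word in suggested_code) / len(keywords)
```

Note that matching is purely lexical: a comment earns credit only if the ground-truth issue text appears inside it verbatim after normalization.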
server/requirements.txt ADDED
@@ -0,0 +1,3 @@
+ openenv[core]>=0.2.0
+ fastapi>=0.115.0
+ uvicorn>=0.24.0
uv.lock ADDED
The diff for this file is too large to render. See raw diff