Tighten README: resolve GRPO contradiction, drop duplicate baseline table, remove internal mentor docs
Three structural fixes after a review of the submission docs:
1. GRPO contradiction (the worst issue): the README's headline section
   said GRPO refine 'lifted OOD by +0.023' while the Training section
   later said 'Stage 3 — Skipped in our headline result'. Both claims
   were visible to a careful reader at the same time. Rewrote Stage 3 to
   say what actually happened: GRPO refine ran on top of SFT, lifted OOD
   generalization by +0.023 and discrete-3 by +0.013, with no in-dist
   regression. Added the actual submission command (an illustrative
   config sketch follows the file list below).
2. Duplicate 'Baseline Scores (v2 grader)' table near line 255 had:
- 'TBD' for the OOD teacher (which was 0.621 in the headline)
- '(see results.md)' for the distilled student (which was already
in the headline table)
   Pure noise; deleted. The headline table is the canonical reference.
3. Removed two docs that were internal references, not for judges:
- docs/WhatMAkesSubmissionStandOut.md (mentor's stand-out criteria
pasted in as a checklist for our own use)
- docs/references/judging_criteria.md (the hackathon's own rules
mirrored back at the judges)
Both looked unprofessional in a public submission. No README links
pointed to either after the cleanup.
- README.md +15 -16
- docs/WhatMAkesSubmissionStandOut.md +0 -61
- docs/references/judging_criteria.md +0 -177
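
For reviewers who want to see what the Stage 3 submission command amounts to, here is a minimal sketch of how its `-e` flags presumably map onto a TRL GRPO run. This assumes `scripts/train_on_hf.py` (not part of this commit) wraps TRL's `GRPOTrainer` and reads `MODEL_NAME`, `MAX_STEPS`, `LEARNING_RATE`, and `BETA` from the environment; the placeholder reward and dataset below are illustrative stand-ins, not the repo's actual 4-layer reward stack or episode prompts.

```python
# Sketch only: how the -e env vars from the hf jobs command could feed a TRL GRPO run.
# The real scripts/train_on_hf.py is not shown in this commit and may differ.
import os

from datasets import Dataset
from trl import GRPOConfig, GRPOTrainer


def placeholder_reward(completions, **kwargs):
    # Stand-in for the repo's 4-layer reward stack; returns one score per completion.
    return [0.0 for _ in completions]


config = GRPOConfig(
    output_dir="outputs/grpo-refine",
    learning_rate=float(os.environ.get("LEARNING_RATE", "1e-5")),  # -e LEARNING_RATE=1e-5
    beta=float(os.environ.get("BETA", "0.1")),                     # -e BETA=0.1 (KL anchor to the SFT policy)
    max_steps=int(os.environ.get("MAX_STEPS", "200")),             # -e MAX_STEPS=200
)

trainer = GRPOTrainer(
    # -e MODEL_NAME points at the SFT'd checkpoint the refine starts from.
    model=os.environ.get("MODEL_NAME", "InosLihka/rhythm-env-meta-trained-sft-v3"),
    args=config,
    reward_funcs=placeholder_reward,
    train_dataset=Dataset.from_list([{"prompt": "example episode prompt"}]),
)
trainer.train()
```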
README.md (+15 -16)

````diff
@@ -252,19 +252,6 @@ hidden weights and started exploiting them.
 weight) fixed the structural mismatch. See [`docs/iterations.md`](docs/iterations.md)
 for the full journey.
 
-## Baseline Scores (v2 grader)
-
-100 in-dist seeds + 50 OOD seeds. Baselines emit no belief.
-
-| Condition | Random | Heuristic | gpt-5.4 Teacher | Distilled Qwen 3B |
-|---|---|---|---|---|
-| **continuous-in-distribution** | 0.402 | 0.449 | **0.617** | (see [results.md](docs/results.md)) |
-| **continuous-OOD** | 0.397 | 0.454 | TBD | (see [results.md](docs/results.md)) |
-
-The distilled student is trained to imitate the teacher (gpt-5.4 via
-Azure AI Foundry) on 30 episodes of trajectories. SFT installs the
-reasoning + format priors; no GRPO needed if SFT alone hits the bar.
-
 ## Training: Algorithm Distillation
 
 We train via [Algorithm Distillation](https://arxiv.org/abs/2210.14215) — a
@@ -293,9 +280,21 @@ hf jobs uv run --flavor a10g-large --secrets HF_TOKEN \
   -d scripts/sft_on_hf.py
 ```
 
-**
-
-
+**Stage 3 — GRPO refine on top of SFT.** Run GRPO with the existing 4-layer
+reward stack starting from the SFT'd checkpoint (lr 1e-5, beta 0.1 KL anchor).
+This lifts OOD generalization by another **+0.023** and discrete-3 by +0.013
+without regressing in-dist. The GRPO-refined model is uploaded to
+[`InosLihka/rhythm-env-meta-trained-sft-grpo-v1`](https://huggingface.co/InosLihka/rhythm-env-meta-trained-sft-grpo-v1).
+The bulk of the improvement still comes from SFT (Stage 2); GRPO refine is
+the polish.
+
+```bash
+hf jobs uv run --flavor a10g-large --secrets HF_TOKEN \
+  -e MODEL_NAME=InosLihka/rhythm-env-meta-trained-sft-v3 \
+  -e MAX_STEPS=200 -e LEARNING_RATE=1e-5 -e BETA=0.1 \
+  -e MODEL_REPO_SUFFIX=sft-grpo-v1 \
+  -d scripts/train_on_hf.py
+```
 
 ### Why algorithm distillation, not GRPO from scratch
 
````
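The Stage 3 hunk above names the GRPO-refined checkpoint on the Hub. Assuming it is a standard causal-LM repo (the README describes the student as a distilled Qwen 3B), a quick smoke test could look like the sketch below; the prompt string is illustrative, and real scoring goes through the environment's grader.

```python
# Illustrative smoke test of the checkpoint named in the Stage 3 hunk above.
# Assumes a standard causal-LM layout on the Hugging Face Hub.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "InosLihka/rhythm-env-meta-trained-sft-grpo-v1"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

inputs = tokenizer("example observation from the rhythm environment", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```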
docs/WhatMAkesSubmissionStandOut.md (deleted, -61 lines)

```diff
@@ -1,61 +0,0 @@
-Pick an ambitious, original problem
-The themes (problems) are deliberately open. Use them as launching pads, not boxes. Judges have seen a lot of chess, snake, tic-tac-toe, and grid-world clones. To score well on innovation,
-you need a genuinely fresh angle. Some questions to ask yourself:
-Does this environment exist to teach an LLM something it currently can’t do well?
-Is the domain underexplored in RL/LLM training?
-Could a researcher write a paper about training on this?
-
-Design a reward signal that actually teaches
-A great environment has a reward function that:
-Provides a rich, informative signal (not just 0/1 at the end)
-Captures something hard to measure in a clever way
-Uses OpenEnv’s Rubric system thoughtfully (composable rubrics > monolithic scoring)
-Is hard to game; an agent that exploits the reward without solving the task should not get high scores
-
-
-
-Show real training, end to end
-The bar isn’t “training script exists.” The bar is “training script runs against the environment, the
-agent learns, and you can show it.” Concretely:
-Your training loop should connect to your environment (not a static dataset)
-Train long enough that the curves mean something
-Compare a trained agent vs. a random/untrained baseline; quantitative and/or qualitative
-Include the plots and numbers in your README and writeup
-
-Make your plots readable
-Reviewers spend seconds, not minutes, on each plot. Help them out:
-Label both axes (e.g. “training step” / “episode” on x, “reward” / “loss” on y) and include units where they apply
-Save plots as .png or .jpg and commit them to the repo (don’t leave them only in a Colab cell or a deleted Wandb run) (if you ran via WANBD, please include the link to that specific run of your plots)
-Embed the key plots in your README with a one-line caption explaining what each one shows If you have multiple runs (baseline vs. trained, ablations, etc.), put them on the same axes so the comparison is obvious
-
-
-
-
-Tell a story, not an API doc
-Your README, blog, and pitch should answer:
-Problem) what capability gap or interesting domain are you targeting?
-Environment) what does the agent see, do, and get rewarded for?
-Results) what changed after training? Show it.
-Why does it matter) who would care, and why?
-
-A reviewer should be able to read your README in 3~5 minutes and want to try your
-environment.
-
-NOTE: If you have a video, HF post, or anything else interesting, please make sure that it’s linked
-from your README.
-
-
-
-Engineer it cleanly (table stakes)
-Engineering quality matters less than ambition, but sloppy work hurts. Make sure you:
-Use OpenEnv’s Environment / MCPEnvironment base classes properly
-Respect the client / server separation (clients should never import server internals)
-Follow the standard Gym-style API (reset, step, state)
-Have a valid openenv.yaml manifest
-Don’t use reserved tool names (reset, step, state, close) for MCP tools
-
-Final Note
-Judges are looking for environments that push the frontier of what we can train LLMs to do. Be
-ambitious. Pick a problem you find genuinely interesting; that almost always produces better
-work than chasing what you think judges want. Good luck.
-
```
docs/references/judging_criteria.md (deleted, -177 lines)

```diff
@@ -1,177 +0,0 @@
-# **Theme \#1 \- Multi-Agent Interactions**
-
-Environments for this theme involve cooperation, competition, negotiation, and coalition formation. Learning from these environments will enable agents to model the beliefs and incentives of others in partially observable settings. This drives theory-of-mind reasoning and emergent strategic behavior.
-
-**Expected Outcome**: an environment that can be used to train multi-agent task handling in a LLM
-
-**Example environments:** Market simulations, compute-allocation negotiations, collaborative puzzle worlds, mixed cooperative/competitive strategy games.
-
-# **Theme \#2 \- (Super) Long-Horizon Planning & Instruction Following**
-
-You will build environments that require deep, multi-step reasoning with sparse or delayed rewards. After using these environments, the goal is to enable agents to decompose goals, track state over extended trajectories, and recover from early mistakes. The aim is to push beyond shallow next-token reasoning toward structured planning and durable internal representations.
-
-**Expected Outcome**: an environment that can capture and improve LLM behaviour on challenging long horizon tasks that need long running sessions beyond context memory limits.
-
-**Example environments:** (Think of OpenClaw workflows with Multi-turn tasks). Research-planning simulators, large-scale codebase refactoring tasks, strategic resource management worlds, long-horizon logistics optimization, extremely complicated long-horizon instruction following (e.g., 300 instructions scattered around).
-
-# **Theme \#3 \- World Modeling**
-
-## \#3.1 Professional Tasks
-
-Here you will develop environments that require real interaction with tools, APIs, or dynamic systems where the model is expected to do real hard work instead of exploiting short-cuts to arrive at the desired outcome. Learning from these environments will enable agents to maintain consistent internal state, update beliefs based on outcomes, and orchestrate multi-step workflows. The goal is to strengthen causal reasoning and persistent world models.
-
-**Expected Outcome**: an environment capturing nuances of a defined partially observable world and improve LLM interaction with it
-
-**Example environments:** Dynamic browser/API ecosystems, enterprise applications, scientific workflow loops (papers → code → experiments), economic simulations with feedback, tool-discovery benchmarks.
-
-## \#3.2 Personalized Tasks
-
-Here we will develop an environment that offers real personalized task handling, imagine replying to personal messages or handling dinner conflicts due to work conflicts, replying to tough emails. Think any personal assistant tasks
-
-**Expected Outcome:** An environment that gives the model a realistic simulation of handling personal tasks, conflicts and managing them as delegations
-
-**Example environments:** Executive Assistant Meeting Planner, Dinner and drive planning, email and message replying, shopping, etc
-
-# **Theme \#4 \- Self-Improvement**
-
-The focus here is to create environments where agents can learn to generate new challenges, escalate difficulty, and improve through self-play or adaptive curricula. Rather than optimizing fixed tasks, the goal is for agents to learn to drive their own capability growth. The objective is recursive skill amplification.
-
-**Expected Outcome**: an environment for improving self-play of a LLM over a defined set of tasks
-
-**Example environments:** Self-play negotiation arenas, auto-generated math/proof tasks, evolving coding competitions, adaptive RL curricula.
-
-## **Theme \#5: Wild Card \- Impress Us\!**
-
-We do not want to limit your focus if your idea doesn’t fit the boxes above, we want and WILL reward out of box tasks, please be creative but remember to add submissions that meaningfully add value to LLM training on a certain task.
-
-#
-
-# **Guidelines for Problem Statement**
-
-* It is **NOT** mandatory to choose the same problem statement as Round 1\. Only choose the same problem statement if it aligns with the above provided Hackathon themes.
-* You can start working on your problem statement once you have finalized it. Post-training can be done onsite on 25th & 26th when you receive compute credits for HuggingFace.
-* Before the onsite, we suggest you work on building the environment, agent behaviours, reward model and evaluate if your work aligns with the [judging criteria](#bookmark=id.m45eoo902jo0) given below.
-
-# **Judging Criteria**
-
-**Minimum requirements**:
-
-* Usage of OpenEnv (latest release)
-* Show a minimal training script for your environment using Unsloth or HF TRL in Colab
-* Write a mini-blog on HuggingFace or mini-video on YouTube talking about your submission, \<2 minutes
-* Your OpenEnv compliant environment should be hosted on Hugging Face Spaces.
-
-**Judging Overview**
-
-* **Evaluation:** Teams will be scored based on the following criteria:
-1. **Environment Innovation (40%):** Is the environment novel, creative, or challenging? Does it meaningfully test the agent’s behavior?
-2. **Storytelling (30%):** Does the team clearly explain the problem, environment, and agent behavior? Is the demo engaging and easy to follow?
-3. **Showing Improvement in Rewards (20%):** Does the demo provide observable evidence of training progress (reward curves, metrics, or before/after behavior)?
-4. **Reward and Training Script/Pipeline Setup (10%):** Is the reward logic coherent, and does the pipeline produce meaningful improvement in the agent’s inference (how it acts in the environment)?
-
-**OpenEnv Hackathon \- What Judges Look For**
-
-This guide tells you what makes a strong submission for the OpenEnv Hackathon (India 2026).
-Read it before you start building, and again before you submit.
-
-For the list of themes and example problems, refer to the top sections.
-
-**NOTE:** Please remember only one submission per team. If you have multiple ideas, pick the best one and go for it. Please make sure that the URL link of your environment is submitted as judges will pull the environment from the URL to evaluate it. Changes or commits after the submission deadline will not be considered.
-
-**TL;DR**
-
-Build an environment that an LLM could actually be trained on to get measurably better at
-something interesting. Then show that training. Then tell the story.
-
-A messy but ambitious environment with real training evidence beats a polished but boring one.
-Pick a problem that excites you (that energy comes through in the pitch).
-
-**Judging Criteria**
-
-| Criterion: Environment InnovationWeight: 40%What it means:Is the environment novel, creative, or genuinely challenging?Does it meaningfully test agent behavior in a way that hasn't been done before? |
-| :---- |
-
-| Criterion: Storytelling & PresentationWeight: 30%What it means:Can you clearly explain the problem, the environment, and what the agent learned?Is the demo engaging and easy to follow for a non-technical audience? |
-| :---- |
-
-| Criterion: Showing Improvement in RewardsWeight: 20%What it means:Is there observable evidence of training progress? Reward curves, before/after behavior,comparison against a baseline \-- anything that proves the agent learned something. |
-| :---- |
-
-| Criterion: Reward & Training PipelineWeight: 10%What it means:Is the reward logic coherent? Does the pipeline produce meaningful improvement in the trainedagent's behavior? |
-| :---- |
-
-**Minimum Submission Requirements**
-
-**NOTE:** These are **non-negotiable**. Submissions missing any of these are at a serious disadvantage.
-
-- [ ] **Use OpenEnv** (latest release). Build on top of the framework; don’t reinvent the wheel.
-- [ ] **A working training script** using **Unsloth or Hugging Face TRL**, ideally as a Colab notebook so judges can re-run it.
-- [ ] **Evidence that you actually trained**; at minimum, loss and reward plots from a real run.
-- [ ] **A short writeup**: a mini-blog on Hugging Face or a \< 2 minute video on YouTube explaining what your environment does and what you trained, or a short slide deck of presentation. Please make sure that all materials are linked from your README file so that judges can access them easily.
-- [ ] **Push your environment to a Hugging Face Space** so it’s discoverable and runnable.
-- [ ] **A README** that motivates the problem, explains how the env works, and shows results.
-- [ ] README should have a link to the environment in the Hugging Face Space. It should also have all additional references to other materials (e.g. videos, blog posts, slides, presentations, etc.) that you want to include.
-- [ ] Please do not include big video files in your Env submission on HF Hub as we would like to have a small size for each env (Please use url as reference link to additional materials).
-
-**What Makes a Submission Stand Out**
-
-***Pick an ambitious, original problem***
-The themes (problems) are deliberately open. Use them as launching pads, not boxes. Judges have seen a lot of chess, snake, tic-tac-toe, and grid-world clones. To score well on innovation,
-you need a genuinely fresh angle. Some questions to ask yourself:
-
-* Does this environment exist to teach an LLM something it currently can’t do well?
-* Is the domain underexplored in RL/LLM training?
-* Could a researcher write a paper about training on this?
-
-***Design a reward signal that actually teaches***
-A great environment has a reward function that:
-
-* Provides a **rich, informative signal** (not just 0/1 at the end)
-* Captures something **hard to measure** in a clever way
-* Uses OpenEnv’s **Rubric system** thoughtfully (composable rubrics \> monolithic scoring)
-* Is **hard to game**; an agent that exploits the reward without solving the task should not get high scores
-
-***Show real training, end to end***
-The bar isn’t “training script exists.” The bar is “training script runs against the environment, the
-agent learns, and you can show it.” Concretely:
-
-* Your training loop should connect to **your** environment (not a static dataset)
-* Train long enough that the curves mean something
-* Compare a **trained agent vs. a random/untrained baseline**; quantitative and/or qualitative
-* Include the plots and numbers in your README and writeup
-
-***Make your plots readable***
-Reviewers spend seconds, not minutes, on each plot. Help them out:
-
-* **Label both axes** (e.g. “training step” / “episode” on x, “reward” / “loss” on y) and include units where they apply
-* Save plots as *.png* or *.jpg* and **commit them to the repo** (don’t leave them only in a Colab cell or a deleted Wandb run) (if you ran via Wandb, please include the link to that specific run of your plots)
-* **Embed the key plots in your README** with a one-line caption explaining what each one shows If you have multiple runs (baseline vs. trained, ablations, etc.), put them on the same axes so the comparison is obvious
-
-***Tell a story, not an API doc***
-Your README, blog, and pitch should answer:
-
-1. **Problem)** what capability gap or interesting domain are you targeting?
-2. **Environment)** what does the agent see, do, and get rewarded for?
-3. **Results)** what changed after training? Show it.
-4. **Why does it matter)** who would care, and why?
-
-*A reviewer should be able to read your README in 3\~5 minutes and want to try your*
-*environment.*
-
-**NOTE:** If you have a video, HF post, or anything else interesting, please make sure that it’s linked
-from your README as a link.
-
-***Engineer it cleanly (table stakes)***
-Engineering quality matters less than ambition, but sloppy work hurts. Make sure you:
-
-* Use OpenEnv’s Environment / MCPEnvironment base classes properly
-* Respect the **client / server separation** (clients should never import server internals)
-* Follow the standard Gym-style API (reset, step, state)
-* Have a valid openenv.yaml manifest
-* Don’t use reserved tool names (reset, step, state, close) for MCP tools
-
-**Final Note**
-
-Judges are looking for environments that push the frontier of what we can train LLMs to do. Be
-ambitious. Pick a problem you find genuinely interesting; that almost always produces better
-work than chasing what you think judges want. Good luck.
```