# Quick Start Guide - OpenEnv
Get up and running with OpenEnv in minutes!
## 📦 Installation (5 minutes)
### Step 1: Clone Repository
```bash
git clone https://github.com/yourusername/OpenEnv.git
cd OpenEnv
```
### Step 2: Install Dependencies
```bash
pip install -r requirements.txt
```
### Step 3: Install Package (Optional)
```bash
pip install -e .
```
## 🚀 Your First Environment (2 minutes)
### Minimal Example
```python
from openenv import OpenEnv

# Create environment
env = OpenEnv()

# Reset
obs, info = env.reset()

# Take random actions
for _ in range(100):
    action = env.action_space.sample()
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()

env.close()
```
**That's it!** You've just run your first RL environment.
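If the five-value return of `step()` is new to you, here is a self-contained toy environment (not OpenEnv's implementation, just the same Gymnasium-style interface) showing what each value means:

```python
# Minimal stand-in environment illustrating the Gymnasium-style step
# contract used in the loop above. This is NOT OpenEnv's code; it is a
# toy example of the same interface.
class CountdownEnv:
    """Episode terminates when the counter reaches zero."""

    def __init__(self, start=5):
        self.start = start
        self.state = start

    def reset(self):
        self.state = self.start
        return self.state, {}          # (observation, info)

    def step(self, action):
        self.state -= 1
        terminated = self.state <= 0   # natural end of the episode
        truncated = False              # would be True on a time limit
        reward = 1.0 if terminated else 0.0
        return self.state, reward, terminated, truncated, {}


env = CountdownEnv()
obs, info = env.reset()
done = False
while not done:
    obs, reward, terminated, truncated, info = env.step(None)
    done = terminated or truncated
print(obs, reward)  # 0 1.0
```

`terminated` means the task itself ended (goal reached, agent failed); `truncated` means an external limit (like `episode_length`) cut the episode short. Treat both as "episode over" in your loops.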
## 🎮 Try the Examples
### Basic Usage Demo
```bash
python examples/basic_usage.py
```
This runs through all basic features:
- Random agent
- Custom configuration
- State inspection
- Multiple episodes
- Config save/load
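The config save/load feature boils down to serializing the config's fields and reconstructing it later. A hypothetical sketch of the round-trip pattern, assuming a dataclass-like `EnvConfig` (the real class's fields and helpers may differ):

```python
import json
from dataclasses import dataclass, asdict


# Hypothetical stand-in for OpenEnv's EnvConfig, for illustration only.
@dataclass
class EnvConfig:
    episode_length: int = 500
    boundary_limit: float = 50.0
    max_velocity: float = 100.0


def save_config(config, path):
    """Write the config's fields to a JSON file."""
    with open(path, "w") as f:
        json.dump(asdict(config), f, indent=2)


def load_config(path):
    """Rebuild an EnvConfig from a JSON file."""
    with open(path) as f:
        return EnvConfig(**json.load(f))


config = EnvConfig(episode_length=300)
save_config(config, "config.json")
restored = load_config("config.json")
assert restored == config  # round-trip preserves all fields
```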
### Training with PPO
```bash
python examples/train_openenv.py --total_timesteps 50000
```
Watch the agent learn to navigate to the target!
## ⚙️ Common Configurations
### Easy Mode (Beginner-Friendly)
```python
from openenv import EnvConfig, OpenEnv

config = EnvConfig(
    episode_length=300,    # Shorter episodes
    boundary_limit=100.0,  # Larger play area
    max_velocity=150.0,    # More forgiving
    verbose=True,
)
env = OpenEnv(config=config)
```
### Hard Mode (Challenge)
```python
config = EnvConfig(
    episode_length=200,   # Shorter time
    boundary_limit=20.0,  # Smaller area
    max_velocity=30.0,    # Strict limits
    sparse_rewards=True,  # Only goal reward
    friction=0.1,         # More drag
)
env = OpenEnv(config=config)
```
### Visual Mode (Watch It Run)
```python
config = EnvConfig(
    render_mode='human',   # Show window
    render_fps=60,
    screen_size=(800, 600),
)
env = OpenEnv(config=config)

# In your loop, call render() each step to draw the environment
obs, info = env.reset()
for _ in range(500):
    action = env.action_space.sample()
    obs, reward, terminated, truncated, info = env.step(action)
    env.render()  # Shows the environment
    if terminated or truncated:
        obs, info = env.reset()
```
## 🏋️ Train Your First Agent (10 minutes)
### Using Stable Baselines3
```python
from stable_baselines3 import PPO
from openenv import OpenEnv
# Create environment
env = OpenEnv(render_mode=None)
# Initialize PPO agent
model = PPO("MlpPolicy", env, verbose=1)
# Train for 50,000 steps
model.learn(total_timesteps=50000)
# Save model
model.save("my_first_agent")
print("Training complete!")
```
### Load and Test
```python
from stable_baselines3 import PPO
from openenv import OpenEnv

# Load trained model
model = PPO.load("my_first_agent")

# Create environment for testing
env = OpenEnv(render_mode='human')

# Run trained agent
obs, _ = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    env.render()
    if terminated or truncated:
        obs, _ = env.reset()  # start a new episode when one ends
env.close()
```
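Beyond watching the agent, you will usually want a number. A framework-agnostic evaluation helper (a sketch, not part of OpenEnv's or SB3's API) that works with any env and policy following the step contract above:

```python
def evaluate(env, policy, n_episodes=10):
    """Average undiscounted return of `policy` over `n_episodes`.

    `policy` is any callable mapping an observation to an action;
    `env` is anything following the (obs, reward, terminated,
    truncated, info) step contract.
    """
    returns = []
    for _ in range(n_episodes):
        obs, _ = env.reset()
        total, done = 0.0, False
        while not done:
            action = policy(obs)
            obs, reward, terminated, truncated, _ = env.step(action)
            total += reward
            done = terminated or truncated
        returns.append(total)
    return sum(returns) / len(returns)
```

With a trained SB3 model you could pass, for example, `policy=lambda obs: model.predict(obs, deterministic=True)[0]`.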
## 🐛 Troubleshooting
### Issue: "Module not found"
**Solution:** Make sure you're in the OpenEnv directory and installed dependencies:
```bash
cd OpenEnv
pip install -r requirements.txt
```
### Issue: "No module named 'openenv'"
**Solution:** Install the package in development mode:
```bash
pip install -e .
```
### Issue: Pygame errors on Windows
**Solution:** Reinstall pygame:
```bash
pip uninstall pygame
pip install pygame --no-cache-dir
```
### Issue: Slow performance
**Solution:** Disable rendering during training:
```python
env = OpenEnv(render_mode=None) # No rendering
```
## 📚 What's Next?
Now that you have the basics:
1. **Read the full documentation** - See README.md for complete API reference
2. **Explore examples/** - More complex use cases and patterns
3. **Run the tests** - `pytest tests/` to verify everything works
4. **Start your project** - Apply OpenEnv to your RL research!
## 💡 Pro Tips
### Tip 1: Use Vectorized Environments
Collect experience faster by batching environments. Note that `DummyVecEnv` steps its sub-environments sequentially in a single process (low overhead, good for cheap envs); use `SubprocVecEnv` for true multiprocess parallelism:
```python
from stable_baselines3.common.vec_env import DummyVecEnv
from openenv import OpenEnv, EnvConfig
config = EnvConfig()
env = DummyVecEnv([lambda: OpenEnv(config) for _ in range(4)])
```
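Under the hood, a vectorized wrapper simply fans a batch of actions out to its sub-environments, auto-resets finished ones, and stacks the results. A simplified illustration (not SB3's actual implementation):

```python
class SimpleVecEnv:
    """Steps a list of envs in lockstep; illustration of the VecEnv idea."""

    def __init__(self, env_fns):
        # env_fns are zero-argument factories, one per sub-environment
        self.envs = [fn() for fn in env_fns]

    def reset(self):
        return [env.reset()[0] for env in self.envs]

    def step(self, actions):
        results = []
        for env, action in zip(self.envs, actions):
            obs, reward, terminated, truncated, info = env.step(action)
            if terminated or truncated:
                obs, _ = env.reset()  # auto-reset, like SB3's VecEnv
            results.append((obs, reward, terminated or truncated, info))
        obs, rewards, dones, infos = zip(*results)
        return list(obs), list(rewards), list(dones), list(infos)
```

The factory-function indirection (`env_fns` rather than env instances) matters for the subprocess case: each worker process must construct its own environment.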
### Tip 2: Monitor Training
Use TensorBoard for visualization. Pass `tensorboard_log="./logs/openenv"` to the `PPO` constructor so Stable Baselines3 writes event files there, then launch:
```bash
tensorboard --logdir=./logs/openenv
```
### Tip 3: Reproducibility
Always set seeds for reproducible results:
```python
env = OpenEnv()

# Gymnasium-style environments are seeded through reset()
obs, _ = env.reset(seed=42)
```
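Seeding the environment alone is not enough if other parts of your stack draw random numbers. Seed every RNG you use; a sketch with Python's stdlib (the NumPy/PyTorch lines are shown as comments for when those libraries are in play):

```python
import random


def set_global_seed(seed):
    """Seed Python's global RNG; extend for other libraries as needed."""
    random.seed(seed)
    # np.random.seed(seed)      # if you use NumPy
    # torch.manual_seed(seed)   # if you use PyTorch


set_global_seed(42)
a = [random.random() for _ in range(3)]
set_global_seed(42)
b = [random.random() for _ in range(3)]
assert a == b  # identical sequences after re-seeding
```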
## 🤝 Need Help?
- **Documentation:** README.md (full API reference)
- **Examples:** examples/ directory
- **Tests:** tests/ for usage patterns
- **Issues:** GitHub Issues for bugs
---
**Congratulations!** You're ready to start training RL agents with OpenEnv! 🎉
Happy learning! 🚀