SQLEnv: Teaching Small Models to Explore Databases Like Analysts
Two Agents, Same Setup
**Untrained (base) agent:**
- DESCRIBE Documents ×5 → same table, five times
- SAMPLE Documents ×3 → already saw these rows
- DESCRIBE Templates → wrong table
- DESCRIBE Paragraphs → finally the right table
- QUERY SELECT * LIMIT 5 → no aggregation
- QUERY SELECT * LIMIT 5 → still no COUNT(*)
- ANSWER "76 | Robbin CV" → a random row
**Trained (GRPO) agent:**
- DESCRIBE employee → Employee_ID, Name, Age, City
- QUERY ...ORDER BY Salary DESC → Error: no such column: Salary
- DESCRIBE evaluation → Employee_ID, Year_awarded, Bonus
- QUERY ...JOIN...ORDER BY Bonus DESC → Louis Deacon
- ANSWER "Louis Deacon"
Both agents have the same four tools, the same 15-step budget, and the same databases. Different questions are shown to illustrate the range of behaviors; both agents were evaluated on the same 50-question set in the quantitative comparison below. The untrained agent wastes most of its steps without making progress. The trained agent first explores the schema, then hits an error, adapts, and solves a harder question in a third of the steps.
The Gap
Standard Spider-style text-to-SQL (Yu et al., 2018) gives the model the full schema up front and scores the resulting SQL with exact-match, execution, or test-suite accuracy. This setup rewards memorization. The model never learns to explore a schema or iterate toward a solution, so it struggles on unfamiliar databases with many tables where the full schema cannot fit in context.
SQLEnv takes a different approach. The agent progressively discovers the schema through its own actions: it starts with only table names and must call DESCRIBE, SAMPLE, and QUERY to reveal columns, types, and relationships within a fixed step budget. This is a POMDP (partially observable Markov decision process) where the agent acts under uncertainty, which makes exploration necessary and learnable.
What Analysts Actually Do
Suppose you need to answer a question using data in an unfamiliar database. You probably cannot write the final query in one go. Instead, you run DESCRIBE to see what columns exist, SELECT * LIMIT 5 to scan the actual data, and then build your query piece by piece: adjusting joins, fixing column names, and retrying after errors. The answer emerges from iteration.
SQLEnv captures this workflow. Four actions mirror what analysts do:
- DESCRIBE reveals column names and types for a table
- SAMPLE previews rows to understand the data (available but rarely used by the trained agent, which learned to rely on DESCRIBE and QUERY)
- QUERY executes a read-only SQL query
- ANSWER submits a final answer
Each episode starts with a natural-language question and a list of table names. Columns, types, and relationships stay hidden until the agent discovers them through exploration. This partial observability forces strategy over pattern-matching.
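The typed contract between agent and environment can be sketched as Pydantic models. This is an illustrative sketch, not SQLEnv's exact schema: the `action_type`/`argument` fields match the `SQLAction` usage shown in the integration snippet later, while the observation fields beyond `reward` and `done` are assumptions.

```python
from typing import Literal

from pydantic import BaseModel, ValidationError


class SQLAction(BaseModel):
    # The four analyst-style tools; Literal restricts the action space,
    # so malformed actions are rejected before reaching the database.
    action_type: Literal["DESCRIBE", "SAMPLE", "QUERY", "ANSWER"]
    argument: str


class SQLObservation(BaseModel):
    # `text` is illustrative: the tool output or error shown to the agent.
    text: str
    reward: float
    done: bool


action = SQLAction(action_type="QUERY", argument="SELECT COUNT(*) FROM employee")

# Anything outside the four tools fails validation at the boundary.
try:
    SQLAction(action_type="DROP", argument="employee")
    rejected = False
except ValidationError:
    rejected = True
```

Validation at the model boundary is what makes partial observability safe to enforce: the agent can only interact through these four typed channels.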
A clean episode on the question "List student IDs with registered courses and their course counts":
Step 1: DESCRIBE student_course_registrations
  → student_id INTEGER, course_id INTEGER, registration_date DATETIME
  → reward: +0.015
Step 2: DESCRIBE students
  → student_id INTEGER, student_details VARCHAR(255)
  → reward: +0.015
Step 3: QUERY SELECT T1.student_id, count(*)
        FROM students AS T1
        JOIN student_course_registrations AS T2
        ON T1.student_id = T2.student_id
        GROUP BY T1.student_id
  → 111|1, 121|2, 131|1, 141|2, 151|1, 161|1, 171|1
  → reward: +0.150
Step 4: ANSWER [[111,1],[121,2],[131,1],[141,2],[151,1],[161,1],[171,1]]
  → correct
  → reward: +1.000
Total reward: 1.180. Four steps. Exploration: 0.180, terminal: 1.000.
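The accounting for this episode can be checked directly: three shaped exploration rewards plus the terminal reward for a correct answer.

```python
# Per-step rewards from the trace above: two DESCRIBEs and one QUERY.
step_rewards = [0.015, 0.015, 0.150]
terminal = 1.0  # correct ANSWER

exploration = sum(step_rewards)
total = exploration + terminal
print(f"exploration={exploration:.3f}, total={total:.3f}")
```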
Built on OpenEnv
OpenEnv provides a Gymnasium-style interface for agentic RL environments. The contract is simple:
- `reset(seed)` starts a new episode and returns the initial observation
- `step(action)` executes one action and returns the observation, reward, and done flag
Pydantic models enforce typed contracts between agent and environment. We tested integration with TRL's environment_factory for GRPO training. OpenEnv is also designed to work with torchforge and Unsloth, though we have not tested those integrations.
SQLEnv implements this protocol with four domain-specific actions:
env = SQLEnvironment(questions_path="...", db_dir="...", tokenizer=tok)
obs = env.reset(seed=42) # pick question, load DB, hide schema
obs = env.step(SQLAction(action_type="DESCRIBE", argument="employee"))
obs = env.step(SQLAction(action_type="QUERY", argument="SELECT COUNT(*) FROM employee"))
obs = env.step(SQLAction(action_type="ANSWER", argument="10"))
# obs.done=True, obs.reward=1.0
TRL's environment_factory auto-discovers the four tool methods (describe, sample, query, answer) from the environment class for GRPO training. The same environment runs locally, in Docker on HuggingFace Spaces, or over WebSocket via SQLEnvClient.
The Green Agent evaluator wraps this protocol for benchmarking:
evaluate(env, policy, n_episodes=50, seed=0)
This runs any Policy through the environment and reports success rate, average reward, and step count. Built-in RandomPolicy and OraclePolicy baselines provide lower and upper bounds (0% vs 100% accuracy, 0.25 vs 1.17 reward).
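A minimal sketch of what such a harness might compute, under assumptions: `StubEnv` is a stand-in for SQLEnv (not its real class), a policy is any callable from observation to action, and success is detected by a terminal reward of 1.0 as in the reward table below.

```python
import statistics
from dataclasses import dataclass


@dataclass
class Obs:
    reward: float
    done: bool


class StubEnv:
    """Toy stand-in for SQLEnvironment: one ANSWER ends the episode."""

    def reset(self, seed=0):
        return Obs(reward=0.0, done=False)

    def step(self, action):
        done = action == "ANSWER"
        # Correct answers earn the 1.0 terminal reward; other steps
        # earn a small exploration reward.
        return Obs(reward=1.0 if done else 0.02, done=done)


def evaluate(env, policy, n_episodes=50, seed=0, max_steps=15):
    successes, rewards, steps = 0, [], []
    for ep in range(n_episodes):
        obs, total = env.reset(seed=seed + ep), 0.0
        for t in range(1, max_steps + 1):
            obs = env.step(policy(obs))
            total += obs.reward
            if obs.done:
                break
        successes += obs.done and obs.reward == 1.0  # correct terminal answer
        rewards.append(total)
        steps.append(t)
    return successes / n_episodes, statistics.mean(rewards), statistics.mean(steps)


acc, avg_reward, avg_steps = evaluate(StubEnv(), lambda obs: "ANSWER", n_episodes=5)
```

The same three numbers (success rate, average reward, average steps) are the columns reported in the baseline and evaluation tables below.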
Reward Architecture
Three layers of reward signal:
| Layer | Signal | Per-step clip |
|---|---|---|
| L1: Operational | Successful execution (+0.02), new info (+0.01), repeat penalty (-0.03), step cost (-0.02) | [-0.10, 0.15] |
| L2: Progress | Delta from previous query result β cardinality, value overlap, numeric proximity | [-0.10, 0.15] |
| L3: Terminal | Correct answer: +1.0. Wrong: 0.0 | one-shot |
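The L1 layer can be sketched as a small function. The individual bonus and penalty values and the clip range come from the table; how they combine into one per-step number is an assumption.

```python
def l1_reward(executed_ok: bool, new_info: bool, is_repeat: bool) -> float:
    """Illustrative L1 operational reward, clipped to [-0.10, 0.15]."""
    r = -0.02          # step cost, always applied
    if executed_ok:
        r += 0.02      # query/tool call executed successfully
    if new_info:
        r += 0.01      # revealed something not seen before
    if is_repeat:
        r -= 0.03      # repeated an earlier action
    return max(-0.10, min(0.15, r))  # per-step clip from the table
```

A successful, novel action nets a small positive reward; a repeated action nets a penalty, which is what pushes the agent away from the describe-loop behavior shown in the opening comparison.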
Terminal correctness dominates. Maximum exploration reward across 15 steps is ~0.3, while a correct answer adds 1.0. An agent that explores but never answers always scores below one that answers correctly. Prior work on tool-using agents suggests that dense intermediate rewards are important for training small models (see TIPS, ToolRL, StepTool below). We did not ablate this by testing terminal-only reward at 0.6B parameters, but the progressive reward signal let us verify that the agent was learning the right strategic patterns: reward climbed from -0.1 to 0.5-0.7 as the agent shifted from random tool calls to describe-then-query-then-answer sequences.
The L2 progress component is inspired by potential-based reward shaping (Ng et al., 1999). The raw delta follows the potential-difference form (phi(s') - phi(s)), but the combined reward includes binning and per-step clipping for training stability, which departs from the strict theoretical guarantee. We confirmed empirically that the potential-difference structure matters: cumulative progress caps (not potential-based) caused the agent to explore endlessly and never answer.
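The potential-difference structure with binning and clipping can be sketched as follows. The potential `phi` here is a toy similarity (row overlap with the gold result); the real L2 signal combines cardinality, value overlap, and numeric proximity, and the bin width is an assumption.

```python
def phi(result_rows, gold_rows):
    """Toy potential: fraction of gold rows recovered by the latest query."""
    if not result_rows:
        return 0.0
    overlap = len(set(result_rows) & set(gold_rows))
    return overlap / max(len(gold_rows), 1)


def l2_reward(prev_rows, curr_rows, gold_rows):
    # Potential-difference form (Ng et al., 1999): phi(s') - phi(s).
    delta = phi(curr_rows, gold_rows) - phi(prev_rows, gold_rows)
    binned = round(delta * 4) / 4          # coarse bins for stability (assumed width)
    return max(-0.10, min(0.15, binned))   # per-step clip from the table
```

Because the reward is a difference of potentials, a query that gets closer to the gold rows earns a positive delta and one that regresses earns a negative one; standing still earns nothing, which is why the agent cannot farm progress reward by looping.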
Note that the L2 progress reward uses the gold answer (executed gold SQL rows) as a training-time verifier signal. This supervision is computed server-side and never visible to the agent, but it means the dense shaping relies on ground truth that is not available at inference.
Recent work supports dense shaping for tool-using agents. TIPS (2026) reported 11.8% EM gains over PPO baselines in search-augmented multi-turn QA. ToolRL (2025) found 17% improvement over base models and 15% over SFT models through principled reward design for tool learning. StepTool (2024) found step-grained shaping outperformed outcome-only rewards in tool-use benchmarks.
Training
We train Qwen3-0.6B with Group Relative Policy Optimization (GRPO, from DeepSeekMath). TRL's environment_factory runs the agent through SQLEnv for each rollout, comparing multiple rollouts per question to compute advantages.
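The group-relative advantage at GRPO's core is small enough to sketch; `group_advantages` is an illustrative helper, since TRL computes this internally. It normalizes each rollout's total reward against the other rollouts for the same question.

```python
import statistics


def group_advantages(rewards, eps=1e-4):
    """Group-relative advantage: reward standardized within the rollout group."""
    mu = statistics.mean(rewards)
    sigma = statistics.pstdev(rewards)
    return [(r - mu) / (sigma + eps) for r in rewards]


# One correct rollout in a group of four gets a strongly positive advantage.
advs = group_advantages([1.15, 0.10, 0.10, 0.10])
```

This also explains the SFT collapse described next: when every rollout in the group earns an identical reward, the within-group standard deviation is zero and every advantage is zero, so no policy gradient forms.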
SFT warmup proved critical. Per-turn SFT (347 examples, each one assistant turn) taught the model to call describe forever. Half the training examples were describe calls, so the model learned "when asked a question, call describe." When we applied a KL penalty during RL, every rollout stayed identical to this SFT behavior. The advantage between rollouts was zero, so no policy gradient could form.
Multi-turn SFT (120 full trajectories with assistant_only_loss) taught describe-then-query-then-answer as a coherent strategy. The subsequent GRPO training refined this into error recovery, answer formatting, and knowing when to stop exploring.
Two-phase curriculum:
- Phase 1: Easy questions (single-table), KL penalty (beta=0.04) to keep the policy close to SFT initialization while allowing exploration. Reward climbs from -0.1 to 0.5-0.7 over 400 steps.
- Phase 2: Easy + medium (multi-table JOINs), KL removed (beta=0) so the agent can deviate further from SFT and discover new strategies. Reward holds at ~0.5.
Batch-mean reward across Phase 1 (easy, beta=0.04) and Phase 2 (easy+medium, beta=0). Reward starts negative and climbs to 0.5-0.7 as the agent learns describe-then-query-then-answer. Phase 2 holds at ~0.5. Individual episodes that solve the question correctly receive up to 1.15 total reward, but the batch mean is lower because most rollouts within a batch include incorrect attempts. 902 steps, ~4.75h on Colab L4.
SFT warmup takes ~1 minute (60 steps, loss drops from 1.6 to 0.08 in 2 epochs). GRPO Phase 1 runs ~2.25h, Phase 2 ~2.5h. Total pipeline: ~5 hours on a single Colab L4 (24GB VRAM), in one notebook session.
What the Agent Learned
Several behaviors emerged during training: the agent describes relevant tables before querying, recovers from SQL errors by re-checking the schema, formats answers consistently, and stops exploring once it has enough information to answer. The evaluation below quantifies the effect.
Evaluation (N=50 episodes, 2 independent runs)
All conditions run through SQLEnv's Green Agent evaluator: evaluate(env, policy, n_episodes, seed). The same harness powers the showcase notebook (Random vs Oracle baselines) and the full comparison below.
| Method | Accuracy | Parse Rate | Avg Steps |
|---|---|---|---|
| Zero-shot | 0% | 24-28% | 10.8-12.4 |
| 1-shot | 0-2% | 16-17% | 14.0-14.8 |
| 3-shot | 0% | 19-20% | 13.8-14.8 |
| GRPO v1 (2 epochs) | 28-30% | 95-100% | 3.5-4.0 |
| GRPO v2 (4 epochs) | 24-32% | 87-95% | 3.5-4.0 |
Two results stand out. Parse rate: the trained model (v1) produces valid tool-call JSON 95-100% of the time. The base model fails 76-83% of the time and burns its step budget repeating malformed output. Accuracy: the base model cannot answer a single question even with 3 examples, but the trained model solves 14-15 out of 50 on this curated Spider subset.
We trained two GRPO checkpoints: v1 (2 epochs) and v2 (4 epochs). Across two evaluation runs each, v1 scored 28-30% accuracy with 95-100% parse rate, while v2 scored 24-32% with 87-95% parse rate. The run-to-run variation (6-8 percentage points) at N=50 makes checkpoint-to-checkpoint differences hard to interpret. Extended training also introduced an abstention pattern: v2 sometimes outputs "Task complete" instead of calling answer() on uncertain questions, which increases parse failures but may reflect learned caution. On this subset, additional RL training did not improve accuracy, which indicates that the bottleneck is the model's 0.6B pretraining rather than the training budget.
Limitations at 0.6B Parameters
On this curated 10-database Spider subset, three failure modes define the current ceiling:
- Column name hallucination. The model reads `FullName` from DESCRIBE but writes `full_name` in SQL, or reads `Horsepower` and writes `HorsepowerDESC` (missing space). Pretraining biases override the schema the model just observed in context.
- FK chain reasoning. The model handles single-table queries well but fails on three-table JOINs such as Documents → Templates → Ref_Template_Types. It cannot chain foreign keys through intermediate tables.
- More RL does not help. Extended training (v2: 4 total epochs) produced identical accuracy. This indicates the ceiling comes from pretraining knowledge rather than training budget.
On this subset, RL drives accuracy from 0% to ~30% but saturates at 0.6B capacity. The eval set is easy-heavy (91% single/two-table questions, no hard questions) and uses N=50 episodes per condition, so these results should not be generalized beyond this setup. Train and eval use mostly separate databases, with one schema (flight_2) appearing in both. We did not explore whether SFT on multi-table reasoning or structured thinking before JOINs could push past this ceiling in our current work. We discuss possible directions in the Next Steps section.
This evaluation is not comparable to the official Spider leaderboard, which uses different scoring (test-suite accuracy), full-schema input, and a broader database set.
The Learning Space
The Green Agent evaluator brackets the learning space with two baselines:
| Policy | Accuracy | Avg Reward | Avg Steps |
|---|---|---|---|
| Random | 0% | 0.247 | 15.0 |
| GRPO (trained) | ~30% | ~0.35 | 3.5 |
| Oracle | 100% | 1.168 | 3.5 |
Random scores 0.247 by accumulating small exploration rewards across 15 steps without answering. Oracle scores 1.168 in 3.5 steps. This gap between 0.25 and 1.17 represents what a trained agent can learn. Our GRPO agent lands at ~0.35, above random but far below oracle, with room for improvement through better SFT warmup or larger models.
Technical Highlights
- 676 questions (473 train / 203 eval) across 10 Spider databases with difficulty labels
- Typed models with Pydantic: every action, observation, and state is explicit and debuggable
- Read-only SQL via SQLite `mode=ro`: the database engine enforces safety rather than a regex filter
- Reward shaping inspired by potential-based methods (Ng et al., 1999), with practical modifications for training stability
- TRL environment_factory integration for standard GRPO training without a custom loop
- Green Agent evaluator with a `Policy` protocol, an `evaluate()` harness, and `RandomPolicy`/`OraclePolicy` baselines
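The read-only guarantee from the highlights can be demonstrated with the standard library alone: opening the database with SQLite's URI `mode=ro` makes the engine itself reject writes, so no SQL-text filtering is needed. A self-contained sketch:

```python
import os
import sqlite3
import tempfile

# Create a throwaway database with a read-write connection first.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
rw = sqlite3.connect(path)
rw.execute("CREATE TABLE employee (id INTEGER, name TEXT)")
rw.commit()
rw.close()

# Reopen it read-only via a URI: SELECTs work, any write fails inside SQLite.
ro = sqlite3.connect(f"file:{path}?mode=ro", uri=True)
rows = ro.execute("SELECT * FROM employee").fetchall()
try:
    ro.execute("INSERT INTO employee VALUES (1, 'x')")
    write_rejected = False
except sqlite3.OperationalError:
    write_rejected = True  # "attempt to write a readonly database"
ro.close()
```

Enforcing safety at the engine level means even a query the agent constructs adversarially or by accident cannot mutate the database mid-episode.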
Next Steps
The environment supports two directions for improvement:
Thinking mode. The 30% ceiling comes from multi-table reasoning. The model cannot plan a three-table JOIN path before writing SQL. Qwen3's <think> blocks offer a way to reason about the join chain before writing the query. In our experiments, RL alone did not produce useful thinking: the model either emitted empty <think></think> blocks or collapsed into degenerate loops (<think>assistant<think>assistant...) that consumed ~23% of rollouts. Pure RL discovers that thinking tokens exist but not how to use them. SFT warmup with structured reasoning examples ("I need to join Documents β Templates β Ref_Template_Types through Template_ID") could bootstrap the format, then RL could refine when to think and when to skip. This is worth testing at 0.6B before concluding the ceiling requires a larger model.
Larger models. Our goal is small models that run locally, so scaling to 7B or beyond changes the deployment story. That said, a 1.7B model has more capacity to attend to DESCRIBE output and override pretrained column names. The environment and reward architecture do not depend on model size, so scaling up requires changing the training configuration rather than redesigning the environment. At some point, larger models may solve these questions with few-shot prompting alone, but the environment remains useful for training small models that need to run without API access.
Try It Yourself
- Training notebook: `notebooks/train_grpo.ipynb` runs the full SFT + GRPO pipeline on Colab L4 in ~7 hours
- Comparison notebook: `notebooks/compare_methods.ipynb` evaluates base vs trained models side by side
- Showcase notebook: `notebooks/showcase_sqlenv.ipynb` lets you explore the environment, run episodes, and see what tools and rewards are available
- GitHub: full source, architecture docs, and training artifacts
Discussion
The format of SFT data matters more than the quantity. Per-turn SFT (347 examples) taught the model individual tool calls but not when to use them. The model called describe repeatedly because half the training examples were describe calls. Multi-turn SFT (120 full trajectories) taught the model to chain describe, query, and answer into a coherent episode. The difference was not the number of examples but whether each example showed a complete problem-solving sequence.
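The contrast between the two formats is easiest to see side by side. This sketch uses the common chat-messages convention; the exact field names and tool-call encoding are assumptions, not SQLEnv's actual SFT schema.

```python
# Per-turn format: one assistant turn per example. With half of all
# examples ending in a describe call, the model learns the shallow rule
# "when asked a question, call describe."
per_turn_example = {
    "messages": [
        {"role": "user", "content": "Question: ... Tables: employee, evaluation"},
        {"role": "assistant", "content": '{"tool": "describe", "argument": "employee"}'},
    ]
}

# Multi-turn format: one full trajectory per example, with loss computed
# only on assistant turns (assistant_only_loss). The whole
# describe -> query -> answer strategy is a single training sequence.
multi_turn_example = {
    "messages": [
        {"role": "user", "content": "Question: ... Tables: employee, evaluation"},
        {"role": "assistant", "content": '{"tool": "describe", "argument": "employee"}'},
        {"role": "tool", "content": "Employee_ID, Name, Age, City"},
        {"role": "assistant", "content": '{"tool": "query", "argument": "SELECT ..."}'},
        {"role": "tool", "content": "Louis Deacon"},
        {"role": "assistant", "content": '{"tool": "answer", "argument": "Louis Deacon"}'},
    ]
}
```

The per-turn example teaches the model what a tool call looks like; only the multi-turn example teaches when each tool belongs in a solution.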
Transparent errors help the agent learn. When the environment returns "Error: no such column: full_name" instead of empty results, the agent can develop error-recovery strategies. Informative error messages give the RL training signal something to work with.
Dense rewards benefit from theoretical grounding. Potential-based shaping (Ng et al., 1999) provides the theoretical foundation for our reward design, though our implementation includes practical modifications (binning, clipping) that depart from the strict potential-difference form. Without some form of dense shaping, we observed agents accumulating exploration rewards instead of answering questions. Recent work supports this direction: TIPS (2026) reported gains over PPO baselines in multi-turn QA, ToolRL (2025) found improvements through principled reward decomposition, and StepTool (2024) found step-grained shaping outperformed outcome-only rewards. These results suggest that principled reward design is important for multi-turn environments.
The environment is the contribution. The action space, reward function, and episode structure do not depend on the choice of model or RL algorithm. SQLEnv targets small models that need to learn database exploration through training, since larger models can often handle these tasks with few-shot prompting alone. As newer small language models become available, the environment provides a training ground for teaching them iterative reasoning.