
RAGEN
Training Agents by Reinforcing Reasoning
RAGEN leverages reinforcement learning to train LLM reasoning agents in interactive, stochastic environments.

Figure: Comparison between RAGEN and existing LLM training methods.
Figure: The StarPO (State-Thinking-Action-Reward Policy Optimization) framework, with two interleaved stages: a rollout stage (initial state → reasoning → action → reward) and an update stage.
The framework consists of two key components:
MDP Formulation
We formulate agent-environment interactions as Markov Decision Processes (MDPs) where states and actions are token sequences, allowing LLMs to reason over environment dynamics.
At time \(t\), state \(s_t\) transitions to the next state through action \(a_t\) following a transition function \(P(s_{t+1} | s_t, a_t)\). The policy \(\pi(a_t | s_t)\) generates actions given the trajectory history. The objective is to maximize expected cumulative rewards \(\mathbb{E}_\pi[\sum_t \gamma^t r_t]\) across multiple interaction turns.
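As a minimal sketch, this interaction loop can be written as follows; the `env` and `llm` objects and their methods are illustrative placeholders, not RAGEN's actual interfaces:

```python
# Minimal sketch of the MDP interaction loop above. The `env` and `llm` objects
# and their methods are hypothetical placeholders, not RAGEN's actual API.
def rollout_return(env, llm, max_turns: int = 5, gamma: float = 0.95) -> float:
    """Run one episode and return the discounted return sum_t gamma^t * r_t."""
    state = env.reset()                        # s_0, expressed as a token sequence
    history, ret, discount = [], 0.0, 1.0
    for _ in range(max_turns):
        action = llm.generate(history, state)           # a_t ~ pi(a_t | s_t, history)
        next_state, reward, done = env.step(action)     # s_{t+1} ~ P(. | s_t, a_t), r_t
        history.append((state, action, reward))
        ret += discount * reward
        discount *= gamma
        state = next_state
        if done:
            break
    return ret
```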
StarPO: Reinforcing Reasoning via Trajectory-Level Optimization
StarPO is a general RL framework for optimizing entire multi-turn interaction trajectories for LLM agents. The algorithm alternates between two phases:
Rollout Stage: Reasoning-Interaction Trajectories
Given an initial state, the LLM generates multiple trajectories. At each step, the model receives the trajectory history and generates a reasoning-guided action:
<think>...reasoning process...</think><ans> action </ans>
The environment receives the action and returns feedback (a reward and the next state), e.g., "New state: Problem solved correctly."
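A minimal sketch of how the `<think>`/`<ans>` output format above could be parsed into a reasoning string and an executable action; the regular expressions and fallback behavior are assumptions, not RAGEN's parser:

```python
import re

THINK_RE = re.compile(r"<think>(.*?)</think>", re.DOTALL)
ANS_RE = re.compile(r"<ans>(.*?)</ans>", re.DOTALL)

def parse_response(output: str) -> tuple[str, str]:
    """Split a model response into (reasoning, action)."""
    think = THINK_RE.search(output)
    ans = ANS_RE.search(output)
    reasoning = think.group(1).strip() if think else ""
    action = ans.group(1).strip() if ans else output.strip()  # fall back to raw text
    return reasoning, action

reasoning, action = parse_response(
    "<think>The box is left of the target, so push right.</think><ans>right</ans>"
)
print(action)  # -> right
```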
Update Stage: Multi-turn Trajectory Optimization
After generating trajectories, we train LLMs to optimize expected rewards. Instead of step-by-step optimization, StarPO optimizes entire trajectories using importance sampling. This approach enables long-horizon reasoning while maintaining computational efficiency.
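A minimal sketch of such a trajectory-level objective, written as a clipped, importance-weighted surrogate over all tokens of a trajectory; the tensor names, shapes, and clipping constant are assumptions, not RAGEN's implementation:

```python
import torch

def trajectory_surrogate_loss(logp_new: torch.Tensor,    # (T,) log pi_theta(token_t | context)
                              logp_old: torch.Tensor,    # (T,) log-probs under the rollout policy
                              advantages: torch.Tensor,  # (T,) advantage assigned to each token
                              clip_eps: float = 0.2) -> torch.Tensor:
    """Clipped importance-sampling surrogate averaged over a whole trajectory."""
    ratio = torch.exp(logp_new - logp_old)                       # per-token importance ratio
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps)
    return -torch.mean(torch.minimum(ratio * advantages, clipped * advantages))
```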
StarPO supports multiple optimization strategies:
PPO (Proximal Policy Optimization): We estimate token-level advantages using a value function over trajectories
GRPO (Group Relative Policy Optimization): We assign a normalized reward to the full trajectory
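For the GRPO-style option, a minimal sketch of assigning a group-normalized reward to each trajectory; the data layout and epsilon are illustrative assumptions:

```python
import statistics

def group_normalized_advantages(group_returns: list[float], eps: float = 1e-8) -> list[float]:
    """Normalize each trajectory's total reward against the other rollouts of the same prompt."""
    mean = statistics.fmean(group_returns)
    std = statistics.pstdev(group_returns)
    return [(r - mean) / (std + eps) for r in group_returns]

# Eight rollouts from one initial state, each with its total reward:
print(group_normalized_advantages([1.0, 0.0, 0.0, 1.0, 0.5, 0.0, 1.0, 0.0]))
```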
Rollout and update stages interleave in StarPO, enabling both online and offline learning.
RAGEN Trajectory Examples
Explore agent trajectories across different tasks. View state transitions, LLM-generated actions, and the decision-making process.
Findings
Key findings from our research on LLM reasoning stability and reinforcement learning dynamics.
Finding 1: Multi-turn training introduces new instability patterns
Adaptations of single-turn RL methods such as PPO and GRPO achieve early gains in agent settings but often collapse. A critic in PPO may delay instability, but it does not prevent reasoning degradation, highlighting the need for specialized stabilization when training agents.
Finding 2: Model collapse in agent RL appears as an "Echo Trap" over training
We find that early-stage agents respond with diverse symbolic reasoning but collapse into deterministic, repetitive templates after training. Models converge to fixed phrasing, indicating that RL may reinforce superficial patterns instead of general reasoning, forming an "Echo Trap" that hinders long-term generalization.
Finding 3: Collapse follows similar dynamics and can be anticipated by indicators
Reward standard deviation and entropy often fluctuate before performance degrades, while gradient norm spikes typically mark the point of irreversible collapse. These metrics provide early indicators and motivate the need for stabilization strategies.
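A minimal sketch of such an early-warning check; the thresholds below are illustrative assumptions, not values measured in our experiments:

```python
def collapse_warnings(reward_std: float, entropy: float, grad_norm: float,
                      min_reward_std: float = 0.05,
                      min_entropy: float = 0.1,
                      max_grad_norm: float = 10.0) -> list[str]:
    """Flag per-update statistics that often precede or mark collapse."""
    warnings = []
    if reward_std < min_reward_std:
        warnings.append("rollout rewards nearly identical (possible Echo Trap)")
    if entropy < min_entropy:
        warnings.append("policy entropy collapsing")
    if grad_norm > max_grad_norm:
        warnings.append("gradient norm spike (possible irreversible collapse)")
    return warnings

print(collapse_warnings(reward_std=0.01, entropy=0.05, grad_norm=25.0))
```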
Finding 4: Uncertainty-based filtering improves training stability and efficiency
Filtering training data by reward variance effectively combats the "Echo Trap": retaining only the highly uncertain training instances delays or prevents collapse across tasks and improves data efficiency.
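A minimal sketch of this filtering step; the data layout and the `keep_ratio` parameter are illustrative assumptions:

```python
import statistics

def filter_by_reward_variance(rollout_rewards: dict[str, list[float]],
                              keep_ratio: float = 0.25) -> list[str]:
    """Keep only the prompts whose rollout-reward standard deviation is highest."""
    ranked = sorted(rollout_rewards,
                    key=lambda pid: statistics.pstdev(rollout_rewards[pid]),
                    reverse=True)
    n_keep = max(1, int(len(ranked) * keep_ratio))
    return ranked[:n_keep]

rewards = {
    "prompt_a": [1.0, 0.0, 1.0, 0.0],   # uncertain: kept for training
    "prompt_b": [1.0, 1.0, 1.0, 1.0],   # always solved: filtered out
    "prompt_c": [0.0, 0.0, 0.0, 0.0],   # never solved: filtered out
}
print(filter_by_reward_variance(rewards, keep_ratio=0.34))  # -> ['prompt_a']
```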
Finding 5: Task diversity, action budget, and rollout frequency affect data quality
Diverse task instances enable better policy contrast and generalization across environments. Suitable action budgets provide enough planning space and avoid the noise introduced by overly long sequences. Up-to-date rollouts ensure optimization targets remain aligned with current policy behavior.