Welcome to the Reinforcement Learning (RL) Simulator API documentation! 🚀 This guide explains how to interact with the API to build and test your RL algorithms.
Key Features
- Environment Interaction: Control simulation parameters and agent behavior
- Training Metrics: Access real-time reward, episode stats, and convergence data
- Scenario Customization: Define custom environments and reward functions
- Visualization Tools: Generate interactive plots of training progress
Quick Start Example
```python
import rl_simulator

# Initialize the simulator
sim = rl_simulator.Simulator(environment="CartPole-v1", render_mode="human")

# Train the agent
sim.train(epochs=100, learning_rate=0.001)

# Get results
print(sim.get_metrics())  # Output: {"total_reward": 498, "success_rate": 0.92}
```
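The Quick Start above uses the built-in CartPole-v1 environment. The Scenario Customization feature lets you define your own reward functions; the sketch below is illustrative only and assumes a hypothetical `reward_fn` constructor parameter, which may not match the exact interface in your installed version.

```python
import rl_simulator

# Hypothetical shaped reward: penalize large pole angles
# (assumed interface, not necessarily the exact names exposed by rl_simulator).
def shaped_reward(observation, base_reward):
    pole_angle = observation[2]           # CartPole observation layout
    return base_reward - abs(pole_angle)  # discourage tilting

# Assumed constructor arguments: base environment plus a reward override.
sim = rl_simulator.Simulator(
    environment="CartPole-v1",
    reward_fn=shaped_reward,   # hypothetical parameter name
    render_mode="rgb_array",
)

sim.train(epochs=50, learning_rate=0.001)
print(sim.get_metrics())
```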
API Endpoints
| Endpoint | Description | Example |
|---|---|---|
| `/api/simulate` | Start a new simulation | `POST /api/simulate?env=MountainCar` |
| `/api/train` | Train the agent | `GET /api/train?algorithm=Q-Learning` |
| `/api/analyze` | Retrieve performance analysis | `GET /api/analyze?format=json` |
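For clients that call the HTTP API directly rather than the Python package, a minimal sketch using Python's `requests` library is shown below. The endpoints and query parameters come from the table above; the base URL and the response fields (e.g. `simulation_id`) are assumptions for illustration.

```python
import requests

BASE_URL = "http://localhost:8000"  # assumed host/port for a local simulator server

# Start a new simulation in the MountainCar environment
resp = requests.post(f"{BASE_URL}/api/simulate", params={"env": "MountainCar"})
resp.raise_for_status()
sim_id = resp.json().get("simulation_id")  # hypothetical response field

# Train the agent with Q-Learning
requests.get(f"{BASE_URL}/api/train", params={"algorithm": "Q-Learning"}).raise_for_status()

# Retrieve the performance analysis as JSON
analysis = requests.get(f"{BASE_URL}/api/analyze", params={"format": "json"}).json()
print(analysis)
```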
Need Help?
If you're new to RL simulators, check out our Quick Start Guide for hands-on examples. 📚
For advanced customization options, explore the RL Simulator Configuration Docs. 🔧