
## From "Hello World" to RL Training in 5 Minutes
**What if RL environments were as easy to use as REST APIs?**
That's OpenEnv. Type-safe. Isolated. Production-ready.
[Open in Colab](https://colab.research.google.com/github/meta-pytorch/OpenEnv/blob/main/examples/OpenEnv_Tutorial.ipynb)
[GitHub](https://github.com/meta-pytorch/OpenEnv)
[License: BSD-3-Clause](https://opensource.org/licenses/BSD-3-Clause)
[PyTorch](https://pytorch.org/)
Author: [Sanyam Bhutani](http://twitter.com/bhutanisanyam1/)
## Why OpenEnv?
Let's take a trip down memory lane:
It's 2016, and RL is everywhere. You read some papers, and the results look promising.
But in the real world, CartPole is about the best you can run on a gaming GPU.
What do you do beyond CartPole?
Fast-forward to 2025: GRPO is awesome, and this time it's not just theory. It works well in practice, and it's really here!
But the problem remains: how do you take these RL algorithms beyond CartPole?
A huge part of RL is giving your algorithms access to environments to learn from.
We are excited to introduce an environment spec for adding open environments for RL training. It lets you focus on your experiments and lets everyone bring their own environments.
Focus on experiments, use OpenEnvironments, and build agents that go beyond CartPole, all against a single spec.
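To make the "single spec" idea concrete, here is a toy sketch in plain Python. It is an illustration only: the names (`EchoEnv`, `Action`, `Observation`) are hypothetical stand-ins, not OpenEnv's actual API. The point is that every environment exposes the same typed `reset()`/`step()` surface, so training code doesn't change when the environment does.

```python
from dataclasses import dataclass

# Illustrative types; a real spec would define these once and
# every environment would implement against them.
@dataclass
class Action:
    message: str

@dataclass
class Observation:
    text: str
    reward: float
    done: bool

class EchoEnv:
    """A toy environment: echoes the action back as the observation."""

    def reset(self) -> Observation:
        # Start a fresh episode.
        return Observation(text="ready", reward=0.0, done=False)

    def step(self, action: Action) -> Observation:
        # One interaction: the agent acts, the environment responds.
        return Observation(text=action.message, reward=1.0, done=False)

# Any agent written against reset()/step() works with any such environment.
env = EchoEnv()
obs = env.reset()
obs = env.step(Action(message="hello"))
print(obs.text, obs.reward)  # hello 1.0
```

Swap `EchoEnv` for a game, a code sandbox, or a web browser behind the same interface and the training loop stays untouched; that is what a shared environment spec buys you.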
---
## What You'll Learn