How to achieve replicable results in SenseAct? #51
Hi @fisherxue, we had a similar discussion in this issue. Please read through that entire discussion and try the steps listed there. If you're still running into issues, let me know.
What I've done: I save the random state to a file and load it with pickle before the env is created. I then set the TensorFlow random seed and the Python random seed after `sess.__enter__()`, and I also pass the loaded random state into the environment. With this, I get reasonably consistent results when I run two simulations at the same time. However, when I run one after the other, I get vastly different results. Any advice? This is what I have:

Thanks!
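For reference, the state-saving approach described above can be sketched with Python's stdlib `random` module alone (a minimal illustration of the pattern, not SenseAct's actual code; the same idea applies to a pickled `np.random.RandomState`):

```python
import pickle
import random

# Seed once, then snapshot the generator state before the env is created.
random.seed(123)
saved_state = pickle.dumps(random.getstate())

first_draw = [random.random() for _ in range(3)]

# In a later run, restore the snapshot instead of re-seeding.
random.setstate(pickle.loads(saved_state))
second_draw = [random.random() for _ in range(3)]

assert first_draw == second_draw  # both runs see the identical sequence
```

If two back-to-back runs diverge despite this, some other source of randomness (e.g. an unseeded generator inside the env, or wall-clock-dependent timing in the real-time loop) is likely still in play.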
@gauthamvasan Any tips?
Sorry for the simple question.
I'm trying to replicate my results across runs in SenseAct using PPO. I've set a constant seed to get a fixed random state and have verified that the state is the same across runs. However, the initial network weights and the targets/resets still appear to be generated randomly: the returns are very inconsistent across runs, as are the observations.
In Appendix A.5 of Benchmarking Reinforcement Learning Algorithms on Real-World Robots, it is mentioned that "For the agent, randomization is used to initialize the network and sample actions. For the environment, randomization is used to generate targets and resets. By using the same randomization seed across multiple experiments in this set of experiments, we ensure that the environment generates the same sequence of targets and resets, the agent is initialized with the same network, and it generates the same or similar sequence of actions for a particular task."
Could someone please clarify how this is done? I have tried setting fixed seeds for both NumPy and TensorFlow, and I have also tried setting a fixed seed on each individual TensorFlow operation.
Thanks!
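In case it helps frame the question, here is a minimal, stdlib-only sketch of seeding every source of randomness in one place; the NumPy and TensorFlow calls are shown as comments since they are not needed to demonstrate the pattern, and the idea of handing the environment its own dedicated generator is an assumption about how SenseAct envs consume a `random_state`:

```python
import random

def seed_everything(seed):
    """Seed all known sources of randomness for one experiment run."""
    random.seed(seed)           # Python's global RNG
    # np.random.seed(seed)      # NumPy's global RNG, if NumPy is in use
    # tf.set_random_seed(seed)  # TF1.x graph-level seed for network init
    # Return a dedicated generator to pass into the environment so that
    # target/reset sampling does not share state with the agent's RNG.
    return random.Random(seed)

# Two runs seeded identically should sample identical targets/resets.
env_rng_run1 = seed_everything(42)
env_rng_run2 = seed_everything(42)
assert env_rng_run1.random() == env_rng_run2.random()
```

The design point is that a shared global RNG makes runs diverge as soon as the agent and environment interleave their draws differently, whereas per-component generators keep each stream reproducible on its own.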