A collection of tools for doing reinforcement learning research in Julia.
Make it easy for new users to run benchmark experiments, compare different algorithms, and evaluate and diagnose agents.
Facilitate reproducibility from traditional tabular methods to modern deep reinforcement learning algorithms.
Provide carefully designed components and interfaces to help users implement new algorithms.
A number of built-in environments and third-party environment wrappers are provided to evaluate algorithms in various scenarios.
ReinforcementLearning.jl is a wrapper package which contains a collection of different packages in the JuliaReinforcementLearning organization. You can run many built-in experiments in just a few lines:
julia> ] add ReinforcementLearningExperiments
julia> using ReinforcementLearningExperiments
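Once the package is loaded, a built-in experiment can be run by name. The experiment name below is only an example; consult the package documentation for the full list of available experiments:

```julia
julia> run(E`JuliaRL_BasicDQN_CartPole`)  # train a basic DQN agent on CartPole
```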
In ReinforcementLearningAnIntroduction.jl, we reproduced most figures in the famous book: Reinforcement Learning: An Introduction (Second Edition). You can try those examples interactively online with the help of MyBinder and learn many tabular reinforcement learning algorithms.
In ReinforcementLearningZoo.jl, many deep reinforcement learning algorithms are implemented, including DQN, C51, Rainbow, IQN, A2C, PPO, DDPG, etc. All algorithms are written in a composable way, making them easy to read, understand, and extend.
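As a minimal sketch of this composable style, an agent is assembled from a policy and a trajectory and run against an environment with a stop condition and a hook. Exact constructor names vary between package versions, so treat the names below as illustrative rather than definitive:

```julia
using ReinforcementLearning

# A built-in environment and a trivial policy, composed into an agent.
env = CartPoleEnv()
agent = Agent(
    policy = RandomPolicy(action_space(env)),  # swap in a learned policy, e.g. a DQN-based QBasedPolicy
    trajectory = VectorSARTTrajectory(),       # stores (state, action, reward, terminal) transitions
)

# The same `run` interface drives any (policy, env, stop condition, hook) combination.
hook = TotalRewardPerEpisode()
run(agent, env, StopAfterEpisode(10), hook)
```

Because each piece (policy, trajectory, stop condition, hook) is an independent component, replacing the random policy with a DQN or PPO learner does not change the surrounding loop.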
Join the Julia Reinforcement Learning community to learn, contribute, and get your questions answered.
Ask package usage questions, discuss designs, and propose new features through GitHub issues. Contributions through pull requests are warmly welcomed!
Ask general reinforcement learning questions on the Julia Discourse in the #machinelearning domain, or on Slack in the #reinforcement-learning channel.
This website is built with Franklin.jl, using the DistillTemplate (licensed under the Apache License 2.0), and Documenter.jl. The source code of this website is licensed under the MIT License. The JuliaReinforcementLearning organization was first created by Johanni Brea and is co-maintained by Jun Tian. We thank all the contributors.