# ReinforcementLearningDatasets.jl

ReinforcementLearningDatasets.AtariDataSet - Type

Represents an Iterable dataset with the following fields:

Fields

• dataset::Dict{Symbol, Any}: representation of the dataset as a Dictionary, keyed according to style.
• epochs::Vector{Int}: list of epochs loaded.
• repo::String: the repository from which the dataset is taken.
• length::Int: the length of the dataset.
• batch_size::Int: the size of the batches returned by iterate.
• style::Tuple{Symbol}: the style of the iterator that is returned; check out SARTS, SART and SA for the styles supported out of the box.
• rng<:AbstractRNG.
• meta::Dict: the metadata provided along with the dataset.
• is_shuffle::Bool: determines if the batches returned by iterate are shuffled.
ReinforcementLearningDatasets.BufferedShuffle - Type
BufferedShuffle(src::Channel{T}, buffer::Vector{T}, rng<:AbstractRNG)

This type holds the output of the buffered_shuffle function and subtypes AbstractChannel. It therefore acts as a channel that holds a shuffled buffer of type Vector{T}.

Fields

• src::Channel{T}: the source Channel.
• buffer::Vector{T}: the shuffled buffer.
• rng<:AbstractRNG.
ReinforcementLearningDatasets.D4RLDataSet - Type

Represents an Iterable dataset with the following fields:

Fields

• dataset::Dict{Symbol, Any}: representation of the dataset as a Dictionary, keyed according to style.
• repo::String: the repository from which the dataset is taken.
• dataset_size::Int: the number of samples in the dataset.
• batch_size::Int: the size of the batches returned by iterate.
• style::Tuple{Symbol}: the style of the iterator that is returned; check out SARTS, SART and SA for the styles supported out of the box.
• rng<:AbstractRNG.
• meta::Dict: the metadata provided along with the dataset.
• is_shuffle::Bool: determines if the batches returned by iterate are shuffled.
ReinforcementLearningDatasets.RingBuffer - Method
RingBuffer(f!, buffer, taskref=nothing)

Return a RingBuffer that yields batches with the specs given in buffer.

Arguments

• f!: the in-place operation to perform on the buffer.
• buffer::T: the type containing the batch.
• sz::Int: the size of the internal buffers.
ReinforcementLearningDatasets.buffered_shuffle - Method
buffered_shuffle(src::Channel{T}, buffer_size::Int; rng=Random.GLOBAL_RNG)

Returns a BufferedShuffle Channel.

Arguments:

• src::Channel{T}: the source Channel.
• buffer_size::Int: the size of the buffered channel.
• rng<:AbstractRNG = Random.GLOBAL_RNG.
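To illustrate how buffered_shuffle composes with an ordinary Channel, here is a minimal sketch; the toy source channel and its contents are illustrative, not part of the package:

```julia
using ReinforcementLearningDatasets
using Random

# A toy source channel yielding the numbers 1 to 100.
src = Channel{Int}(10) do ch
    for i in 1:100
        put!(ch, i)
    end
end

# Wrap it in a shuffle buffer of size 50. Elements taken from the
# result come out in randomized order, drawn from the buffer.
shuffled = buffered_shuffle(src, 50; rng = MersenneTwister(42))
first_sample = take!(shuffled)
```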
ReinforcementLearningDatasets.d4rl_policy - Method
d4rl_policy(env, agent, epoch)

Return a D4RLGaussianNetwork from deep_ope with preloaded weights. Check out d4rl_policy_params() for more info on arguments.

Arguments

• env::String: name of the env.
• agent::String: can be dapg or online.
• epoch::Int: can be in 0:10.
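A minimal usage sketch; the particular env name, agent and epoch values below are assumptions chosen for illustration (see d4rl_policy_params() for the valid values):

```julia
using ReinforcementLearningDatasets

# Load the pretrained Gaussian policy network for a given
# env/agent/epoch combination (all three values illustrative).
policy = d4rl_policy("hopper", "online", 10)
```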
ReinforcementLearningDatasets.dataset - Method
dataset(dataset, index, epochs; <keyword arguments>)

Create a dataset enclosed in an AtariDataSet iterable type, which also contains related metadata for the dataset that is passed. The returned type is an infinite or a finite iterator depending on whether is_shuffle is true or false, respectively. For more information regarding the dataset, refer to [google-research/batch_rl](https://github.com/google-research/batch_rl). Check out atari_params() for more info on arguments.

Arguments

• dataset::String: name of the dataset.
• index::Int: analogous to v; different values correspond to different seeds used for data collection. Can be in 1:5.
• epochs::Vector{Int}: list of epochs to load. Included epochs should be in 0:50.
• style::NTuple=SARTS: the style of the Iterator that is returned. can be SARTS, SART or SA.
• repo::String="atari-replay-datasets": name of the repository of the dataset.
• rng::AbstractRNG=StableRNG(123).
• is_shuffle::Bool=true: determines if the dataset is shuffled or not.
• batch_size::Int=256: batch_size that is yielded by the iterator.
Warning

The dataset takes up a significant amount of space in RAM. It is therefore advised to have at least 20 GB of RAM available even to load a single epoch. We are looking for ways to use lazy data loading here and any contributions are welcome.
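A usage sketch, assuming "pong" is an available game name (see atari_params() for the valid values):

```julia
using ReinforcementLearningDatasets

# Load one epoch of one Atari run. With is_shuffle=true the
# returned iterator is infinite, so take a single batch explicitly.
ds = dataset("pong", 1, [1]; style = SARTS, batch_size = 64)
batch, _ = iterate(ds)
# With SARTS style the batch carries state, action, reward,
# terminal and next_state components.
```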

ReinforcementLearningDatasets.dataset - Method
dataset(dataset; <keyword arguments>)

Create a dataset enclosed in a D4RLDataSet iterable type, which also contains related metadata for the dataset that is passed. The returned type is an infinite or a finite iterator depending on whether is_shuffle is true or false, respectively. For more information regarding the dataset, refer to D4RL. Check out d4rl_pybullet_dataset_params() or d4rl_dataset_params() for more info on arguments.

Arguments

• dataset::String: name of the dataset.
• repo::String="d4rl": name of the repository of the dataset. can be "d4rl" or "d4rl-pybullet".
• style::Tuple{Symbol}=SARTS: the style of the Iterator that is returned. can be SARTS, SART or SA.
• rng<:AbstractRNG=StableRNG(123).
• is_shuffle::Bool=true: determines if the dataset is shuffled or not.
• batch_size::Int=256: batch_size that is yielded by the iterator.
Note

FLOW and CARLA, which are supported by D4RL, have not been tested in this package yet.
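A usage sketch for the D4RL variant, assuming "hopper-medium-v0" is among the available dataset names (see d4rl_dataset_params() for the valid values):

```julia
using ReinforcementLearningDatasets

# Fetch and wrap a D4RL dataset; with the defaults this yields
# an infinite iterator over shuffled SARTS batches.
ds = dataset("hopper-medium-v0"; repo = "d4rl", batch_size = 128)
batch, _ = iterate(ds)  # one batch of 128 transitions
```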

ReinforcementLearningDatasets.deep_ope_d4rl_evaluate - Method
deep_ope_d4rl_evaluate(env_name, agent, epoch; <keyword arguments>)

Return the UnicodePlot for the given env_name, agent and epoch. Provide gym_env_name to specify the environment explicitly. γ is the discount factor, which defaults to 1. The seed of the env can be provided via env_seed.
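A hedged sketch; the env name, agent, epoch and keyword values below are assumptions for illustration only:

```julia
using ReinforcementLearningDatasets

# Evaluate a pretrained policy and render the result as a
# UnicodePlot in the terminal (argument values illustrative).
plt = deep_ope_d4rl_evaluate("hopper", "online", 10; γ = 0.99)
```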

ReinforcementLearningDatasets.rl_unplugged_atari_dataset - Method
rl_unplugged_atari_dataset(game, run, shards; <keyword arguments>)

Return a RingBuffer of AtariRLTransition batches which supports multi-threaded loading. Check out rl_unplugged_atari_params() for more info on arguments.

Arguments

• game::String: name of the dataset.
• run::Int: run number. Can be in 1:5.
• shards::Vector{Int}: the shards that are to be loaded.
• shuffle_buffer_size::Int=10_000: size of the shuffle_buffer used in loading AtariRLTransitions.
• tf_reader_bufsize::Int=1*1024*1024: the size of the buffer (bufsize) that is used internally in TFRecord.read.
• tf_reader_sz::Int=10_000: the size of the Channel (channel_size) that is returned by TFRecord.read.
• batch_size::Int=256: The number of samples within the batches that are returned by the Channel.
• n_preallocations::Int=nthreads()*12: the size of the buffer in the Channel that is returned.
Note

To enable reading records from multiple files concurrently, remember to set the number of threads correctly (see JULIA_NUM_THREADS).
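A usage sketch, assuming "Pong" is an available game name (see rl_unplugged_atari_params() for the valid values):

```julia
using ReinforcementLearningDatasets

# Load shards 1 and 2 of run 1. The result is a RingBuffer,
# so batches are obtained with take!.
buffer = rl_unplugged_atari_dataset("Pong", 1, [1, 2]; batch_size = 64)
batch = take!(buffer)  # one AtariRLTransition batch
```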

ReinforcementLearningDatasets.rl_unplugged_bsuite_dataset - Method
rl_unplugged_bsuite_dataset(game, shards, type; <keyword arguments>)

Return a RingBuffer of BSuiteRLTransition batches which supports multi-threaded loading. Check out bsuite_params() for more info on arguments.

Arguments

• game::String: name of the dataset. available datasets: cartpole, mountain_car and catch.
• shards::Vector{Int}: the shards that are to be loaded.
• type::String: can be full, full_train or full_valid.
• is_shuffle::Bool: determines if the dataset is shuffled.
• stochasticity::Float32: represents the stochasticity of the dataset. Can be in the range 0.0:0.1:0.5.
• shuffle_buffer_size::Int=10_000: size of the shuffle_buffer used in loading transitions.
• tf_reader_bufsize::Int=10_000: the size of the buffer (bufsize) that is used internally in TFRecord.read.
• tf_reader_sz::Int=10_000: the size of the Channel (channel_size) that is returned by TFRecord.read.

• batch_size::Int=256: The number of samples within the batches that are returned by the Channel.
• n_preallocations::Int=nthreads()*12: the size of the buffer in the Channel that is returned.
Note

To enable reading records from multiple files concurrently, remember to set the number of threads correctly (see JULIA_NUM_THREADS).

ReinforcementLearningDatasets.rl_unplugged_dm_dataset - Method
rl_unplugged_dm_dataset(game, shards; <keyword arguments>)

Returns a RingBuffer of NamedTuples containing SARTS batches, which supports multi-threaded loading, along with additional data. The data enclosed within :state and :next_state is a NamedTuple consisting of all the observations that are provided. Check out the keys in DM_LOCOMOTION_HUMANOID, DM_LOCOMOTION_RODENT and DM_CONTROL_SUITE_SIZE for supported datasets. Also check out dm_params() for more info on arguments.

Arguments

• game::String: name of the dataset.
• shards::Vector{Int}: the shards that are to be loaded.
• type::String: type of the dm env. Can be dm_control_suite, dm_locomotion_humanoid or dm_locomotion_rodent.
• is_shuffle::Bool: determines if the dataset is shuffled.
• stochasticity::Float32: represents the stochasticity of the dataset. Can be in the range 0.0:0.1:0.5.
• shuffle_buffer_size::Int=10_000: size of the shuffle_buffer used in loading transitions.
• tf_reader_bufsize::Int=10_000: the size of the buffer (bufsize) that is used internally in TFRecord.read.
• tf_reader_sz::Int=10_000: the size of the Channel (channel_size) that is returned by TFRecord.read.

• batch_size::Int=256: The number of samples within the batches that are returned by the Channel.
• n_preallocations::Int=nthreads()*12: the size of the buffer in the Channel that is returned.
Note

To enable reading records from multiple files concurrently, remember to set the number of threads correctly (see JULIA_NUM_THREADS).
