Experiment Manager¶
- class rl_zoo3.exp_manager.ExperimentManager(args, algo, env_id, log_folder, tensorboard_log='', n_timesteps=0, eval_freq=10000, n_eval_episodes=5, save_freq=-1, hyperparams=None, env_kwargs=None, trained_agent='', optimize_hyperparameters=False, storage=None, study_name=None, n_trials=1, max_total_trials=None, n_jobs=1, sampler='tpe', pruner='median', optimization_log_path=None, n_startup_trials=0, n_evaluations=1, truncate_last_trajectory=False, uuid_str='', seed=0, log_interval=0, save_replay_buffer=False, verbose=1, vec_env_type='dummy', n_eval_envs=1, no_optim_plots=False, device='auto', config=None, show_progress=False)[source]¶
Experiment manager: read the hyperparameters, preprocess them, create the environment and the RL model.
Please see train.py for the details of each argument.
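The class is normally driven from train.py. A hedged sketch of that driver pattern is below (the method names `setup_experiment`, `learn`, `save_trained_model`, and `hyperparameters_optimization` come from train.py; `run_experiment` and the import guard are only scaffolding so the sketch stands alone):

```python
import argparse

try:
    from rl_zoo3.exp_manager import ExperimentManager
except ImportError:  # rl_zoo3 may not be installed in every environment
    ExperimentManager = None

def run_experiment(args: argparse.Namespace):
    """Build the manager, set up the experiment, then train and save."""
    if ExperimentManager is None:
        return None
    exp_manager = ExperimentManager(args, args.algo, args.env, args.log_folder)
    results = exp_manager.setup_experiment()
    if results is not None:
        # setup_experiment() returns (model, saved_hyperparams) unless
        # hyperparameter optimization was requested
        model, saved_hyperparams = results
        exp_manager.learn(model)
        exp_manager.save_trained_model(model)
    else:
        exp_manager.hyperparameters_optimization()
    return exp_manager
```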
- create_envs(n_envs, eval_env=False, no_log=False)[source]¶
Create the environment and wrap it if necessary.
- Parameters:
  - n_envs (int) –
  - eval_env (bool) – Whether it is an environment used for evaluation or not
  - no_log (bool) – Do not log training when doing hyperparameter optimization (issue with writing the same file)
- Return type:
VecEnv
- Returns:
the vectorized environment, with appropriate wrappers
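To illustrate what the returned VecEnv does conceptually, here is a toy, stdlib-only sketch of `n_envs` environments stepped in lockstep with auto-reset. This is not Stable-Baselines3's VecEnv implementation; `CounterEnv` and `ToyVecEnv` are hypothetical names used only for this illustration:

```python
class CounterEnv:
    """Trivial env: the state counts steps; the episode ends after 3 steps."""

    def reset(self):
        self.t = 0
        return self.t

    def step(self, action):
        self.t += 1
        done = self.t >= 3
        # Observation, reward, done flag
        return self.t, float(action), done


class ToyVecEnv:
    """Batches several envs behind a single step() call, like a DummyVecEnv."""

    def __init__(self, env_fns):
        self.envs = [fn() for fn in env_fns]

    def reset(self):
        return [env.reset() for env in self.envs]

    def step(self, actions):
        obs, rews, dones = [], [], []
        for env, action in zip(self.envs, actions):
            o, r, d = env.step(action)
            if d:  # auto-reset on episode end, as SB3 vec envs do
                o = env.reset()
            obs.append(o)
            rews.append(r)
            dones.append(d)
        return obs, rews, dones


vec_env = ToyVecEnv([CounterEnv for _ in range(2)])
first_obs = vec_env.reset()  # one observation per sub-environment
```

The real `create_envs` additionally applies the wrappers configured in the hyperparameters (monitoring, normalization, frame-stacking, etc.) before returning the VecEnv.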