YanJiangJerry/tutorial11

Basic DQN

This repo contains some basic Deep Q-Network (DQN) examples.

Requirements

I use conda to manage virtual environments, so you will need Miniconda (or you can install all the packages manually). To install the dependencies with conda, run:

conda env create -f environment.yml
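The actual dependencies are defined by the repo's environment.yml. For orientation, a conda environment file for a CPU-only PyTorch setup typically looks something like the sketch below; the environment name, channels, and versions here are illustrative, not taken from the repo:

```yaml
name: basic-dqn        # illustrative name; the real one comes from environment.yml
channels:
  - pytorch
  - defaults
dependencies:
  - python=3.9
  - pytorch
  - cpuonly            # selects the CPU-only PyTorch build
  - pip
  - pip:
      - gym            # OpenAI Gym environments used by the scripts
```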

This installs the CPU-only build of PyTorch. If you have a GPU and want the GPU build, follow the official PyTorch installation instructions instead.

Running

python dqn_cartpole.py

With the default hyper-parameters, the agent should start learning at about 13k frames and reach an R100 (mean reward over the last 100 episodes) of 195 at about 40k frames.
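The R100 metric above is just a running mean over a fixed window of recent episode rewards. A minimal sketch of such a tracker (a hypothetical helper, not code from the repo) could look like this:

```python
from collections import deque

def make_r100_tracker(window=100, target=195.0):
    """Return an update function tracking the mean reward over the last `window` episodes."""
    rewards = deque(maxlen=window)  # old episodes fall off automatically

    def update(episode_reward):
        rewards.append(episode_reward)
        r100 = sum(rewards) / len(rewards)
        # Only count the environment as solved once a full window meets the target.
        solved = len(rewards) == window and r100 >= target
        return r100, solved

    return update

update = make_r100_tracker()
for _ in range(100):
    r100, solved = update(200.0)
print(r100, solved)  # 200.0 True
```

CartPole-v0 is conventionally considered solved when this 100-episode mean reaches 195, which is why that threshold appears above.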

The hyper-parameters are hard-coded to make the code easier to follow, but they should eventually be moved to a config file.
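One typical example of such a hard-coded hyper-parameter is the epsilon-greedy exploration schedule. The values and function names below are illustrative, not the ones actually used in dqn_cartpole.py:

```python
import random

# Illustrative values; the real hard-coded hyper-params live in dqn_cartpole.py.
EPS_START, EPS_END, EPS_DECAY_FRAMES = 1.0, 0.02, 10_000

def epsilon(frame):
    """Linearly anneal the exploration rate over the first EPS_DECAY_FRAMES frames."""
    fraction = min(frame / EPS_DECAY_FRAMES, 1.0)
    return EPS_START + fraction * (EPS_END - EPS_START)

def select_action(q_values, frame):
    """Epsilon-greedy: random action with probability epsilon(frame), else argmax-Q."""
    if random.random() < epsilon(frame):
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=q_values.__getitem__)
```

Moving constants like these into a config file is what dqn_gym.py below does.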

OpenAI Gym Environments

dqn_gym.py contains the same code, but the hyper-parameters are moved to a config file for easier experimentation. It provides DQN implementations with either one or two hidden layers (selected with the -l flag).
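The flag handling in dqn_gym.py is not shown here; a minimal argparse sketch consistent with the commands below (flag names mirror the usage, defaults are assumptions) might look like:

```python
import argparse

# Hypothetical sketch of dqn_gym.py's argument parsing, not the actual code.
parser = argparse.ArgumentParser(description="DQN on OpenAI Gym environments")
parser.add_argument("-e", "--env", default="CartPole-v1",
                    help="Gym environment id, e.g. CartPole-v1 or LunarLander-v2")
parser.add_argument("-l", "--layers", type=int, choices=(1, 2), default=1,
                    help="number of hidden layers in the Q-network")
args = parser.parse_args(["-e", "LunarLander-v2", "-l", "2"])
print(args.env, args.layers)  # LunarLander-v2 2
```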

It also saves the trained model in saved_models/.

python dqn_gym.py -e CartPole-v1 -l 1
python dqn_gym.py -e LunarLander-v2 -l 1
