Gym Documentation
Gym: A toolkit for developing and comparing reinforcement learning algorithms
https://github.com/openai/gym
Installing Atari environments for Gym (Windows and Linux) - 李子树_'s blog (CSDN)
Atari environments used to come from atari-py but are now provided through ale-py, and so there's no difference between using ale-py and Gym.
pip install gym
pip install gym[atari]
pip install gym[all]
import gym
env = gym.make('CartPole-v0')
env.reset()
for _ in range(1000):
    env.render()
    env.step(env.action_space.sample())  # take a random action
env.close()
The id passed to gym.make(id) follows the naming convention [username/](env-name)-v(version).
from gym import envs
for env in envs.registry.all():
    print(env.id)
The output is a list of all registered environment IDs.
Not every ID in the list is actually usable (this mainly affects the Atari environments, which are registered by name but may lack the corresponding ROM package).
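To see which registered IDs can actually be constructed, here is a small sketch of my own (not from the official docs): it simply tries gym.make on the Breakout-related IDs and reports the ones that fail, usually because the ROM is missing.

import gym
from gym import envs

# A hedged sketch: try to construct a handful of registered Atari IDs and
# report which ones fail (typically with a missing-ROM error).
candidates = [spec.id for spec in envs.registry.all() if 'Breakout' in spec.id]
for env_id in candidates:
    try:
        gym.make(env_id).close()
        print(env_id, 'OK')
    except Exception as exc:
        print(env_id, 'unusable:', exc)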
As of Gym v0.20 and onwards all Atari environments are provided via ale-py. We do recommend using the new v5 environments in the ALE namespace:
import gym
env = gym.make('ALE/Breakout-v5')
Starting from 0.20, Gym switched over to ale-py. Here we test how Gym behaved back at version 0.19:
pip install gym==0.19.0
pip install atari_py==0.2.6
Gym 0.19 does not differ much from the latest version.
Installing atari_py 0.2.6 places the required ROMs under the package directory.
But running a test produces an error:
Could not find module ‘D:\02 Python Envs\old_gym_test\lib\site-packages\atari_py\ale_interface\ale_c.dll’ (or one of its dependencies). Try using the full path with constructor syntax.
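For reference, the error above is triggered by nothing more exotic than creating an Atari environment; a minimal reproduction (my own sketch) looks like this:

import gym

# On Windows with gym 0.19 + atari_py 0.2.6, constructing any Atari
# environment makes atari_py load ale_c.dll, which is where the error appears.
env = gym.make('Breakout-v0')
env.reset()
env.close()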
Searching online, there are three main approaches to solving this:
Could not find module \atari_py\ale_interface\ale_c.dll (or one of its dependencies)
Install some additional components on Windows
‘module could not be found’ when running gym.make for atari environment. · Issue #1726 · openai/gym
Install it with conda:
conda install -c conda-forge atari_py
Download the missing file into the folder where it is needed
Overall, the main differences between the old gym + atari-py combination and the new gym + ale-py combination are:
With the new combination, you have to download the Atari ROMs yourself.
With the new Gym, Atari games (whether v5 or not) must follow ale-py's rendering convention: specify render_mode when the environment is created, and do not call render() later in the program.
# New version
import gym
env = gym.make('Breakout-v0', render_mode='human')
env.reset()
for _ in range(10000):
    result = env.step(env.action_space.sample())  # take a random action
env.close()
# Old version
import gym
env = gym.make('Breakout-v0')
env.reset()
for _ in range(10000):
    env.render()
    result = env.step(env.action_space.sample())  # take a random action
env.close()
Arcade-Learning-Environment/docs at master · mgbellemare/Arcade-Learning-Environment
It is honestly quite hard to find: the package does not have a polished official website.
The documentation above contains the following sample code, which shows how to use ale-py by itself:
import sys
from random import randrange
from ale_py import ALEInterface

def main(rom_file):
    ale = ALEInterface()
    ale.setInt('random_seed', 123)
    ale.loadROM(rom_file)

    # Get the list of legal actions
    legal_actions = ale.getLegalActionSet()
    num_actions = len(legal_actions)

    total_reward = 0
    while not ale.game_over():
        a = legal_actions[randrange(num_actions)]
        reward = ale.act(a)
        total_reward += reward
    print(f'Episode ended with score: {total_reward}')

if __name__ == '__main__':
    if len(sys.argv) < 2:
        print(f"Usage: {sys.argv[0]} rom_file")
        sys.exit()
    rom_file = sys.argv[1]
    main(rom_file)
The ALE now natively supports OpenAI Gym. Although you could continue using the legacy environments as-is, we recommend using the new v5 environments:
import gym
import ale_py
env = gym.make('ALE/Breakout-v5')
When creating an environment, calling render is no longer recommended; the suggested approach is the following:
import gym
env = gym.make('Breakout-v0', render_mode='rgb_array')
env.reset()
_, _, _, metadata = env.step(0)
assert 'rgb_array' in metadata
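Building on that snippet, here is a hedged sketch of actually using the returned frame; the 'rgb_array' key comes from the assert above, while saving the image with Pillow is my own assumption.

import gym
from PIL import Image   # assumption: Pillow is available for saving the frame

env = gym.make('Breakout-v0', render_mode='rgb_array')
env.reset()
_, _, _, metadata = env.step(0)      # the info dict carries the frame
frame = metadata['rgb_array']        # HxWx3 uint8 RGB array
print(frame.shape, frame.dtype)
Image.fromarray(frame).save('breakout_frame.png')
env.close()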
The render_mode argument supports either human | rgb_array. If rgb_array is specified we'll return the full RGB observation in the metadata dictionary returned after an agent step.
When specifying an environment ID, the traditional scheme of suffix plus version is still available; the new versions no longer use suffixes, and what a suffix used to express should now be passed as keyword arguments instead.
The legacy game IDs, environment suffixes -NoFrameskip, -Deterministic, and versioning -v0, -v4 remain unchanged. We do suggest that users transition to the -v5 versioning which is contained in the ALE namespace.
With the new -v5 versioning we don't support any ID suffixes such as -NoFrameskip or -Deterministic; instead you should configure the environment through keyword arguments as such:
import gym
env = gym.make('ALE/Breakout-v5',
    obs_type='rgb',                   # ram | rgb | grayscale
    frameskip=5,                      # frame skip
    mode=0,                           # game mode, see Machado et al. 2018
    difficulty=0,                     # game difficulty, see Machado et al. 2018
    repeat_action_probability=0.25,   # Sticky action probability
    full_action_space=True,           # Use all actions
    render_mode=None                  # None | human | rgb_array
)
Accepted names include:
Pong-v0
PongNoFrameskip-v0
PongDeterministic-v0
Pong-v4
PongNoFrameskip-v4
PongDeterministic-v4
ALE/Pong-v5
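As a quick sanity check (my own sketch, not from the docs), each of these spellings should construct a Pong environment once the ROMs are installed; they differ only in their defaults for frame skipping and sticky actions.

import gym

# A hedged check: every accepted Pong ID should construct without error.
for env_id in ['Pong-v0', 'PongNoFrameskip-v0', 'PongDeterministic-v0',
               'Pong-v4', 'PongNoFrameskip-v4', 'PongDeterministic-v4',
               'ALE/Pong-v5']:
    env = gym.make(env_id)
    print(env_id, '->', env.unwrapped.spec.id)
    env.close()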
The games supported by ale-py are listed in the documentation above.
ale-py ships with one built-in game, Tetris. Using the code below, together with pygame, the game can be rendered on screen.
import gym
env = gym.make('ALE/Tetris-v5', render_mode='human')
env.reset()
for _ in range(1000):
    result = env.step(env.action_space.sample())  # take a random action
env.close()
There are the following ways to install more ROMs:
Use the official tool ale-import-roms
When ale-py is installed, an ale-import-roms.exe is placed in the same folder as pip. Once the ROMs are downloaded, run ale-import-roms roms/ on the command line to install them (the files with the .bin extension), where roms/ is the directory that holds the ROM files.
There is a third-party site hosting a large number of ROMs, and some tutorials recommend downloading from it:
Atari 2600 VCS ROM Collection
Use the third-party package AutoROM
pip install autorom
AutoROM --accept-license
This comes from the following Q&A:
Error in importing environment OpenAI Gym
The README on GitHub explains how to use AutoROM quite clearly:
https://github.com/Farama-Foundation/AutoROM
One problem: after some time had passed, the ROMs suddenly could not be found any more?
Manually moving AutoROM's files into ale-py's roms directory made them discoverable again.
Uninstalling AutoROM, then reinstalling it and downloading the ROMs to the default location, also lets the system recognize them.
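When the ROMs "disappear" like this, a quick way to check whether ale-py can currently see them is simply to try constructing an environment; this is a small diagnostic sketch of my own:

import gym

# If ale-py cannot locate the ROM, gym.make fails with an error naming the
# missing game; otherwise the environment is created normally.
try:
    gym.make('ALE/Breakout-v5').close()
    print('Breakout ROM found by ale-py')
except Exception as exc:
    print('Breakout ROM not found:', exc)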
Arcade-Learning-Environment/visualization.md at master · mgbellemare/Arcade-Learning-Environment