Habitat-Lab README Translation

I am planning to reproduce the paper Skill Transformer: A Monolithic Policy for Mobile Manipulation. Its experiments are run on Habitat-Lab, so the first step is to set up the Habitat-Lab environment; this post is a translation of the project's README.
Repository: https://github.com/facebookresearch/habitat-lab

1 Habitat-Lab Overview

Habitat-Lab is a modular high-level library for end-to-end development in embodied AI. It is designed to train agents to perform a wide variety of embodied AI tasks in indoor environments, as well as develop agents that can interact with humans in performing these tasks.
Towards this goal, Habitat-Lab is designed to support the following features:
1. Flexible task definitions: allowing users to train agents in a wide variety of single and multi-agent tasks (e.g. navigation, rearrangement, instruction following, question answering, human following), as well as define novel tasks.
2. Diverse embodied agents: configuring and instantiating a diverse set of embodied agents, including commercial robots and humanoids, specifying their sensors and capabilities.
3. Training and evaluating agents: providing algorithms for single and multi-agent training (via imitation or reinforcement learning, or no learning at all as in SensePlanAct pipelines), as well as tools to benchmark their performance on the defined tasks using standard metrics.
4. Human-in-the-loop interaction: providing a framework for humans to interact with the simulator, enabling them to collect embodied data or interact with trained agents.
Habitat-Lab uses Habitat-Sim as the core simulator. For documentation refer here.

2 Installation

2.1 Preparing conda env

Assuming you have conda installed, let’s prepare a conda env:

# We require python>=3.9 and cmake>=3.14
conda create -n habitat python=3.9 cmake=3.14.0
conda activate habitat

2.2 conda install habitat-sim

To install habitat-sim with bullet physics:

conda install habitat-sim withbullet -c conda-forge -c aihabitat

Note, for newer features added after the most recent release, you may need to install aihabitat-nightly. See Habitat-Sim’s installation instructions for more details.
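
For reference, the nightly build comes from the aihabitat-nightly conda channel; the command typically looks like the line below, but check Habitat-Sim's installation instructions for the exact, current form:

conda install habitat-sim withbullet -c conda-forge -c aihabitat-nightly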

2.3 pip install habitat-lab stable version

git clone --branch stable https://github.com/facebookresearch/habitat-lab.git
cd habitat-lab
pip install -e habitat-lab  # install habitat_lab

2.4 Install habitat-baselines

The command above installs only the core of Habitat-Lab. To include habitat_baselines along with all additional requirements, use the command below after installing habitat-lab:

pip install -e habitat-baselines  # install habitat_baselines
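
A quick sanity check (not part of the README) that both packages are importable after the two editable installs:

python -c "import habitat, habitat_baselines; print('habitat-lab OK')"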

3 Testing

3.1 Let’s download some 3D assets using Habitat-Sim’s python data download utility:

Download (testing) 3D scenes:

python -m habitat_sim.utils.datasets_download --uids habitat_test_scenes --data-path data/

Note that these testing scenes do not provide semantic annotations.
Download point-goal navigation episodes for the test scenes:

python -m habitat_sim.utils.datasets_download --uids habitat_test_pointnav_dataset --data-path data/

3.2 Non-interactive testing:

Test the Pick task: run the example Pick task script

python examples/example.py

which uses habitat-lab/habitat/config/benchmark/rearrange/skills/pick.yaml for configuration of the task and agent. The script roughly does this:

import gym
import habitat.gym

# Load embodied AI task (RearrangePick) and a pre-specified virtual robot
env = gym.make("HabitatRenderPick-v0")
observations = env.reset()

terminal = False

# Step through environment with random actions
while not terminal:
    observations, reward, terminal, info = env.step(env.action_space.sample())

To modify some of the configurations of the environment, you can also use the habitat.gym.make_gym_from_config method, which allows you to create a habitat environment using a configuration.

config = habitat.get_config(
  "benchmark/rearrange/skills/pick.yaml",
  overrides=["habitat.environment.max_episode_steps=20"]
)
env = habitat.gym.make_gym_from_config(config)

If you want to know more about what the different configuration key overrides do, you can use this reference.
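
The object returned by make_gym_from_config is an ordinary Gym environment, so it can be driven the same way as the one from gym.make above; a minimal sketch (with the 20-step override, the loop should end after at most 20 steps):

observations = env.reset()
terminal = False

# Step with random actions until the (shortened) episode ends
while not terminal:
    observations, reward, terminal, info = env.step(env.action_space.sample())
env.close()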

See examples/register_new_sensors_and_measures.py for an example of how to extend habitat-lab from outside the source code.
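
As a taste of what that example covers, the core pattern is to subclass Measure and register it with habitat's registry. The sketch below is a hypothetical step-counting measure, not taken from the example file; the Measure interface and registry decorator are as found in habitat.core, but consult the bundled script for the full, current recipe:

from typing import Any

from habitat.core.embodied_task import Measure
from habitat.core.registry import registry


@registry.register_measure
class StepCountExample(Measure):
    # Hypothetical measure that counts the steps taken in an episode

    def _get_uuid(self, *args: Any, **kwargs: Any) -> str:
        # Key under which the metric appears in the measurements / info dict
        return "step_count_example"

    def reset_metric(self, *args: Any, **kwargs: Any) -> None:
        # Called whenever the environment is reset
        self._metric = 0

    def update_metric(self, *args: Any, **kwargs: Any) -> None:
        # Called after every step taken in the environment
        self._metric += 1

Registering the class alone is not enough: the measure also has to be listed among the task's measurements in the config before it shows up, which is the part the example script walks through.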

3.3 Interactive testing:

Using your keyboard and mouse to control a Fetch robot in a ReplicaCAD environment:

# Pygame for interactive visualization, pybullet for inverse kinematics
pip install pygame==2.0.1 pybullet==3.0.4

# Interactive play script
python examples/interactive_play.py --never-end

Use the I/J/K/L keys to move the robot base forward/left/backward/right, W/A/S/D to move the arm end-effector forward/left/backward/right, and E/Q to move the arm up/down. The arm can be difficult to control via end-effector control. More details are in the documentation. Try to move the base and the arm to touch the red bowl on the table. Have fun!
Note: Interactive testing currently fails on Ubuntu 20.04 with the error: X Error of failed request: BadAccess (attempt to access private resource denied). We are working on fixing this, and will update the instructions once we have a fix. The script works without errors on MacOS.

4 Debugging an environment issue

Our vectorized environments are very fast, but they are not very verbose. When using VectorEnv some errors may be silenced, resulting in the process hanging or in multiprocessing errors that are hard to interpret. We recommend setting the environment variable HABITAT_ENV_DEBUG to 1 when debugging (export HABITAT_ENV_DEBUG=1), as this will use the slower but more verbose ThreadedVectorEnv class. Do not forget to reset HABITAT_ENV_DEBUG (unset HABITAT_ENV_DEBUG) when you are done debugging, since VectorEnv is much faster than ThreadedVectorEnv.
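
The same workflow, condensed into commands:

# While debugging: force the slower but more verbose ThreadedVectorEnv
export HABITAT_ENV_DEBUG=1
# ... run the training / evaluation script and inspect the error ...
# When done: switch back to the much faster VectorEnv
unset HABITAT_ENV_DEBUG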

5 Documentation

Browse the online Habitat-Lab documentation and the extensive tutorial on how to train your agents with Habitat. For Habitat 2.0, use this quickstart guide.

6 Datasets

Common task and episode datasets used with Habitat-Lab.

7 Baselines

Habitat-Lab includes reinforcement learning (via PPO) baselines. For running PPO training on sample data and more details, refer to habitat_baselines/README.md.
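
At the time of writing, the habitat_baselines README launches PPO training roughly like the line below; treat the config name as a placeholder that may differ between versions and check that README for the exact command:

python -u -m habitat_baselines.run --config-name=pointnav/ppo_pointnav_example.yaml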

8 ROS-X-Habitat

ROS-X-Habitat (https://github.com/ericchen321/ros_x_habitat) is a framework that bridges the AI Habitat platform (Habitat Lab + Habitat Sim) with other robotics resources via ROS. Compared with Habitat-PyRobot, ROS-X-Habitat places emphasis on 1) leveraging Habitat Sim v2’s physics-based simulation capability and 2) allowing roboticists to access simulation assets from ROS. The work has also been made public as a paper.

Note that ROS-X-Habitat was developed, and is maintained by the Lab for Computational Intelligence at UBC; it has not yet been officially supported by the Habitat Lab team. Please refer to the framework’s repository for docs and discussions.
