Modality-invariant Visual Odometry for Embodied Vision: Code Reproduction

Repository

https://github.com/memmelma/VO-Transformer/tree/dev

Environment setup

1. Clone the GitHub repository

git clone https://github.com/memmelma/VO-Transformer.git
cd VO-Transformer/

2. Create the environment
Create an environment.yml with the following content:

name: vot_nav
channels:
  - pytorch
  - conda-forge
dependencies:
  - python=3.7
  - cmake=3.14.0
  - numpy
  - numba
  - tqdm
  - tbb
  - joblib
  - h5py
  - pytorch=1.7.0
  - torchvision=0.8.0
  - cudatoolkit=11.0
  - pip
  - pip:
    - yacs
    - lz4
    - opencv-python
    - future
    - wandb
    - tensorboard==1.15
    - ifcfg
    - jupyter
    - gpustat
    - moviepy
    - imageio
    - einops
Then create and activate it:

conda env create -f environment.yml
conda activate vot_nav
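Before moving on, it can help to confirm that the key packages resolved correctly. A minimal stdlib-only check; the package list below is an assumption based on the environment.yml above, not something the repository ships:

```python
import importlib.util

def missing_packages(names):
    """Return the packages from `names` that are not importable in this env."""
    return [n for n in names if importlib.util.find_spec(n) is None]

# Names assumed from the environment.yml above; an empty list means all resolved.
print(missing_packages(["torch", "torchvision", "numpy", "yacs", "einops"]))
```

Any package printed here should be reinstalled before continuing.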

3. Install habitat-sim

conda install -c aihabitat -c conda-forge habitat-sim=0.1.7 headless

4. Install habitat-lab

git clone https://github.com/facebookresearch/habitat-lab.git -b v0.1.7
cd habitat-lab/
pip install -r requirements.txt
python setup.py develop --all

5. Install timm and other dependencies

cd ..
git clone https://github.com/rwightman/pytorch-image-models.git 
cd pytorch-image-models 
pip install -e .
pip install protobuf==3.20

6. Set up the datasets and models
Refer directly to the original GitHub repository.

Data generation

./generate_data.sh --act_type -1 --N_list 250000 25000 --name_list 'train' 'val'

This is a long process…
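The script stores pairs of observations together with relative-pose labels for odometry training. Conceptually, such a label is the second agent pose expressed in the first pose's egocentric frame; a minimal planar sketch of that computation (a hypothetical helper for illustration, not the repository's actual code):

```python
import math

def relative_pose(x1, z1, yaw1, x2, z2, yaw2):
    """Egocentric delta from pose 1 to pose 2 on the ground plane.

    Returns (forward, lateral, dyaw): the world-frame translation rotated
    into pose 1's frame, plus the heading change wrapped to [-pi, pi).
    """
    dx, dz = x2 - x1, z2 - z1
    c, s = math.cos(yaw1), math.sin(yaw1)
    forward = c * dx + s * dz      # motion along pose 1's heading
    lateral = -s * dx + c * dz     # motion perpendicular to the heading
    dyaw = (yaw2 - yaw1 + math.pi) % (2 * math.pi) - math.pi
    return forward, lateral, dyaw
```

The visual odometry model is then trained to predict this triple from the two observations alone.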

7. Run training

./start_vo.sh --config-yaml configs/vo/example_vo.yaml
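Training regresses the relative-pose labels generated above. A minimal sketch of a plausible supervision term; the exact loss form and weighting here are assumptions for illustration, not the paper's configuration:

```python
def vo_loss(pred, target, rot_weight=1.0):
    """Squared error on a (forward, lateral, dyaw) pose delta.

    `rot_weight` trades off rotation vs. translation error (assumed knob,
    not a parameter from the repository).
    """
    t_err = (pred[0] - target[0]) ** 2 + (pred[1] - target[1]) ** 2
    r_err = (pred[2] - target[2]) ** 2
    return t_err + rot_weight * r_err
```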

8. Run the evaluation script

./start_rl.sh --run-type eval --config-yaml configs/rl/evaluation/example_rl.yaml
