The habitat_baselines sub-package is NOT included upon installation by default. To install habitat_baselines, use the following commands instead:

```bash
pip install -r requirements.txt
python setup.py develop --all
```

This will also install additional requirements for each sub-module in habitat_baselines/, which are specified in requirements.txt files located in the sub-module directory.
Proximal Policy Optimization (PPO)
paper: https://arxiv.org/abs/1707.06347
code: the majority of the PPO implementation is taken from pytorch-a2c-ppo-acktr.
dependencies: PyTorch 1.0; for installation instructions, refer to pytorch.org
For training on sample data, follow the steps in the repository README: download the sample test scene data, extract it under the main repo (habitat-api/; extraction will create a data folder at habitat-api/data), and run the training command below.
train:

```bash
python -u habitat_baselines/run.py --exp-config habitat_baselines/config/pointnav/ppo_pointnav_example.yaml --run-type train
```

test:

```bash
python -u habitat_baselines/run.py --exp-config habitat_baselines/config/pointnav/ppo_pointnav_example.yaml --run-type eval
```

We also provide trained RGB, RGBD, and Blind PPO models. To use them, download the pre-trained PyTorch models from link, unzip them, and specify the model path here.
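A downloaded checkpoint is a standard PyTorch file, so it can be inspected before use. A minimal sketch, assuming a hypothetical path for wherever you unzip the model:

```python
import os

import torch

# Placeholder path: point this at the unzipped pre-trained model file.
checkpoint_path = "data/checkpoints/ppo_pointnav_rgb.pth"

if os.path.exists(checkpoint_path):
    # Load on CPU so no GPU is required just to inspect the checkpoint.
    ckpt = torch.load(checkpoint_path, map_location="cpu")
    # Checkpoints are typically dicts; list the top-level keys
    # (e.g. model weights, training config).
    if isinstance(ckpt, dict):
        print(sorted(ckpt.keys()))
else:
    print("checkpoint not found; download and unzip a pre-trained model first")
```

The `map_location="cpu"` argument keeps the load device-agnostic even if the checkpoint was saved on GPU.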
The habitat_baselines/config/pointnav/ppo_pointnav.yaml config has better hyperparameters for large-scale training and loads the Gibson PointGoal Navigation Dataset instead of the test scenes.
Change the task_config field in habitat_baselines/config/pointnav/ppo_pointnav.yaml to configs/tasks/pointnav_mp3d.yaml to train on the MatterPort3D PointGoal Navigation Dataset.
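A sketch of that edit, assuming the field sits at the top level of the config (the exact nesting may differ between versions):

```yaml
# habitat_baselines/config/pointnav/ppo_pointnav.yaml
# Switch the task from the Gibson dataset to MatterPort3D PointGoal navigation.
task_config: configs/tasks/pointnav_mp3d.yaml
```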
SLAM based
- Handcrafted agent baseline adopted from the paper "Benchmarking Classic and Learned Navigation in Complex 3D Environments".
Episode iterator options: Coming very soon
Tensorboard and video generation support
Enable tensorboard by setting the tensorboard_dir field in habitat_baselines/config/pointnav/ppo_pointnav.yaml.
Enable video generation in eval mode by setting video_option: tensorboard,disk (to display videos on tensorboard and to save them to disk, respectively).
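A sketch of those settings together (field names follow this README; video_dir is an assumed field name, and the exact nesting and value syntax may differ between versions):

```yaml
# Write tensorboard summaries to this directory during training.
tensorboard_dir: tb
# In eval mode, render episode videos to tensorboard and save them to disk.
video_option: ["tensorboard", "disk"]
# Directory for videos saved to disk (assumed field name).
video_dir: video_dir
```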
Generated navigation episode recordings should look like this on tensorboard:
