⚠️ Alpha Release Notice
This is an alpha release for MOPS data generation. The public API might still change, and some bugs might be present.
This project builds on ManiSkill3, a simulation framework built on SAPIEN, which requires Python 3.10.
We recommend uv for environment and dependency management.
```bash
# Install uv (if not already installed)
curl -LsSf https://astral.sh/uv/install.sh | sh

# Create a Python 3.10 virtual environment and install all dependencies
uv venv --python 3.10
source .venv/bin/activate
uv pip install -e .
```
Download RoboCasa assets (required for the kitchen dataset):

```bash
python -m mani_skill.utils.download_asset RoboCasa
```

Download PartNet-Mobility assets (required for all datasets): get the assets from SAPIEN UCSD and place them under `data/partnet_mobility/`.
Expected directory structure:

```
mops-data/
├── data/
│   ├── partnet_mobility/   # PartNet-Mobility assets
│   └── mops_data/          # generated datasets (output)
├── scripts/                # dataset generation entry points
├── demos/                  # visualisation & exploration scripts
├── src/
│   └── mops_data/          # core library
└── ...
```
Three dataset variants are provided. Each has a debug mode (small, fast) and a full mode (production quality).
**Single-object**: isolated PartNet-Mobility objects rendered from multiple viewpoints with varied lighting.

```bash
# Debug run (quick sanity check)
python scripts/generate_single_object.py --debug

# Full generation
python scripts/generate_single_object.py

# Custom output path
python scripts/generate_single_object.py --output data/mops_data/my_single_obj
```

**Kitchen**: objects placed inside RoboCasa kitchen environments, rendered from table-level and overhead viewpoints.

```bash
python scripts/generate_kitchen.py --debug   # debug run
python scripts/generate_kitchen.py           # full generation
```

**Clutter**: cluttered tabletop scenes with multiple objects rendered from top-down viewpoints.

```bash
python scripts/generate_clutter.py --debug   # debug run
python scripts/generate_clutter.py           # full generation
```

Each script accepts `--output <path>` to override the output directory.
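For reference, the shared `--debug`/`--output` flags could be parsed with a small `argparse` setup like the following. This is a hypothetical sketch of the scripts' command-line interface, not the project's actual parser; the default output path is illustrative.

```python
import argparse

# Hypothetical sketch of the CLI shared by the generation scripts.
# Flag names follow the usage shown above; defaults are illustrative.
parser = argparse.ArgumentParser(description="Generate a MOPS dataset variant")
parser.add_argument("--debug", action="store_true",
                    help="small, fast run for a quick sanity check")
parser.add_argument("--output", default="data/mops_data/single_object",
                    help="override the output dataset directory")

# Parse an example command line instead of sys.argv for demonstration.
args = parser.parse_args(["--debug", "--output", "data/mops_data/my_run"])
print(args.debug, args.output)  # True data/mops_data/my_run
```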
For deeper customisation, edit the corresponding config class in `src/mops_data/generation/`:

| Dataset | Config class | Module |
|---|---|---|
| Single-object | `SingleObjectDatasetConfig` | `mops_data.generation.single_object_dataset` |
| Kitchen | `KitchenDatasetConfig` | `mops_data.generation.kitchen_dataset` |
| Clutter | `ClutterDatasetConfig` | `mops_data.generation.clutter_dataset` |
Key parameters shared by all configs (`BaseDatasetConfig`):

| Parameter | Description |
|---|---|
| `output_path` | Path to the output dataset directory (Parquet) |
| `image_size` | `(width, height)` in pixels |
| `target_train_images_per_set` | Training images per object/scene set |
| `target_test_images_per_set` | Test images per object/scene set |
| `min_assets_per_class` | Minimum assets required to include a class |
| `light_temp_range` | Kelvin range for light colour temperature variation |
| `light_intensity_range` | Range for light intensity variation |
| `obs_mode` | Observation channels (`rgb+depth+segmentation+normal`) |
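To make the parameter table concrete, here is a hypothetical dataclass sketch of `BaseDatasetConfig`. The field names follow the table above, but the types and defaults are illustrative assumptions, not the project's actual definition.

```python
from dataclasses import dataclass

@dataclass
class BaseDatasetConfig:
    # Field names mirror the parameter table; defaults are illustrative.
    output_path: str = "data/mops_data/out"          # Parquet output directory
    image_size: tuple = (640, 480)                   # (width, height) in pixels
    target_train_images_per_set: int = 100           # training images per set
    target_test_images_per_set: int = 20             # test images per set
    min_assets_per_class: int = 1                    # minimum assets per class
    light_temp_range: tuple = (3000.0, 6500.0)       # colour temperature (Kelvin)
    light_intensity_range: tuple = (0.5, 2.0)        # light intensity variation
    obs_mode: str = "rgb+depth+segmentation+normal"  # observation channels

# Override individual parameters while keeping the remaining defaults.
cfg = BaseDatasetConfig(image_size=(512, 512), target_train_images_per_set=10)
```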
For code development, install the dev dependencies and configure pre-commit hooks:
```bash
uv pip install -e ".[dev]"
pre-commit install
pre-commit run
```

This project builds on ManiSkill3, so much of its official documentation also applies to MOPS-data.
- Custom environments accept an optional `np_rng` keyword argument (a NumPy `Generator` object) for reproducible scene sampling.
- `AffordanceKitchenEnv-v1` supports an optional `preroll` argument to deterministically recreate a specific kitchen setup.
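The `np_rng` argument follows NumPy's `Generator` API. A minimal sketch of why seeding it makes sampling reproducible (environment construction itself is omitted here):

```python
import numpy as np

# Two Generators created from the same seed produce identical draws,
# which is what makes scene sampling via np_rng reproducible.
rng_a = np.random.default_rng(seed=42)
rng_b = np.random.default_rng(seed=42)

poses_a = rng_a.uniform(-1.0, 1.0, size=(3, 2))  # e.g. sampled object poses
poses_b = rng_b.uniform(-1.0, 1.0, size=(3, 2))

assert np.allclose(poses_a, poses_b)  # same seed -> same scene layout
```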
| Component | Description |
|---|---|
| `scripts/` | High-level dataset generation entry points |
| `demos/` | Visualisation and exploration scripts |
| `src/mops_data` | Core library |
| `mops_data.generation` | Dataset generation pipelines and configuration |
| `mops_data.asset_manager` | PartNet-Mobility annotation loading and asset handling |
| `mops_data.envs` | Custom ManiSkill3 environments for rendering |
| `mops_data.render` | Observation augmentation and shader configuration |
| `xr_teleop` | WebXR-based VR teleoperation controller (experimental) |
- 🎨 Photoreal Simulation: High-quality visual rendering for computer vision
- 🤖 Robotic Manipulation: Specialised environments for manipulation tasks
- 🏠 Kitchen Environments: Realistic household scenarios built on RoboCasa
- 📦 Object Diversity: Support for cluttered tabletop and single-object scenarios
- 🔧 Extensible: Modular pipeline and config design for custom datasets
If you use MOPS-data in your research, please cite:
```bibtex
@article{li2026mops,
  title={Multi-Objective Photoreal Simulation (MOPS) Dataset for Computer Vision in Robotic Manipulation},
  author={Maximilian Xiling Li and Paul Mattes and Nils Blank and Rudolf Lioutikov},
  year={2026}
}
```

We welcome contributions! Please see the development setup above and feel free to submit issues and pull requests.
This project is licensed under the MIT License - see the LICENSE file for details.