PointCloudVR is a modular system for streaming and rendering 3D point cloud data in VR using Meta Quest headsets. The system supports two different pipelines depending on where the point cloud is generated: on-device GPU reconstruction from RGB-D streams or precomputed point cloud streaming.
*(Figures: Latency; Push-T Task)*
The architecture consists of two independent modes.

Mode A (RGB-D Streaming):
- The streamer sends RGB + Depth images to the headset.
- The Quest reconstructs the point cloud in real time using GPU compute shaders.
- Rendering is fully GPU-accelerated inside Unity.
Pipeline:
Camera → Python Streamer → (RGB + Depth) → Quest → Compute Shader → Point Cloud Rendering
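Since the Mode A streamer only resizes and transmits, its framing logic is small. The sketch below shows stride-based downsampling plus a simple length-prefixed binary message; the wire format, function names, and dtypes (uint8 RGB, uint16 millimeter depth) are assumptions for illustration, not the project's actual protocol:

```python
import struct
import numpy as np

def pack_rgbd_frame(rgb: np.ndarray, depth: np.ndarray, stride: int = 2) -> bytes:
    """Downsample an RGB-D frame by striding and pack it into a
    hypothetical header + payload message for the headset."""
    rgb_small = rgb[::stride, ::stride]       # (H/s, W/s, 3) uint8
    depth_small = depth[::stride, ::stride]   # (H/s, W/s) uint16, millimeters
    h, w = depth_small.shape
    header = struct.pack("<HH", h, w)         # little-endian height, width
    return header + rgb_small.tobytes() + depth_small.tobytes()

def unpack_rgbd_frame(msg: bytes):
    """Inverse of pack_rgbd_frame, as a Quest-side receiver would run it."""
    h, w = struct.unpack("<HH", msg[:4])
    rgb_len = h * w * 3
    rgb = np.frombuffer(msg[4:4 + rgb_len], dtype=np.uint8).reshape(h, w, 3)
    depth = np.frombuffer(msg[4 + rgb_len:], dtype=np.uint16).reshape(h, w)
    return rgb, depth
```

The `stride` parameter is one way to realize the resolution control mentioned below: halving resolution in each axis cuts payload size by roughly 4x.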
Key characteristics:
- Lightweight streamer (only image resizing + transmission)
- Flexible resolution control via downsampling
- High performance rendering on Quest GPU
- No multi-view fusion on streamer side
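On the headset, the reconstruction compute shader back-projects every depth pixel through the camera intrinsics. A CPU/numpy sketch of that per-pixel math (the intrinsics `fx, fy, cx, cy` are hypothetical parameters, and the real work runs per-thread on the Quest GPU):

```python
import numpy as np

def backproject(depth: np.ndarray, fx: float, fy: float,
                cx: float, cy: float) -> np.ndarray:
    """Pinhole back-projection: depth map (meters) -> Nx3 points.
    Mirrors on the CPU what the compute shader does per pixel."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # drop invalid (zero-depth) pixels
```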
Mode B (Point Cloud Streaming):
- The streamer generates the complete point cloud on the host machine.
- The Quest only receives already-processed 3D points.
- Rendering is done via optimized GPU shaders in Unity.
Pipeline:
Camera → Python Streamer → Point Cloud Generation → Stream → Quest → GPU Point Cloud Renderer
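In this mode the host serializes finished points instead of images. A minimal sketch of one plausible buffer layout (point count, then float32 positions, then uint8 colors); this layout is an assumption for illustration, not PointCloudVR's actual protocol:

```python
import struct
import numpy as np

def pack_point_cloud(xyz: np.ndarray, rgb: np.ndarray) -> bytes:
    """Serialize a host-generated point cloud: count header,
    float32 xyz block, uint8 rgb block."""
    n = len(xyz)
    return (struct.pack("<I", n)
            + xyz.astype(np.float32).tobytes()
            + rgb.astype(np.uint8).tobytes())

def unpack_point_cloud(msg: bytes):
    """Quest-side inverse: split the message back into xyz and rgb arrays."""
    n = struct.unpack("<I", msg[:4])[0]
    xyz = np.frombuffer(msg[4:4 + 12 * n], dtype=np.float32).reshape(n, 3)
    rgb = np.frombuffer(msg[4 + 12 * n:], dtype=np.uint8).reshape(n, 3)
    return xyz, rgb
```

A flat float32 position block like this can be uploaded directly into a GPU buffer on the receiving side without per-point parsing.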
Key characteristics:
- More computational flexibility on host machine
- Supports:
  - segmentation
  - filtering & denoising
  - point cloud fusion (multi-view support)
- More efficient runtime on Quest
- Enables multi-camera setups (multi-view possible)
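As one concrete example of the host-side filtering this mode enables, here is a voxel-grid downsampling sketch that keeps one centroid per occupied voxel; the function name and approach are illustrative, not taken from the project:

```python
import numpy as np

def voxel_downsample(xyz: np.ndarray, voxel: float) -> np.ndarray:
    """Reduce a point cloud by averaging all points that fall
    into the same voxel of edge length `voxel`."""
    keys = np.floor(xyz / voxel).astype(np.int64)     # integer voxel index per point
    _, inv, counts = np.unique(keys, axis=0,
                               return_inverse=True, return_counts=True)
    inv = inv.ravel()                                 # map each point to its voxel
    sums = np.zeros((counts.size, 3))
    np.add.at(sums, inv, xyz)                         # accumulate per-voxel sums
    return sums / counts[:, None]                     # centroid per voxel
```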
Shared features:
- Real-time streaming with minimal latency
- GPU-accelerated rendering in Unity (compute shaders + instancing)
- Modular pipeline separation (capture → process → render)
- Support for scalable point cloud resolution
Both modes use a shared rendering approach:
- GPU instancing / compute shaders
- Efficient point cloud buffers
- Real-time updates via streaming interface
| Mode | Bottleneck | Strength |
|---|---|---|
| RGB-D Streaming | Network + reconstruction shader | Low bandwidth, simple pipeline |
| Point Cloud Streaming | CPU generation on host | High flexibility, multi-view capable |
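A back-of-envelope calculation illustrates why RGB-D streaming sits in the low-bandwidth column. The numbers below (640x480 at 30 FPS, uncompressed, one point per pixel) are assumptions; real figures depend on resolution, filtering, and compression:

```python
# Per-frame payload sizes under the assumed formats.
W, H, FPS = 640, 480, 30
rgbd_bytes = W * H * (3 + 2)        # uint8 RGB + uint16 depth per pixel
cloud_bytes = W * H * (12 + 3)      # float32 xyz + uint8 rgb per point

rgbd_mbps = rgbd_bytes * FPS * 8 / 1e6
cloud_mbps = cloud_bytes * FPS * 8 / 1e6
print(f"RGB-D:       {rgbd_mbps:.0f} Mbit/s")
print(f"Point cloud: {cloud_mbps:.0f} Mbit/s")
```

Under these assumptions the uncompressed point cloud stream is 3x larger than the RGB-D stream, which is why Mode B benefits most from host-side downsampling before transmission.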
Mode A (RGB-D):
- Minimal streamer complexity
- No multi-view fusion on host
- Reconstruction happens on Quest GPU

Mode B (Point Cloud):
- Full control over point cloud generation
- Supports segmentation, filtering, merging
- More efficient rendering pipeline on Quest
Further details can be found in Docu
MIT License

