Rendering produces 2D images from 3D scene representations, yet how continuous variations in camera pose and scenes influence these images—and, consequently, downstream visual models—remains underexplored. We introduce <strong>Abstract Rendering</strong>, a framework that computes provable bounds on all images rendered under continuously varying camera poses and scenes. The resulting abstract image, expressed as a set of constraints over the image matrix, enables rigorous uncertainty propagation through downstream neural networks and thereby supports certification of model behavior under realistic 3D semantic perturbations, far beyond traditional pixel-level noise models.
</p><br>
<p style="text-align: left;">
Our computed abstract images achieve up to <strong>3% over-approximation error</strong> relative to sampling-based results (the baseline). Through experiments on classification (ResNet), object detection (YOLO), and pose estimation (GATENet) tasks, we demonstrate that abstract rendering enables formal certification of downstream models under realistic 3D variations—an essential step toward safety-critical vision systems.
</p>
<p style="text-align: left;">The lower-bound and upper-bound images are guaranteed to contain every image that can be rendered over the given range of camera (scene) movement, including the reference images.</p>
<p style="text-align: left;">The verified range of camera movement for which the downstream neural network (ResNet classifier or GateNet pose estimator) is certified to work is shown in green; ranges where it may fail are shown in red.</p>
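<p style="text-align: left;">As a minimal sketch of what the bound images mean, the check below verifies that sampled renders lie inside the abstract image's per-pixel interval, and measures the slack between the bounds and the sampled envelope (the paper reports at most 3%). The function names and the toy arrays are illustrative, not the paper's implementation; in practice the bounds come from the abstract rendering procedure and the samples from a renderer.</p>

```python
import numpy as np

def check_containment(lower, upper, samples):
    """True iff every sampled image lies within the per-pixel interval [lower, upper]."""
    samples = np.asarray(samples)
    return bool(np.all(samples >= lower) and np.all(samples <= upper))

def over_approx_error(lower, upper, samples):
    """Mean per-pixel gap between the abstract bounds and the sampled
    envelope, as a fraction of the pixel range."""
    samples = np.asarray(samples)
    env_lo, env_hi = samples.min(axis=0), samples.max(axis=0)
    slack = (env_lo - lower) + (upper - env_hi)
    return float(slack.mean())

# Toy example: two 2x2 grayscale "renders" with pixel values in [0, 1].
samples = np.array([[[0.2, 0.5], [0.4, 0.6]],
                    [[0.3, 0.4], [0.5, 0.7]]])
lower = samples.min(axis=0) - 0.01   # abstract bounds, slightly looser
upper = samples.max(axis=0) + 0.01
print(check_containment(lower, upper, samples))            # True
print(round(over_approx_error(lower, upper, samples), 3))  # 0.02
```

<p style="text-align: left;">A tighter abstract image drives the slack toward zero; soundness only requires that containment never fails for any render in the camera range.</p>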
title={Abstract Rendering: Certified Rendering Under 3D Semantic Uncertainty},
author={Ji, Chenxi and Li, Yangge and Zhong, Xiangru and Zhang, Huan and Mitra, Sayan},
booktitle={The Thirty-ninth Annual Conference on Neural Information Processing Systems}
}
</pre>
<br><br>
<h2>Acknowledgments and Funding</h2>
<p style="text-align: left;">
Chenxi Ji, Yangge Li, and Sayan Mitra are supported by a research grant from The Boeing Company and NSF (FMITF-2525287). Huan Zhang and Xiangru Zhong are supported in part by the AI2050 program at Schmidt Sciences (AI2050 Early Career Fellowship) and NSF (IIS SLES-2331967, CCF FMITF-2525287).
</p>
<br><br>
<h2>Reference</h2>
<p style="text-align: left;">
[1] Bernhard Kerbl, Georgios Kopanas, Thomas Leimkühler, and George Drettakis. 3D Gaussian Splatting for real-time radiance field rendering. <i>ACM Trans. Graph.</i>, 42(4), Article 139, 2023.
</p>