<h4 class="institute">University of Illinois, Urbana-Champaign</h4>
<h2>Abstract</h2>
<p style="text-align: justify;">
Rendering generates a two-dimensional image from the description of a three-dimensional scene and the camera parameters. To analyze how <em>visual models</em>, such as classifiers and pose estimators, and controllers behave with respect to changes in the scene or in the camera parameters, we need to propagate those changes through the rendering process.
This is the problem of <strong>Abstract Rendering</strong>, introduced and tackled in this paper. Abstract rendering computes <em>provable pixel color bounds</em> on all images that can be produced under variations in the scene and in the camera position. The resulting sets of images, called <strong>abstract images</strong>, can then be propagated through neural network verification tools such as <a href="https://github.com/Verified-Intelligence/alpha-beta-CROWN">CROWN</a> to certify visual models under realistic semantic perturbations. For example, we can certify that a ResNet classifier will continue to detect and classify an airplane or a car correctly as the camera moves within a certain range of positions.
</p><br>
<p style="text-align: justify;">
In this paper, we present an abstract rendering framework for scenes represented by <em><a href="https://en.wikipedia.org/wiki/Gaussian_splatting">Gaussian splats</a> (3DGS)</em> and <em><a href="https://en.wikipedia.org/wiki/Neural_radiance_field">NeRFs</a></em>. Our approach computes piecewise-linear relational abstractions for each of the operations appearing in the standard rendering algorithms for 3DGS and NeRFs. By composing these relational abstractions, we obtain abstract rendering algorithms that compute pixel color bounds, that is, abstract images. Experiments on classification (ResNet), object detection (YOLO), and pose estimation (GateNet) show that our abstract images provide guaranteed over-approximations with no more than 3% conservativeness error, enabling practical certification for safety-critical vision systems.
</p>
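As a toy illustration of a piecewise-linear relational abstraction for one rendering operation (a sketch under our own assumptions, not the paper's implementation): the exponential that appears in a Gaussian splat's opacity is convex, so over an input interval a tangent line is a sound lower bound and the chord a sound upper bound. The function name below is hypothetical.

```python
import math

def exp_linear_bounds(l, u):
    """Sound linear bounds for exp(x) on [l, u] (illustrative sketch).

    exp is convex, so the chord through (l, exp(l)) and (u, exp(u))
    over-approximates it and the tangent at the midpoint
    under-approximates it: lo(x) <= exp(x) <= hi(x) on [l, u].
    Returns (a_lo, b_lo, a_hi, b_hi) with lo(x) = a_lo*x + b_lo, etc.
    """
    if u == l:
        y = math.exp(l)
        return 0.0, y, 0.0, y
    # Upper bound: the chord (secant line) between the endpoints.
    a_hi = (math.exp(u) - math.exp(l)) / (u - l)
    b_hi = math.exp(l) - a_hi * l
    # Lower bound: the tangent at the interval midpoint.
    m = 0.5 * (l + u)
    a_lo = math.exp(m)          # slope = derivative of exp at m
    b_lo = math.exp(m) - a_lo * m
    return a_lo, b_lo, a_hi, b_hi

# Soundness check on a sample of the interval (tolerance absorbs
# floating-point rounding).
a_lo, b_lo, a_hi, b_hi = exp_linear_bounds(-1.0, 0.5)
for i in range(101):
    x = -1.0 + 1.5 * i / 100
    assert a_lo * x + b_lo <= math.exp(x) + 1e-9
    assert math.exp(x) <= a_hi * x + b_hi + 1e-9
```

Composing bounds of this shape for every operation in the rendering pipeline is what yields end-to-end pixel color bounds.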
<p style="text-align: left;">The lower-bound and upper-bound images are guaranteed to contain every image that can be rendered from the given range of camera (scene) movement, including the reference image.</p>
<p style="text-align: justify;">
An abstract image is a continuous range of images; equivalently, it is a matrix with a range (or interval) of possible colors for each pixel. Here we visualize abstract images by the lower and upper bounds of the pixel colors that arise from all possible changes in the scene or camera. The camera-pose slider fixes a specific camera position within the scene, while the perturbation-range slider adjusts the extent of camera movement around that position. The reference image is the rendering from the exact camera pose without any perturbation. Soundness of abstract rendering ensures that the abstract image, represented by the lower- and upper-bound images, contains all possible renderings from the specified range of camera movements.
</p>
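The per-pixel interval view above can be sketched in a few lines (an illustrative data structure with hypothetical names, not the paper's code):

```python
import numpy as np

# A minimal sketch: an abstract image stores, for every pixel and
# color channel, an interval [lower, upper] of possible values.
rng = np.random.default_rng(0)
H, W = 4, 4

# Hypothetical per-pixel bounds, as abstract rendering would produce.
lower = rng.uniform(0.0, 0.4, size=(H, W, 3))
upper = lower + rng.uniform(0.0, 0.4, size=(H, W, 3))

def contains(lower, upper, image):
    """Soundness check: a concrete rendering must lie inside the bounds."""
    return bool(np.all((lower <= image) & (image <= upper)))

# A concrete image sampled inside the bounds is contained...
inside = lower + 0.5 * (upper - lower)
assert contains(lower, upper, inside)
# ...while an image exceeding the upper bound somewhere is not.
outside = inside.copy()
outside[0, 0, 0] = upper[0, 0, 0] + 0.1
assert not contains(lower, upper, outside)
```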
<h2>Visual Model Verification with Abstract Rendering</h2>
<p style="text-align: justify;">
Abstract rendering can be used to certify the robustness of visual models under 3D semantic uncertainty. For example, we can certify that a ResNet classifier will continue to detect and classify an object correctly as the camera moves around it. More usefully, we can identify the range of camera view angles for which the classifier correctly classifies the object (shown in green) and the range where the classification may be incorrect (shown in red). A similar analysis applies to pose estimation models such as GateNet.
More details can be found in the paper. Cite this work as:
</p>
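The certification step can be illustrated with a toy interval-propagation check (the actual pipeline uses CROWN on real networks; the linear classifier and all names below are hypothetical): the classifier is certified on the whole abstract image iff the true class's worst-case score beats every other class's best-case score.

```python
import numpy as np

rng = np.random.default_rng(1)
n_pixels, n_classes, true_class = 12, 3, 0

# Hypothetical abstract image (per-pixel color intervals), flattened.
lower = rng.uniform(0.0, 0.3, size=n_pixels)
upper = lower + 0.05

# Toy linear classifier: scores = W @ x + b.
W = rng.normal(size=(n_classes, n_pixels))
b = rng.normal(size=n_classes)

# Interval arithmetic: the minimum of w*x over [l, u] takes l where
# w >= 0 and u where w < 0 (and vice versa for the maximum).
W_pos, W_neg = np.clip(W, 0, None), np.clip(W, None, 0)
score_lo = W_pos @ lower + W_neg @ upper + b
score_hi = W_pos @ upper + W_neg @ lower + b

# Certified iff the true class's lower bound exceeds every other
# class's upper bound -- then no image in the set is misclassified.
others = [c for c in range(n_classes) if c != true_class]
certified = bool(score_lo[true_class] > score_hi[others].max())
print("certified:", certified)
```

Sweeping the camera range and repeating this check is what produces the green (certified) and red (possibly failing) regions shown above.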
<pre style="text-align: left;">
@inproceedings{AbstractRendering_Neurips2025,
title={Abstract Rendering: Certified Rendering Under 3D Semantic Uncertainty},
author={Ji, Chenxi and Li, Yangge and Zhong, Xiangru and Zhang, Huan and Mitra, Sayan},
booktitle={The Thirty-ninth Annual Conference on Neural Information Processing Systems},
year={2025},
month={December},
address={San Diego, CA, USA}
}
</pre>
<br><br>
<h2>Acknowledgments</h2>
<p style="text-align: justify;">
Chenxi Ji, Yangge Li, and Sayan Mitra are supported by a research grant from The Boeing Company and NSF (FMITF-2525287). Huan Zhang and Xiangru Zhong are supported in part by the AI2050 program at Schmidt Sciences (AI2050 Early Career Fellowship) and NSF (IIS SLES-2331967, CCF FMITF-2525287).
We thank Douglas Belgorod and Maya Cheshire for researching and developing applications of Abstract Rendering.</p>
<br><br>
<!-- <h2>Reference</h2>
<p style="text-align: left;">
[1] Bernhard Kerbl, Georgios Kopanas, Thomas Leimkühler, and George Drettakis. 3D Gaussian splatting for real-time radiance field rendering. <i>ACM Trans. Graph.</i>, 42(4):139:1–139:14, 2023.
</p><br>
<p style="text-align: left;">
[3] Huan Zhang, Tsui-Wei Weng, Pin-Yu Chen, Cho-Jui Hsieh, and Luca Daniel. Efficient neural network robustness certification with general activation functions. <i>Advances in Neural Information Processing Systems</i>, 31, 2018.