Rendering generates a two-dimensional image from the description of a three-dimensional scene and the camera parameters. To analyze how <em>visual models</em>, such as classifiers and pose estimators, and controllers behave with respect to changes in the scene or in the camera parameters, we need to propagate those changes through the rendering process.
This is the problem of <strong>Abstract Rendering</strong> introduced and tackled in this paper. Abstract rendering computes <em>provable pixel color bounds</em> on all images that can be produced under variations in the scene and in the camera positions. The resulting sets of images are called <strong>abstract images</strong>, which can then be propagated through neural network verification tools such as <a href="https://github.com/Verified-Intelligence/alpha-beta-CROWN">CROWN</a> to certify visual models under realistic semantic perturbations. For example, we can certify that a ResNet classifier will continue to detect and classify an airplane or a car correctly as the camera moves within a certain range of positions.
</p><br>
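As a toy illustration of pixel color bounds, consider a single 1D Gaussian splat viewed by a camera whose offset ranges over an interval. The sketch below (assumed names and simplifications, not the paper's implementation, which uses piecewise-linear relational abstractions rather than plain interval arithmetic) bounds the pixel color over all camera offsets and checks the bounds against sampled concrete renderings:

```python
import math

def gaussian_alpha(x, mu=0.0, sigma=1.0):
    """Opacity contribution of a splat centred at mu, evaluated at offset x."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

def alpha_bounds(t_lo, t_hi, mu=0.0, sigma=1.0):
    """Sound lower/upper bounds on gaussian_alpha over x in [t_lo, t_hi].

    Uses unimodality of the Gaussian: the maximum is 1 if mu lies inside the
    interval (attained at mu), else at the nearer endpoint; the minimum is at
    one of the endpoints.
    """
    ends = [gaussian_alpha(t_lo, mu, sigma), gaussian_alpha(t_hi, mu, sigma)]
    hi = 1.0 if t_lo <= mu <= t_hi else max(ends)
    lo = min(ends)
    return lo, hi

# Abstract pixel color: the splat's (scalar) color scaled by the opacity interval.
color = 0.8
lo, hi = alpha_bounds(-0.5, 1.5)
c_lo, c_hi = lo * color, hi * color

# Soundness check: every sampled concrete camera offset yields a color
# inside the abstract bounds.
for i in range(101):
    t = -0.5 + 2.0 * i / 100
    assert c_lo <= gaussian_alpha(t) * color <= c_hi
```

An abstract image is then just such an interval per pixel and channel, which downstream verifiers can consume as an input set.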
<p style="text-align: justify">
In this paper, we present an abstract rendering framework for scenes represented by <em><a href="https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/">Gaussian splats</a> (3DGS)</em> and <em><a href="https://www.matthewtancik.com/nerf">NeRFs</a></em>. Our approach is based on computing piecewise-linear relational abstractions for each of the operations appearing in the standard rendering algorithms for 3DGS and NeRFs. By composing these relational abstractions, we obtain abstract rendering algorithms that compute pixel color bounds, i.e., abstract images. Experiments on classification (ResNet), object detection (YOLO), and pose estimation (GateNet) show that our abstract images provide guaranteed over-approximations with no more than 3% conservativeness error, enabling practical certification for safety-critical vision systems.
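To give a flavor of a piecewise-linear relational abstraction, the sketch below (a hypothetical toy, not the paper's code) bounds the exponential, which appears in the opacity computation of both 3DGS and NeRF rendering, between two linear functions on an interval. Since exp is convex, the chord through the endpoints is an upper bound and the tangent at the midpoint is a lower bound; composing such linear relations across all rendering operations yields linear pixel color bounds:

```python
import math

def linear_bounds_exp(a, b):
    """Linear lower/upper bounds on exp over [a, b], as (slope, intercept) pairs.

    exp is convex, so the secant (chord) through (a, e^a) and (b, e^b)
    over-approximates it on [a, b], and any tangent under-approximates it;
    we take the tangent at the midpoint.
    """
    # Upper bound: chord between the endpoints.
    s_up = (math.exp(b) - math.exp(a)) / (b - a)
    c_up = math.exp(a) - s_up * a
    # Lower bound: tangent at the midpoint m, with slope exp(m).
    m = 0.5 * (a + b)
    s_lo = math.exp(m)
    c_lo = math.exp(m) - s_lo * m
    return (s_lo, c_lo), (s_up, c_up)

(sl, cl), (su, cu) = linear_bounds_exp(-2.0, 0.0)

# Soundness check on sampled points; eps absorbs floating-point rounding
# where the bounds touch the curve.
eps = 1e-12
for i in range(101):
    u = -2.0 + 2.0 * i / 100
    assert sl * u + cl <= math.exp(u) + eps
    assert math.exp(u) <= su * u + cu + eps
```

Relational (linear) bounds like these stay tighter under composition than plain intervals, because they track how each intermediate quantity varies with the scene and camera parameters.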