Commit 689528c

fix minor grammar, update http links
1 parent 20c098f commit 689528c

1 file changed: index.html (4 additions & 4 deletions)
@@ -13,13 +13,13 @@
 <h1>Abstract Rendering: Certified Rendering Under 3D
 Semantic Uncertainty</h1>
 <p class="spotlight">NeurIPS 2025 spotlight</p>
-<h3 class="author">Chenxi Ji<sup>*</sup>, Yangge Li<sup>*</sup>, Xiangru Zhong<sup>*</sup>, Huan Zhang, <a href="https://mitras.ece.illinois.edu/">Sayan Mitra</a></h3>
+<h3 class="author">Chenxi Ji<sup>*</sup>, Yangge Li<sup>*</sup>, Xiangru Zhong<sup>*</sup>, <a href="https://www.huan-zhang.com/">Huan Zhang</a>, <a href="https://mitras.ece.illinois.edu/">Sayan Mitra</a></h3>
 <h4 class="institute">University of Illinois, Urbana-Champaign</h4>


 <p>
 <a href="assets/pdf/AbstractRendering_Neurips2025.pdf" class="button">Paper</a>
-<a href="https://github.com/IllinoisReliableAutonomyGroup/Abstract-Rendering.git" class="button">Code</a>
+<a href="https://github.com/IllinoisReliableAutonomyGroup/Abstract-Rendering.git" class="button">Code (Coming Soon)</a>
 </p>
 <br>

@@ -30,11 +30,11 @@ <h2>Abstract</h2>


 <p style="text-align: justify">
 Rendering generates a two-dimensional image from the description of a three-dimensional scene and the camera parameters. To analyze how <em>visual models</em>, such as classifiers and pose estimators, and controllers behave with respect to changes in the scene or in the camera parameters, we need to propagate those changes through the rendering process.
-This is the problems of <strong>Abstract Rendering</strong> introduces and tackled in this paper. Abstract rendering computes <em>provable pixel color bounds</em> on all images that can be produced under variations in the scene and in the camera positions. The resulting sets of images are called <strong>abstract images</strong> which can then be propagated through neural networks verification tools such as <a href="https://github.com/Verified-Intelligence/alpha-beta-CROWN">CROWN</a> to certify visual models under realistic semantic perturbations. For example, we can certify that a ResNet classifier will continue to detect and classify an airplane or a car correctly as the camera moves within a certain range of positions.
+This is the problem of <strong>Abstract Rendering</strong>, introduced and tackled in this paper. Abstract rendering computes <em>provable pixel color bounds</em> on all images that can be produced under variations in the scene and in the camera positions. The resulting sets of images are called <strong>abstract images</strong>, which can then be propagated through neural network verification tools such as <a href="https://github.com/Verified-Intelligence/alpha-beta-CROWN">CROWN</a> to certify visual models under realistic semantic perturbations. For example, we can certify that a ResNet classifier will continue to detect and classify an airplane or a car correctly as the camera moves within a certain range of positions.
 </p><br>

 <p style="text-align: justify">
-In this peper, we present an abstract rendering framework for scenes represeted by <em><a href="https://en.wikipedia.org/wiki/Gaussian_splatting">Gaussian splats</a> (3DGS)</em> and <em><a href="https://en.wikipedia.org/wiki/Neural_radiance_field">NeRFs</a></em>. Our approach is based on computing piecewise-linear relational abstractions for each of the operations appearing in the standard rendering algorithms for 3DGS and NeRFs. By composing these relational abstractions, we get the abstract rendering algorithms that can compute pixel color bounds or abstract images. Experiments on classification (ResNet), object detection (YOLO), and pose estimation (GateNet) show that our abstract images provide guaranteed over-approximations with no more than 3% conservativeness error, enabling practical certification for safety-critical vision systems.
+In this paper, we present an abstract rendering framework for scenes represented by <em><a href="https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/">Gaussian splats</a> (3DGS)</em> and <em><a href="https://www.matthewtancik.com/nerf">NeRFs</a></em>. Our approach is based on computing piecewise-linear relational abstractions for each of the operations appearing in the standard rendering algorithms for 3DGS and NeRFs. By composing these relational abstractions, we obtain abstract rendering algorithms that compute pixel color bounds, i.e., abstract images. Experiments on classification (ResNet), object detection (YOLO), and pose estimation (GateNet) show that our abstract images provide guaranteed over-approximations with no more than 3% conservativeness error, enabling practical certification for safety-critical vision systems.
 </p><br><br>
