
Commit ccb51b6

update

1 parent d4abc1b commit ccb51b6

2 files changed

Lines changed: 40 additions & 7 deletions


index.html

Lines changed: 22 additions & 7 deletions
@@ -12,17 +12,22 @@
 <div class="container" style="text-align: center;">
 <h1>Abstract Rendering: Certified Rendering Under 3D
 Semantic Uncertainty</h1>
-<h3>Chenxi Ji<sup>*</sup>, Yangge Li<sup>*</sup>, Xiangru Zhong<sup>*</sup>, Huan Zhang, Sayan Mitra</h3>
-<h4>University of Illinois, Urbana-Champaign</h4>
+<p class="spotlight">NeurIPS 2025 spotlight</p>
+<h3 class="author">Chenxi Ji<sup>*</sup>, Yangge Li<sup>*</sup>, Xiangru Zhong<sup>*</sup>, Huan Zhang, Sayan Mitra</h3>
+<h4 class="institute">University of Illinois, Urbana-Champaign</h4>
+

 <p>
 <a href="assets/pdf/AbstractRendering_Neurips2025.pdf">[Paper]</a> |
 <a href="https://github.com/IllinoisReliableAutonomyGroup/Abstract-Rendering.git">[Code]</a>
 </p>
+<br>

 <img src="assets/images/pipeline.png" class="teaser" alt="Teaser Image" style="width: 100%;">
+<br><br>

 <h2>Abstract</h2>
+
 <p style="text-align: left;">
 Rendering produces 2D images from 3D scene representations, yet how continuous variations in camera pose and scenes influence these images—and, consequently, downstream visual models—remains underexplored. We introduce <strong>Abstract Rendering</strong>, a framework that computes provable bounds on all images rendered under continuously varying camera poses and scenes. The resulting abstract image, expressed as a set of constraints over the image matrix, enables rigorous uncertainty propagation through downstream neural networks and thereby supports certification of model behavior under realistic 3D semantic perturbations, far beyond traditional pixel-level noise models.
 </p><br>
@@ -33,46 +38,56 @@ <h2>Abstract</h2>

 <p style="text-align: left;">
 Our computed abstract images achieve up to <strong>3% over-approximation error</strong> compared to sampling results (baseline). Through experiments on classification (ResNet), object detection (YOLO), and pose estimation (GATENet) tasks, we demonstrate that abstract rendering enables formal certification of downstream models under realistic 3D variations—an essential step toward safety-critical vision systems.
-</p><br>
+</p><br><br>

 <h2>SlideVideo</h2>
+<br>
 <div class="slidevideo-container">
 <video class="wide-video" controls>
 <source src="assets/videos/output_small.mp4" type="video/mp4">
 Your browser does not support the video tag.
 </video>
 </div>
+<br><br>

 <h2>Abstract Images</h2>
+
 <p style="text-align: left;">The lower-bound and upper-bound images are intended to contain all images that can be rendered from a given range of camera (scene) movement, including the reference images.</p>

 <video class="wide-video" controls>
 <source src="assets/videos/AR_vis_airplane_grey.mp4" type="video/mp4">
 </video><br>
 <img src="assets/images/abs_result.png" class="teaser" alt="Teaser Image" style="width: 100%;"><br>
-
+<br>
+
 <h2>Downstream NN Verification</h2>
+
 <p style="text-align: left;">Verified ranges of camera movement for which the downstream NN (ResNet classifier or GateNet pose estimator) is certified to work (shown in green), and ranges where it may fail (shown in red).</p>

 <img src="assets/images/classification result.png" class="teaser" alt="Teaser Image" style="width: 100%;">
 <img src="assets/images/pose estimator result.png" class="teaser" alt="Teaser Image" style="width: 100%;">
-<br>
+<br><br>

 <h2>BibTeX</h2>
+
 <pre style="text-align: left;">
 @inproceedings{jiabstract,
 title={Abstract Rendering: Certified Rendering Under 3D Semantic Uncertainty},
 author={Ji, Chenxi and Li, Yangge and Zhong, Xiangru and Zhang, Huan and Mitra, Sayan},
 booktitle={The Thirty-ninth Annual Conference on Neural Information Processing Systems}
 }
-</pre><br>
+</pre>
+<br><br>

 <h2>Acknowledgments and Funding</h2>
+
 <p style="text-align: left;">
 Chenxi Ji, Yangge Li, and Sayan Mitra are supported by a research grant from The Boeing Company and NSF (FMITF-2525287). Huan Zhang and Xiangru Zhong are supported in part by the AI2050 program at Schmidt Sciences (AI2050 Early Career Fellowship) and NSF (IIS SLES-2331967, CCF FMITF-2525287).
-</p>
+</p>
+<br><br>

 <h2>Reference</h2>
+
 <p style="text-align: left;">
 [1] Bernhard Kerbl, Georgios Kopanas, Thomas Leimkühler, and George Drettakis. 3d gaussian splatting for real-time radiance field rendering. <i>ACM Trans. Graph.</i>, 42(4):139–1, 2023.
 </p><br>

style.css

Lines changed: 18 additions & 0 deletions
@@ -108,3 +108,21 @@
   color: #777;
   font-size: 0.9rem;
 }
+
+.author {
+  font-size: 1.5rem; /* Increase font size */
+  font-weight: normal; /* Ensure it's not bold */
+  margin: 10px 0;
+}
+
+.institute {
+  font-size: 1.2rem; /* Slightly smaller than author */
+  font-weight: normal; /* Ensure it's not bold */
+  margin: 5px 0;
+}
+
+.spotlight {
+  font-size: 1.8rem; /* Increase font size */
+  font-weight: normal; /* Ensure it's not bold */
+  margin: 10px 0;
+}
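
For reference, a minimal sketch of how the three classes added above pair with the header markup added in index.html earlier in this commit; the class names, sizes, and text are taken from this diff, while the surrounding container line is reproduced only for context:

<!-- Illustrative pairing of the new CSS classes with the new header markup -->
<div class="container" style="text-align: center;">
  <h1>Abstract Rendering: Certified Rendering Under 3D Semantic Uncertainty</h1>
  <p class="spotlight">NeurIPS 2025 spotlight</p>  <!-- 1.8rem, normal weight -->
  <h3 class="author">Chenxi Ji, Yangge Li, Xiangru Zhong, Huan Zhang, Sayan Mitra</h3>  <!-- 1.5rem, normal weight -->
  <h4 class="institute">University of Illinois, Urbana-Champaign</h4>  <!-- 1.2rem, normal weight -->
</div>

Keeping the headings as h3/h4 preserves the document outline, while the font-weight: normal declarations in the class rules override the browser's default bold rendering for those elements, as the CSS comments note.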
