
Commit c95160c

Author: Dmitrii Tarasov
Commit message: add key contributions
1 parent b8234f7 commit c95160c

1 file changed

Lines changed: 115 additions & 53 deletions

File tree

project/index.html

@@ -132,7 +132,53 @@ <h2 class="title is-3">Abstract</h2>
 </section>
 <!-- End paper abstract -->
 
-<!-- Spell out the contributions explicitly: -->
+<!--
+<section class="section">
+<div class="container is-max-desktop">
+<div class="columns is-centered">
+<div class="column is-four-fifths">
+<h2 class="title is-3 has-text-centered">Overview</h2>
+<div class="content has-text-centered">
+<img src="./static/images/teaser.png" alt="teaser" class="is-rounded">
+<p class="caption has-text-centered mb-6">
+<b>Figure 1. Overview.</b> Our method enables direct interpretation of vision encoder features through image reconstruction, revealing how different architectures internally represent visual information. We demonstrate this by (a) comparing feature informativeness between model families, (b) ranking encoders by their feature representation quality, and (c) showing how controlled feature space manipulations produce predictable image changes.
+</p>
+</div>
+</div>
+</div>
+</div>
+</section> -->
+
+<!-- Key Contributions -->
+<section class="section">
+<div class="container is-max-desktop">
+<div class="columns is-centered">
+<div class="column is-four-fifths">
+<h2 class="title is-3 has-text-centered">Key Contributions</h2>
+<div class="content">
+<div class="box">
+<div class="content">
+<h4>🔍 Novel Feature Analysis Method</h4>
+<p>We introduce a new approach to interpreting vision encoder features through direct image reconstruction, providing insight into how these models internally represent visual information.</p>
+</div>
+</div>
+<div class="box">
+<div class="content">
+<h4>📊 Model Family Comparison</h4>
+<p>We reveal that encoders pre-trained on image-based tasks retain significantly more image information than those trained with contrastive objectives, demonstrated through our SigLIP vs. SigLIP2 analysis.</p>
+</div>
+</div>
+<div class="box">
+<div class="content">
+<h4>🎨 Feature Space Control</h4>
+<p>We demonstrate that orthogonal rotations in feature space control color encoding, enabling predictable image manipulations and revealing the structured nature of the feature representations.</p>
+</div>
+</div>
+</div>
+</div>
+</div>
+</div>
+</section>
 
 <!-- (1) interpretability metric -->
 <!-- Text explanation -->
@@ -144,18 +190,32 @@ <h2 class="title is-3">Abstract</h2>
 <div class="container is-max-desktop">
 <div class="columns is-centered">
 <div class="column is-four-fifths">
-<h2 class="title is-3 has-text-centered">Reconstruct images from feature space</h2>
+<h2 class="title is-3 has-text-centered">Method</h2>
+
+<!-- Method Overview -->
 <div class="content">
+<h3 class="title is-4">Feature Reconstruction Framework</h3>
+<p class="has-text-justified">
+Our method enables direct interpretation of vision encoder features through image reconstruction. We train a decoder network that learns to reconstruct original images from their feature representations, providing a quantitative measure of feature informativeness.
+</p>
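A minimal sketch of the decoder training described above, assuming a frozen ViT-style encoder that emits 768-dimensional patch features on a 14×14 grid; the `Decoder` class, shapes, and loss are illustrative assumptions, not the project's actual implementation:

```python
# Hypothetical sketch of the reconstructor training loop; all names and
# shapes are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn

class Decoder(nn.Module):
    """Maps patch features (B, N, D) back to an image (B, 3, H, W)."""
    def __init__(self, dim=768, patch=16, grid=14):
        super().__init__()
        self.patch, self.grid = patch, grid
        self.proj = nn.Linear(dim, patch * patch * 3)

    def forward(self, feats):
        b, n, _ = feats.shape
        x = self.proj(feats)  # (B, N, patch*patch*3)
        x = x.view(b, self.grid, self.grid, 3, self.patch, self.patch)
        # reassemble patches into a full image
        x = x.permute(0, 3, 1, 4, 2, 5).reshape(
            b, 3, self.grid * self.patch, self.grid * self.patch)
        return x

decoder = Decoder()
opt = torch.optim.Adam(decoder.parameters(), lr=1e-4)

def train_step(images, frozen_encoder):
    with torch.no_grad():
        feats = frozen_encoder(images)            # encoder stays frozen
    recon = decoder(feats)
    loss = nn.functional.mse_loss(recon, images)  # pixel reconstruction loss
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

Only the decoder receives gradients, so the reconstruction error directly measures how much image information the frozen features retain.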
 <div class="has-text-centered">
 <img src="./static/images/features_reconstruction.drawio.png" alt="features_reconstruction" class="is-rounded">
 <p class="caption has-text-centered mb-6">
-<b>Figure 1. Image reconstructor training.</b> For pretrained model we train a reconstructor model that
-restores the image from the feature space.
+<b>Figure 1.</b> Our reconstruction framework trains a decoder to restore images from feature representations, enabling direct assessment of feature informativeness.
 </p>
+</div>
+</div>
+
+<!-- Comparative Analysis -->
+<div class="content mt-6">
+<h3 class="title is-4">Comparative Analysis: SigLIP vs SigLIP2</h3>
+<p class="has-text-justified">
+We compare two related model families that differ only in their training objective: SigLIP (trained with contrastive learning) and SigLIP2 (trained on image-based tasks). This controlled comparison reveals how training objectives influence feature representations.
+</p>
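A resolution sweep like the one behind Figure 2 could be scripted along these lines; PSNR is used here as a stand-in fidelity metric, and the encoder/decoder handles are placeholders, since the exact metric set is not stated in this diff:

```python
# Illustrative comparison of reconstruction quality across resolutions.
# Encoders, decoders, and the choice of PSNR are assumptions for this sketch.
import math
import numpy as np

def psnr(original, reconstructed, max_val=1.0):
    """Peak signal-to-noise ratio (dB) for images with values in [0, max_val]."""
    mse = float(np.mean((original - reconstructed) ** 2))
    if mse == 0.0:
        return float("inf")
    return 10.0 * math.log10(max_val ** 2 / mse)

def compare(models, images_by_resolution):
    """models: name -> (encode, decode); returns mean PSNR per (model, resolution)."""
    results = {}
    for name, (encode, decode) in models.items():
        for res, imgs in images_by_resolution.items():
            scores = [psnr(img, decode(encode(img))) for img in imgs]
            results[(name, res)] = float(np.mean(scores))
    return results
```

Higher PSNR at a given resolution indicates that more image information survived the round trip through the encoder's feature space.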
+<div class="has-text-centered">
 <img src="./static/images/reconstruction_metrics.jpg" alt="reconstruction_metrics" class="is-rounded">
 <p class="caption has-text-centered mb-6">
-<b>Figure 2. Reconstruction Metrics.</b> We show the results of the reconstruction for SigLip and SigLip2
-for different image resultions.
+<b>Figure 2.</b> Reconstruction quality comparison between SigLIP and SigLIP2 across different image resolutions demonstrates that image-based training leads to more informative feature representations.
 </p>
 </div>
 </div>
@@ -164,114 +224,116 @@ <h2 class="title is-3 has-text-centered">Reconstruct images from feature space</
 </div>
 </section>
 
-
-<!-- (2) Feature-space transformations -->
-<!-- Text explanation -->
-<!-- Framework visualization: generalized the operator in image space and in feature space -->
-<!-- Examples of working with RGB -->
-<!-- Examples of disabling one channel (yellowing) -->
-<!-- Examples of the spectrum of such a matrix, showing that only a small number of channels change -->
-<!-- -->
-
+<!-- Feature Space Analysis -->
 <section class="section">
 <div class="container is-max-desktop">
 <div class="columns is-centered">
 <div class="column is-four-fifths">
-<h2 class="title is-3 has-text-centered">Feature-space transformations. Q matrix Calculation and Application.</h2>
+<h2 class="title is-3 has-text-centered">Feature Space Analysis</h2>
+
+<!-- Q Matrix Framework -->
 <div class="content">
+<h3 class="title is-4">Q Matrix: A Tool for Feature Manipulation</h3>
+<p class="has-text-justified">
+We introduce the Q matrix framework, which enables controlled manipulation of feature representations. This orthogonal transformation matrix is learned to perform specific image manipulations, revealing how visual attributes are encoded in the feature space.
+</p>
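One standard way to obtain such an orthogonal Q from paired (original, manipulated) feature matrices is the orthogonal Procrustes solution via SVD; the sketch below is a plausible reading of the framework under that assumption, not the repository's code:

```python
# Hypothetical sketch: fit an orthogonal Q mapping source patch features to
# target (manipulated-image) features; function names are illustrative.
import numpy as np

def fit_orthogonal_q(src, tgt):
    """src, tgt: (num_patches, dim). Returns the orthogonal Q minimizing
    ||src @ Q - tgt||_F (the orthogonal Procrustes solution)."""
    u, _, vt = np.linalg.svd(src.T @ tgt)
    return u @ vt

def apply_q(features, q):
    """Apply the learned rotation to every patch embedding at once."""
    return features @ q
```

Because Q is constrained to be orthogonal, it rotates the feature space without rescaling it, which is what makes the eigenvalue analysis in the figures below meaningful.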
 <div class="columns is-centered has-vertical-divider">
 <div class="column is-half">
 <img src="./static/images/features_reconstruction_manipulation_train_Q.drawio.png" alt="features_reconstruction_manipulation_train_Q" class="is-rounded mb-4">
 <p class="caption has-text-centered mb-6">
-<b>Figure 3. Feature-space transformations. Q matrix Calculation.</b> We then calculate Q matrix for feature-space manupulation.
+<b>Figure 3.</b> The Q matrix calculation process learns the transformation needed for a specific image manipulation.
 </p>
 </div>
 <div style="border-left: 4px solid gray;margin: 50px;"></div>
 <div class="column is-half">
 <img src="./static/images/features_reconstruction_manipulation_eval_Q.drawio.png" alt="features_reconstruction_manipulation_eval_Q" class="is-rounded mb-4">
 <p class="caption has-text-centered mb-6">
-<b>Figure 4. Feature-space transformations. Q matrix Application.</b> After Q matrix is calculated, we apply it to the feature space. For each patch embedding.
+<b>Figure 4.</b> Applying the calculated Q matrix to each patch embedding enables controlled image manipulation.
 </p>
 </div>
 </div>
 </div>
-</div>
-</div>
-</div>
-</section>
 
-<section class="section">
-<div class="container is-max-desktop">
-<div class="columns is-centered">
-<div class="column is-four-fifths">
-<h2 class="title is-3 has-text-centered">Feature-space transformations. Color Swap Examples.</h2>
-<div class="content">
+<!-- Color Manipulation Results -->
+<div class="content mt-6">
+<h3 class="title is-4">Color Manipulation Studies</h3>
+<p class="has-text-justified">
+Through our Q matrix framework, we demonstrate precise control over color attributes in the feature space. Our experiments reveal that color information is encoded through orthogonal rotations rather than spatial transformations.
+</p>
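The ±1 eigenvalue pattern reported for swap-style rotations (Figure 6) can be sanity-checked on a toy example: a coordinate-pair swap is the simplest orthogonal map of this kind, and its eigenvalues split into +1 for preserved directions and -1 for flipped ones. This is an illustration only, not the paper's actual Q matrix:

```python
# Toy check of the eigenvalue clustering discussed for swap-like rotations.
import numpy as np

def swap_matrix(dim, pairs):
    """Orthogonal matrix that swaps the listed coordinate pairs, identity elsewhere."""
    q = np.eye(dim)
    for i, j in pairs:
        q[[i, j]] = q[[j, i]]  # swapping rows of I keeps the matrix orthogonal
    return q

# Swapping two coordinate pairs in an 8-d space: each swap contributes one
# -1 eigenvalue (flipped direction) and one +1; untouched axes stay at +1.
q = swap_matrix(8, [(0, 1), (2, 3)])
eigenvalues = np.sort(np.linalg.eigvals(q).real)
```

The same reasoning explains why most eigenvalues of the learned color-swap Q sit near 1 (the transformation leaves most feature directions alone) with a small cluster near -1.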
+
+<!-- Color Swap -->
+<h4 class="title is-5 mt-4">Red-Blue Channel Swap</h4>
 <div class="columns is-centered">
 <div class="column is-half">
 <img src="./static/images/rb_swap.png" alt="rb_swap" class="is-rounded">
 <p class="caption has-text-centered mb-6">
-<b>Figure 5. Red-blue channel swap samples.</b>
+<b>Figure 5.</b> Red-blue channel swap demonstrates precise control over color channels in feature space.
 </p>
 </div>
 <div class="column is-half">
 <img src="./static/images/color_swap_all_eigen_values.png" alt="color_swap_all_eigen_values" class="is-rounded">
 <p class="caption has-text-centered mb-6">
-<b>Figure 6. Eigenvalues for red-blue channel swap matrix.</b> Majority of eigenvalues are close to 1, which means that the transformation is close to an identity matrix. While the other cluster of eigenvalues are close to -1, which means that for these channels direction is changed to the opposite.
+<b>Figure 6.</b> Eigenvalue analysis reveals that color transformations affect only specific feature dimensions while preserving others.
 </p>
 </div>
 </div>
-</div>
-</div>
-</div>
-</div>
-</section>
 
-<section class="section">
-<div class="container is-max-desktop">
-<div class="columns is-centered">
-<div class="column is-four-fifths">
-<h2 class="title is-3 has-text-centered">Feature-space transformations. Blue Channel Suppression.</h2>
-<div class="content">
+<!-- Blue Suppression -->
+<h4 class="title is-5 mt-4">Blue Channel Suppression</h4>
 <div class="columns is-centered">
 <div class="column is-half">
 <img src="./static/images/b_suppression_all_transformations.png" alt="b_suppression_all_transformations" class="is-rounded">
 <p class="caption has-text-centered mb-6">
-<b>Figure 7. Blue channel suppression samples.</b>
+<b>Figure 7.</b> Selective suppression of the blue channel demonstrates fine-grained control over color attributes.
 </p>
 </div>
 <div class="column is-half">
 <img src="./static/images/b_suppression_all_eigen_values.png" alt="b_suppression_all_eigen_values" class="is-rounded">
 <p class="caption has-text-centered mb-6">
-<b>Figure 8. Eigenvalues for blue channel suppression matrix.</b>
+<b>Figure 8.</b> Eigenvalue distribution for blue suppression shows targeted modification of specific feature dimensions.
 </p>
 </div>
 </div>
+
+<!-- Colorization -->
+<h4 class="title is-5 mt-4">Image Colorization</h4>
+<div class="has-text-centered">
+<img src="./static/images/colorized_examples.png" alt="colorization_all_transformations" class="is-rounded">
+<p class="caption has-text-centered mb-6">
+<b>Figure 9.</b> Our method enables controlled colorization through feature space manipulation, demonstrating the structured nature of color encoding.
+</p>
+</div>
 </div>
 </div>
 </div>
 </div>
 </section>
 
+<!-- Conclusion -->
 <section class="section">
 <div class="container is-max-desktop">
 <div class="columns is-centered">
 <div class="column is-four-fifths">
-<h2 class="title is-3 has-text-centered">Feature-space transformations. Colorization.</h2>
-<div class="content">
-<div class="has-text-centered">
-<img src="./static/images/colorized_examples.png" alt="colorization_all_transformations" class="is-rounded">
-<p class="caption has-text-centered mb-6">
-<b>Figure 9. Colorization samples.</b>
-</p>
-</div>
+<h2 class="title is-3 has-text-centered">Conclusion</h2>
+<div class="content has-text-justified">
+<p>
+Our work introduces a novel approach to understanding vision encoder features through image reconstruction. We demonstrate that:
+</p>
+<ul>
+<li>Training objectives significantly impact how models internally represent visual information</li>
+<li>Image-based pre-training leads to more informative feature representations than contrastive learning</li>
+<li>Color information is encoded through orthogonal rotations in feature space</li>
+<li>Our method provides a general framework for analyzing any vision encoder's feature representations</li>
+</ul>
+<p>
+These findings have important implications for model design and provide new tools for understanding and controlling vision encoder behavior. Our approach opens new avenues for feature analysis and manipulation in vision models.
+</p>
 </div>
 </div>
 </div>
 </div>
 </section>
 
-
-
 <!-- TODO: add citation -->
 <!--BibTex citation -->
 <section class="section" id="BibTeX">
