Sailorzzcc committed
Commit 410b2bc · verified · 1 parent: bb1cc8f

Update index.html

Files changed (1)
index.html +16 -16
index.html CHANGED
@@ -22,20 +22,20 @@
   <link href="https://fonts.googleapis.com/css?family=Google+Sans|Noto+Sans|Castoro"
         rel="stylesheet">
 
-  <link rel="stylesheet" href="./SEGS/static/css/bulma.min.css">
-  <link rel="stylesheet" href="./SEGS/static/css/bulma-carousel.min.css">
-  <link rel="stylesheet" href="./SEGS/static/css/bulma-slider.min.css">
-  <link rel="stylesheet" href="./SEGS/static/css/fontawesome.all.min.css">
+  <link rel="stylesheet" href="static/css/bulma.min.css">
+  <link rel="stylesheet" href="static/css/bulma-carousel.min.css">
+  <link rel="stylesheet" href="static/css/bulma-slider.min.css">
+  <link rel="stylesheet" href="static/css/fontawesome.all.min.css">
   <link rel="stylesheet"
         href="https://cdn.jsdelivr.net/gh/jpswalsh/academicons@1/css/academicons.min.css">
-  <link rel="stylesheet" href="./SEGS/static/css/index.css">
-  <link rel="icon" href="./SEGS/static/images/favicon.png">
+  <link rel="stylesheet" href="static/css/index.css">
+  <link rel="icon" href="static/images/favicon.png">
 
   <script src="https://ajax.googleapis.com/ajax/libs/jquery/3.5.1/jquery.min.js"></script>
-  <script defer src="./SEGS/static/js/fontawesome.all.min.js"></script>
-  <script src="./SEGS/static/js/bulma-carousel.min.js"></script>
-  <script src="./SEGS/static/js/bulma-slider.min.js"></script>
-  <script src="./SEGS/static/js/index.js"></script>
+  <script defer src="static/js/fontawesome.all.min.js"></script>
+  <script src="static/js/bulma-carousel.min.js"></script>
+  <script src="static/js/bulma-slider.min.js"></script>
+  <script src="static/js/index.js"></script>
 </head>
 <body>
 
@@ -133,7 +133,7 @@
   <br>
   <div class="column content">
     <video id="matting-video" autoplay controls muted loop playsinline height="100%">
-      <source src="./SEGS/static/videos/nvs_results.mp4"
+      <source src="static/videos/nvs_results.mp4"
              type="video/mp4">
    </video>
  </div>
@@ -152,7 +152,7 @@
   <h2 class="title is-3">Abstract</h2>
   <div class="content has-text-justified">
     <p>
-      <img src="./SEGS/static/images/viz.png"
+      <img src="static/images/viz.png"
           class=""
           alt=""/>
      3D Gaussian Splatting (3DGS) has demonstrated remarkable effectiveness for novel view synthesis (NVS). However, the 3DGS model tends to overfit when trained with sparse posed views, limiting its generalization ability to novel views. We alleviate the overfitting problem, presenting a Self-Ensembling Gaussian Splatting (SE-GS) approach. Our method encompasses a &Sigma;-model and a &Delta;-model. The &Sigma;-model serves as an ensemble of 3DGS models that generates novel-view images during inference. We achieve the self-ensembling by introducing an uncertainty-aware perturbation strategy at the training state. We complement the &Sigma;-model with the &Delta;-model, which is dynamically perturbed based on the uncertainties of novel-view renderings across different training steps. The perturbation yields diverse 3DGS models without additional training costs.
@@ -172,7 +172,7 @@
   <h2 class="title is-3">Method Overview</h2>
 </div>
 <br>
-<img src="./SEGS/static/images/pipeline.png"
+<img src="static/images/pipeline.png"
     class=""
     alt=""/>
 The perturbed models are derived from the &Delta;-model via an uncertainty-aware perturbation strategy. We store images rendered from pseudo views at different training steps in buffers, from which we compute pixel-level uncertainties. We then perturb the Gaussians overlapping the pixels with high uncertainties, as highlighted as red ellipses. Self-ensembling over the perturbed models is achieved by training a &Sigma;-model with a regularization that penalizes the discrepancies of the &Sigma;-model and the perturbed &Delta;-model. During inference, novel view synthesis is performed using the &Sigma;-model.
@@ -185,7 +185,7 @@
 <div class="columns is-centered">
   <br>
   <video id="matting-video" autoplay controls muted loop playsinline style="width: 70%; height: auto;">
-    <source src="./SEGS/static/videos/uncerntainty.mp4"
+    <source src="static/videos/uncerntainty.mp4"
           type="video/mp4">
  </video>
 </div>
@@ -200,7 +200,7 @@
 </div>
 <br>
 <video id="matting-video" autoplay controls muted loop playsinline height="100%">
-  <source src="./SEGS/static/videos/fern.mp4"
+  <source src="static/videos/fern.mp4"
         type="video/mp4">
 </video>
 </div>
@@ -212,7 +212,7 @@
 <div class="columns is-centered">
   <br>
   <video id="matting-video" autoplay controls muted loop playsinline style="width: 70%; height: auto;">
-    <source src="./SEGS/static/videos/sigma_model_vs_perturbed_model.mp4"
+    <source src="static/videos/sigma_model_vs_perturbed_model.mp4"
          type="video/mp4">
  </video>
 </div>
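
All 16 edits follow one pattern: the asset URLs drop the leading "./SEGS/" prefix so the stylesheets, scripts, images, and videos resolve relative to the page's own directory. A minimal Python sketch of a one-off that reproduces the same rewrite follows; it is hypothetical and not part of this commit, and it assumes index.html sits next to the static/ folder it now references.

# Hypothetical helper, not part of the commit: reapplies the path rewrite shown in the diff.
from pathlib import Path

page = Path("index.html")                      # filename taken from the diff header
html = page.read_text(encoding="utf-8")
# Every changed line replaces the "./SEGS/static/" prefix with "static/".
page.write_text(html.replace("./SEGS/static/", "static/"), encoding="utf-8")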