Update index.html
Signed-off-by: hg-chung <107260735+hg-chung@users.noreply.github.com>
hg-chung authored Apr 5, 2024
1 parent d6a3014 commit 1fa6917
Showing 1 changed file with 8 additions and 23 deletions.
31 changes: 8 additions & 23 deletions index.html
@@ -118,7 +118,12 @@ <h1 class="title is-1 publication-title">Differentiable Point-based
<h2 class="title is-3">Abstract</h2>
<div class="content has-text-justified">
<p>
We present differentiable point-based inverse rendering, DPIR, an analysis-by-synthesis method that processes images captured under diverse illuminations to estimate shape and spatially-varying BRDF.
To this end, we adopt point-based rendering, which eliminates the multiple samples per ray typical of volumetric rendering and thus significantly accelerates inverse rendering.
To realize this idea, we devise a hybrid point-volumetric representation for geometry and a regularized basis-BRDF representation for reflectance.
The hybrid geometric representation enables fast rendering through point-based splatting while retaining the geometric details and stability inherent to SDF-based representations.
The regularized basis-BRDF mitigates the ill-posedness of inverse rendering stemming from limited light-view angular samples. We also propose an efficient shadow detection method using point-based shadow map rendering.
Our extensive evaluations demonstrate that DPIR outperforms prior works in reconstruction accuracy, computational efficiency, and memory footprint. Furthermore, our explicit point-based representation and rendering enable intuitive geometry and reflectance editing.
</p>
</div>
</div>
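To make the shadow-detection idea in the abstract concrete, below is a minimal sketch of point-based shadow-map rendering, assuming PyTorch, an orthographic light projection, and a depth tolerance `eps`; the function name and all parameters are illustrative, not the authors' implementation. Point depths seen from the light are splatted into a depth buffer, and a point counts as lit when its own light-space depth is close to the stored minimum.

```python
import torch

def point_shadow_visibility(points_light, shadow_res=256, eps=1e-2):
    """Hypothetical point-based shadow map (requires PyTorch >= 1.12).

    points_light: (N, 3) point positions in the light's frame, z = depth > 0.
    Returns a (N,) mask: 1 if the point is the nearest surface along its
    light-space pixel, 0 if a nearer point occludes it.
    """
    x, y, z = points_light[:, 0], points_light[:, 1], points_light[:, 2]
    # Project points to light-space pixel indices (orthographic for simplicity).
    u = ((x - x.min()) / (x.max() - x.min() + 1e-8) * (shadow_res - 1)).long()
    v = ((y - y.min()) / (y.max() - y.min() + 1e-8) * (shadow_res - 1)).long()
    pix = v * shadow_res + u
    # Depth buffer: minimum depth per light-space pixel.
    depth_map = torch.full((shadow_res * shadow_res,), float("inf"))
    depth_map.scatter_reduce_(0, pix, z, reduce="amin")
    # A point is lit if its depth is within eps of the nearest depth at its pixel.
    return (z <= depth_map[pix] + eps).float()
```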
@@ -136,33 +141,13 @@ <h2 class="title is-3">Abstract</h2>
<h2 class="title is-3">Method</h2>
<img src="static/images/intro.png" alt="MY ALT TEXT"/>
<h2 class="subtitle has-text-centered">
Overview of differentiable forward rendering in DPIR
</h2>
</div>
</div>
</div>
</div>
</section>

<section class="section">
<div class="container is-max-desktop">
<div class="columns is-centered">
<div class="column is-full-width">
<h2 class="title is-3">Neural Spectro-polarimetric Field (NeSpoF)</h2>
<div class="column is-centered has-text-centered"><p>$\mathbf{s}, \sigma = F_\Theta(x,y,z,\theta, \phi, \lambda)$</p></div>

<img src="./static/images/intro.PNG"
class="interpolation-image"
alt="Network architecture."
width="100%"/>
<p>
(a) For each 3D point, its position is used as a query for the diffuse-albedo MLP $\Theta_d$, SDF MLP $\Theta_\text{SDF}$, and specular-basis coefficient MLP $\Theta_c$.
The specular-basis BRDF MLP $\Theta_s$ models specular-basis reflectance, given the incident and outgoing directions $\boldsymbol{\omega_{i}}$ and $\boldsymbol{\omega_{o}}$.
The point-based shadow renderer estimates each point's visibility from the light source for each image. Using the diffuse albedo, normals, specular reflectance, and visibility, we compute the radiance of each point.
(b) The radiance is then projected onto a camera plane to render the pixel color through splatting-based differentiable forward rendering.
</p>
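A condensed sketch of this forward pass is given below; the function names, the simplified diffuse-plus-specular shading, and the normalized splatting weights are assumptions for illustration, not the paper's implementation. Per-point radiance is computed from albedo, normal, basis-weighted specular reflectance, and visibility, then scattered into pixels with splatting weights.

```python
import torch

def point_radiance(albedo, normal, spec_lobes, coeffs, visibility, wi, light_intensity):
    """Per-point radiance: visibility-masked diffuse + basis-weighted specular.

    albedo (N, 3), normal/wi (N, 3) unit vectors, spec_lobes (N, K) specular
    basis lobes evaluated at the current light/view directions, coeffs (N, K),
    visibility (N, 1) in [0, 1], light_intensity scalar.
    """
    cos_i = (normal * wi).sum(-1, keepdim=True).clamp(min=0.0)
    specular = (coeffs * spec_lobes).sum(-1, keepdim=True)
    return visibility * light_intensity * (albedo / torch.pi + specular) * cos_i

def splat_to_pixels(radiance, weights, point_to_pixel, num_pixels):
    """Scatter point radiance into pixels with precomputed splatting weights.

    radiance (N, 3), weights (N, 1), point_to_pixel (N,) long pixel indices.
    """
    image = torch.zeros(num_pixels, 3)
    image.index_add_(0, point_to_pixel, weights * radiance)  # accumulate contributions
    norm = torch.zeros(num_pixels, 1)
    norm.index_add_(0, point_to_pixel, weights)
    return image / norm.clamp(min=1e-8)
```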
</div>
</div>
<!--/ Method. -->
</div>
</div>
</section>
