Implicit shape representations describe 3D geometry in a differentiable manner, which is useful for downstream deep learning applications. An interesting property of these methods is that they can serve as efficient compression tools.
We experiment with periodic activations to improve key compression properties of current implicit neural representations. Specifically, we trade off memory use and compression speed against the expressive power of the method.
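To make the idea concrete, here is a minimal sketch of a periodic-activation (sine) MLP that maps a latent shape code and a query point to a signed distance, in the spirit of SIREN. The layer widths, the frequency factor omega_0, and the decoder layout are illustrative assumptions, not our exact architecture.

```python
import torch
import torch.nn as nn


class SineLayer(nn.Module):
    """Linear layer followed by a sine activation, with SIREN-style init."""

    def __init__(self, in_features, out_features, omega_0=30.0, is_first=False):
        super().__init__()
        self.omega_0 = omega_0
        self.linear = nn.Linear(in_features, out_features)
        # SIREN initialization: first layer uniform in [-1/n, 1/n],
        # later layers uniform in [-sqrt(6/n)/omega_0, sqrt(6/n)/omega_0].
        with torch.no_grad():
            n = in_features
            if is_first:
                self.linear.weight.uniform_(-1.0 / n, 1.0 / n)
            else:
                bound = (6.0 / n) ** 0.5 / omega_0
                self.linear.weight.uniform_(-bound, bound)

    def forward(self, x):
        return torch.sin(self.omega_0 * self.linear(x))


class LatentSDF(nn.Module):
    """Decodes a latent shape code z and a 3D query point x to an SDF value."""

    def __init__(self, latent_dim=256, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            SineLayer(latent_dim + 3, hidden, is_first=True),
            SineLayer(hidden, hidden),
            SineLayer(hidden, hidden),
            nn.Linear(hidden, 1),  # linear head for the signed distance
        )

    def forward(self, z, x):
        return self.net(torch.cat([z, x], dim=-1))
```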
Increasing the latent code dimensionality improves the quality with which the auto-decoder represents shapes at test time. Comparisons against sparse and dense voxel grids at resolutions from 16³ to 320³ show that our latent representation greatly reduces storage cost at comparable reconstruction quality. Voxel grids at resolutions above 128³ outperform our best latent representations.
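The following back-of-the-envelope calculation illustrates the storage gap with assumed numbers (float32 values, a 256-dimensional latent code), not our exact measurements; the decoder weights are shared across all shapes, so the per-shape cost is essentially the latent code alone.

```python
# Storage of a dense float32 SDF voxel grid vs. a single latent code.
BYTES_PER_FLOAT = 4


def dense_grid_bytes(resolution: int) -> int:
    return resolution ** 3 * BYTES_PER_FLOAT


latent_dim = 256                              # assumed code dimensionality
latent_bytes = latent_dim * BYTES_PER_FLOAT   # 1 KB per shape

for res in (16, 64, 128, 320):
    ratio = dense_grid_bytes(res) / latent_bytes
    print(f"{res}^3 grid: {dense_grid_bytes(res) / 1e6:.2f} MB "
          f"({ratio:.0f}x a {latent_dim}-D code of {latent_bytes} B)")
```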
We compare selected reconstructions from our sinusoidal representation network (SIREN) against DeepSDF, as well as sparse and dense voxel grids. Our model captures fine surface features in greater detail than DeepSDF.
We are also able to interpolate the latent codes of two shapes to produce a novel shape.
Latent interpolation video: start frame to end frame.
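A minimal sketch of such latent-space interpolation, assuming the LatentSDF decoder from above and two already optimized latent codes z_a and z_b (hypothetical names); each blended code decodes to a novel shape whose zero level set can be meshed, e.g. with marching cubes.

```python
import torch


def interpolate_codes(z_a: torch.Tensor, z_b: torch.Tensor, steps: int = 8):
    """Linearly blend two latent codes into a sequence of intermediate codes."""
    return [torch.lerp(z_a, z_b, float(t)) for t in torch.linspace(0.0, 1.0, steps)]


# Usage sketch: evaluate the decoder on a grid of query points for every
# blended code, then extract the zero level set to obtain a mesh.
# for z in interpolate_codes(z_a, z_b):
#     sdf_values = decoder(z.expand(points.shape[0], -1), points)
```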
There's a lot of excellent work that ours is based on.
DeepSDF: Learning Continuous Signed Distance Functions for Shape Representation first showed that shape-coded auto-decoders are well suited to learning SDFs. Implicit Neural Representations with Periodic Activation Functions proposed sine activations together with a principled initialization scheme and showed strong results when overfitting single scenes.
We thank Prof. Matthias Niessner for supervising our semester project.
@article{freissmuth2023deep3dcomp,
author = {Freissmuth, Leonard and Wulff, Philipp},
title = {Deep 3D-Shape Compression},
year = {2023},
month = {Mar},
url = {https://philippwulff.github.io/Deep3DComp/}
}