Comparison on MVImgNet for upsampling low-res GSplats
We compare our method against baselines using perceptual metrics. Despite being generic, our method
consistently produces the best quantitative results. We encourage the reader to inspect the visual results in
a later section, which show that the visual quality of our method surpasses the baselines. A sketch of how
these metrics can be computed follows the table.
| Method | LPIPS ↓ | NIQE ↓ | FID ↓ | IS ↑ |
| --- | --- | --- | --- | --- |
| Instruct-NeRF2NeRF¹ | 0.1867 | 8.33 | 32.56 | 10.52 ± 1.06 |
| Super-NeRF² | 0.2204 | 8.84 | 37.54 | 10.40 ± 1.03 |
| Pre-hoc Image³ | 0.1524 | 7.65 | 27.04 | 11.27 ± 0.99 |
| SuperGaussian (ours) | 0.1290 | 6.80 | 24.32 | 11.69 ± 1.08 |
¹ Haque et al., "Instruct-NeRF2NeRF: Editing 3D Scenes with Instructions", ICCV 2023.
² Han et al., "Super-NeRF: View-Consistent Detail Generation for NeRF Super-Resolution", arXiv 2023.
³ Our customized baseline, identical to SuperGaussian except that it uses a SOTA image upsampler.
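For concreteness, here is a minimal sketch of how the perceptual metrics above could be computed with an off-the-shelf package. The choice of torchmetrics and all tensor/function names are our illustrative assumptions, not the paper's actual evaluation code; NIQE is omitted because torchmetrics does not provide it (dedicated no-reference IQA packages would be needed for it).

```python
# Hedged sketch: evaluating rendered test views with torchmetrics.
# Assumes renders/references are available as uint8 tensors of shape (N, 3, H, W).
import torch
from torchmetrics.image.fid import FrechetInceptionDistance
from torchmetrics.image.inception import InceptionScore
from torchmetrics.image.lpip import LearnedPerceptualImagePatchSimilarity


def evaluate_perceptual(pred_u8: torch.Tensor, ref_u8: torch.Tensor) -> dict:
    """Compute LPIPS / FID / IS over N rendered views (names illustrative)."""
    # LPIPS: reference-based perceptual distance; normalize=True expects [0, 1] inputs.
    lpips = LearnedPerceptualImagePatchSimilarity(net_type="alex", normalize=True)
    lpips_val = lpips(pred_u8.float() / 255.0, ref_u8.float() / 255.0)

    # FID: distance between Inception-feature statistics of the two image sets.
    fid = FrechetInceptionDistance(feature=2048)
    fid.update(ref_u8, real=True)
    fid.update(pred_u8, real=False)

    # IS: reference-free; compute() returns (mean, std), reported as mean ± std.
    inception = InceptionScore()
    inception.update(pred_u8)
    is_mean, is_std = inception.compute()

    return {
        "LPIPS": lpips_val.item(),
        "FID": fid.compute().item(),
        "IS": (is_mean.item(), is_std.item()),
    }
```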
Comparison on Blender-Synthetic for upsampling low-res RGB
Here, we compare on ×4 upsampling from 200 × 200 px to 800 × 800 px. We compare our method against baselines on the
official test set, using the metrics reported in prior work. Our method produces on-par quantitative results.
Moreover, our results contain more generative detail, which the reference-based metrics do not capture. For a
fair comparison against these baselines, we use a Neural Radiance Field (specifically TensoRF) as our 3D
representation. Baseline results are taken directly from the respective papers. A sketch of the reference-based
evaluation follows the table.
| Method | LPIPS ↓ | PSNR ↑ | SSIM ↑ |
| --- | --- | --- | --- |
| FastSR-NeRF⁴ | 0.075 | 30.47 | 0.944 |
| NeRF-SR⁵ | 0.076 | 28.46 | 0.921 |
| SuperGaussian (ours) | 0.067 | 28.44 | 0.923 |
⁴ Lin et al., "FastSR-NeRF: Improving NeRF Efficiency on Consumer Devices with a Simple Super-Resolution Pipeline", WACV 2024.
⁵ Wang et al., "NeRF-SR: High-Quality Neural Radiance Fields using Supersampling", ACM MM 2022.
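As above, a minimal sketch of the reference-based evaluation on this benchmark, assuming per-scene predictions and ground-truth test views as float tensors in [0, 1] of shape (N, 3, 800, 800). The torchmetrics package and all names are again our illustrative assumptions, not the actual evaluation code.

```python
# Hedged sketch: PSNR / SSIM / LPIPS on the Blender-Synthetic test views.
import torch
from torchmetrics.image import PeakSignalNoiseRatio, StructuralSimilarityIndexMeasure
from torchmetrics.image.lpip import LearnedPerceptualImagePatchSimilarity

psnr = PeakSignalNoiseRatio(data_range=1.0)
ssim = StructuralSimilarityIndexMeasure(data_range=1.0)
lpips = LearnedPerceptualImagePatchSimilarity(net_type="alex", normalize=True)


def evaluate_scene(pred: torch.Tensor, gt: torch.Tensor) -> dict:
    """Average metrics over the N upsampled 800 x 800 test views of one scene."""
    return {
        "PSNR": psnr(pred, gt).item(),
        "SSIM": ssim(pred, gt).item(),
        "LPIPS": lpips(pred, gt).item(),
    }
```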