TL;DR: We propose EcoSplat, the first efficiency-controllable feed-forward 3DGS framework, which adaptively predicts the 3D representation for any target primitive count given at inference time. Extensive experiments across multiple dense-view settings show that EcoSplat is robust and outperforms state-of-the-art methods under strict primitive-count constraints, making it well-suited for flexible downstream rendering tasks.
EcoSplat Overview. EcoSplat is trained in two stages: Pixel-aligned Gaussian Training (PGT, Sec. 3.2) and Importance-aware Gaussian Finetuning (IGF, Sec. 3.3). During IGF, the combination of an importance-aware opacity loss and Progressive Learning on Gaussian Compaction (PLGC) encourages EcoSplat to suppress the opacities of less important Gaussians. At inference, the model adaptively satisfies any user-specified primitive count, producing the Gaussians best suited to that budget in a single feed-forward pass (Sec. 3.4).
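To make the inference-time budgeting concrete, below is a minimal PyTorch sketch of one plausible realization: once opacities have been finetuned to act as importance scores, meeting a user-specified budget reduces to a top-K selection over the predicted primitives. This is an illustration under that assumption, not the authors' implementation; `select_top_k_gaussians`, the tensor layout, and the use of opacity as the importance proxy are all hypothetical.

```python
import torch

def select_top_k_gaussians(gaussians: dict, importance: torch.Tensor, budget: int) -> dict:
    """Keep the `budget` most important Gaussians (hypothetical helper).

    gaussians:  dict of per-primitive tensors, each shaped (N, ...),
                e.g. {"means": (N, 3), "opacities": (N, 1), "scales": (N, 3)}
    importance: (N,) per-Gaussian importance score; here we assume the
                finetuned opacity serves as this score
    budget:     user-specified primitive count K
    """
    k = min(budget, importance.numel())
    _, keep = torch.topk(importance, k)          # indices of the top-K primitives
    return {name: t[keep] for name, t in gaussians.items()}

# Usage: trim a random set of N predicted Gaussians to a budget of K = 10_000.
N = 100_000
gaussians = {
    "means": torch.randn(N, 3),
    "opacities": torch.rand(N, 1),
    "scales": torch.rand(N, 3),
}
importance = gaussians["opacities"].squeeze(-1)  # opacity as importance proxy
compact = select_top_k_gaussians(gaussians, importance, budget=10_000)
assert compact["means"].shape[0] == 10_000
```

Because the selection is a single sort-and-gather over already-predicted primitives, it adds negligible cost on top of the feed-forward prediction and can be rerun for any new budget without retraining.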