GeoLRM: Geometry-Aware Large Reconstruction Model for High-Quality 3D Gaussian Generation

Chubin Zhang1,2, Hongliang Song2, Yi Wei1,
Yu Chen2, Jiwen Lu1, Yansong Tang1
1Tsinghua University    2Alibaba Group   
3D Gaussians generated by our GeoLRM method.

Abstract

In this work, we introduce the Geometry-Aware Large Reconstruction Model (GeoLRM), an approach that can predict high-quality assets with 512k Gaussians from 21 input images within only 11 GB of GPU memory. Previous works neglect the inherent sparsity of 3D structure and do not utilize explicit geometric relationships between 3D points and 2D images. This limits these methods to a low-resolution representation and makes it difficult to scale up to dense views for better quality. GeoLRM tackles these issues by incorporating a novel transformer structure that directly processes 3D points and uses cross-attention mechanisms to effectively integrate image features into 3D representations. We implement this solution through a two-stage pipeline: initially, a lightweight proposal network generates a sparse set of 3D anchor points from the posed image inputs; subsequently, a specialized reconstruction transformer refines the geometry and retrieves textural details. Extensive experimental results demonstrate that GeoLRM significantly outperforms existing models, especially for dense view inputs. We also demonstrate the practical applicability of our model with 3D generation tasks, showcasing its versatility and potential for broader adoption in real-world applications.


GeoLRM generates high-quality 3D Gaussians in a feed-forward manner. Notably, the output quality improves as the number of input views increases.


Method

The figure below summarizes our method. The process begins with the transformation of dense tokens into an occupancy grid via a Proposal Transformer, which captures spatial occupancy from hierarchical image features extracted using a combination of a convolutional layer and DINOv2. Sparse tokens representing occupied voxels are then processed by a Reconstruction Transformer, which employs self-attention and deformable cross-attention to refine the geometry and retrieve texture details via 3D-to-2D projection. Finally, the refined 3D tokens are converted into 3D Gaussians for real-time rendering.
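The two geometric steps above can be sketched in a few lines: sparsifying an occupancy grid into 3D anchor points, and projecting those anchors into an input view to locate where image features should be sampled for cross-attention. This is a minimal, hypothetical numpy illustration; the function names, grid resolution, and camera conventions are our assumptions, not the paper's implementation.

```python
import numpy as np

def occupancy_to_anchors(occupancy, grid_size=1.0):
    """Sparsify a dense binary occupancy grid into 3D anchor points
    (voxel centers), mimicking the output of the proposal stage.
    NOTE: illustrative only; not the authors' implementation."""
    idx = np.argwhere(occupancy)                     # (M, 3) occupied voxel indices
    res = occupancy.shape[0]
    # Map voxel indices to centers of a cube of side `grid_size` at the origin.
    return (idx + 0.5) / res * grid_size - grid_size / 2.0

def project_points(points, K, w2c):
    """Project 3D anchor points into an image plane with intrinsics K (3x3)
    and world-to-camera extrinsics w2c (3x4); returns pixel coords and depth.
    These projected locations are where deformable cross-attention would
    sample image features for each sparse token."""
    homo = np.concatenate([points, np.ones((len(points), 1))], axis=1)  # (M, 4)
    cam = homo @ w2c.T                               # (M, 3) camera-space points
    uvz = cam @ K.T                                  # apply intrinsics
    z = uvz[:, 2:3]
    uv = uvz[:, :2] / z                              # perspective divide
    return uv, z.squeeze(-1)

# Toy usage: a 4^3 grid with two occupied voxels and one camera 2 units back.
occ = np.zeros((4, 4, 4), dtype=bool)
occ[1, 2, 3] = occ[2, 2, 2] = True
anchors = occupancy_to_anchors(occ)

K = np.array([[100.0, 0.0, 64.0], [0.0, 100.0, 64.0], [0.0, 0.0, 1.0]])
w2c = np.hstack([np.eye(3), np.array([[0.0], [0.0], [2.0]])])
uv, depth = project_points(anchors, K, w2c)
print(anchors.shape, uv.shape)   # two anchors -> two 2D sampling locations
```

In the actual model each anchor is projected into every input view, which is why quality can keep improving as more views are added: each sparse token simply gathers features from more projected locations.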

Qualitative Comparison

We conducted a qualitative analysis comparing our method with several LRM-based baselines, including TripoSR, LGM, CRM, and InstantMesh. The results are shown below. Our method generates accurate geometry and high-quality textures.

Interactive Meshes

BibTeX


@article{zhang2024geolrm,
    title={GeoLRM: Geometry-Aware Large Reconstruction Model for High-Quality 3D Gaussian Generation},
    author={Chubin Zhang and Hongliang Song and Yi Wei and Yu Chen and Jiwen Lu and Yansong Tang},
    journal={arXiv preprint arXiv:2406.15333},
    year={2024}
}