Venue
GMP 2026
Abstract
We introduce Geometric Multigrid Neural Networks (GMNN), a novel network structure for geometric deep learning on point clouds and surfaces. Convolutional neural networks face a common challenge: how can relevant features be communicated over longer distances? Our architecture facilitates long-distance communication with Geometric Multigrid Convolution (GMC) blocks, which apply convolutions in parallel to features defined on each level of a multigrid representation of the surface and enable communication all the way up and down the hierarchy. We observe two major structural advantages of such a network. First, because each GMC operates on all levels of the multigrid hierarchy, even early stages can make use of coarse-scale information, and the receptive field grows rapidly with depth. Second, networks built with this backbone have the freedom to route information between different scales, including in ways not possible in other architectures. Because of these advantages, we find that a GMNN can combine the fast convergence of a shallow network with the greater expressiveness of a deeper, larger network. We build a GMNN from the components of a state-of-the-art U-Net, and find that on real tasks it can match or exceed the accuracy of the base network while using fewer epochs and roughly half the parameter count.
Tags
Cite
@article{campolattaro2026gmnn,
  author    = {Campolattaro, Jackson and Wiersma, Ruben and Hildebrandt, Klaus},
  title     = {Geometric Multigrid Neural Networks},
  year      = {2026},
  publisher = {Elsevier},
  journal   = {Computer Aided Geometric Design},
}