Revisiting Layer Normalization for Point Cloud Test Time Adaptation
Moslem Yazdanpanah · Ali Bahri · Mehrdad Noori · Sahar Dastani · Samuel Barbeau · David Osowiechi · Gustavo Vargas Hakim · Ismail Ben Ayed · Christian Desrosiers
Abstract
We analyze Layer Normalization (LN) from a domain (batch) perspective and explain why BatchNorm-style test-time fixes often fail on Transformer backbones. As feature dimension and batch size grow, the per-feature batch marginals after LN's pre-affine step concentrate at mean $\approx 0$ and variance $\approx 1$, making cross-batch re-standardization unnecessary and often harmful. This yields a simple rule: keep the pre-affine LN intact and adjust only the post-affine mean and gain. We instantiate this with \textbf{LN-TTA}, a backpropagation-free, source-free test-time adaptation method that performs a single forward pass and uniformly reparameterizes each LN layer. On three corrupted 3D point-cloud suites (ScanObjectNN-C, ModelNet40-C, ShapeNet-C), LN-TTA improves over Source-Only by $+12.35$, $+15.58$, and $+3.03$ points, surpasses backpropagation baselines (e.g., TENT), and sustains up to $93$ samples/s while being, on average, $39\times$ faster and $5\times$ more memory-efficient than the next-best backprop-free method. The implementation will be publicly available.
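The abstract does not spell out the reparameterization rule, so the following is only a minimal PyTorch sketch of the mechanics it describes: run a single forward pass on the unlabeled test batch, leave every `nn.LayerNorm`'s pre-affine standardization untouched, and uniformly rewrite only the post-affine parameters (weight = gain, bias = mean/shift). The function `ln_tta_reparameterize`, the hook-based statistics collection, and the placeholder `example_update` are all illustrative assumptions, not the published LN-TTA update.

```python
# Sketch under assumptions stated above: a PyTorch Transformer backbone whose
# normalization layers are nn.LayerNorm with elementwise affine parameters.
import torch
import torch.nn as nn
import torch.nn.functional as F


@torch.no_grad()
def ln_tta_reparameterize(model, test_batch, update_fn):
    """One forward pass, then adjust only each LN's post-affine weight/bias.

    The pre-affine standardization of every nn.LayerNorm is left intact;
    `update_fn(weight, bias, mu, sigma)` returns new (weight, bias) from the
    per-feature batch marginals (mu, sigma) of the pre-affine LN output.
    """
    stats, handles = {}, []

    def make_hook(name, ln):
        def hook(module, inputs, output):
            # Pre-affine (standardized) activations, i.e. LN without gamma/beta.
            z = F.layer_norm(inputs[0], ln.normalized_shape, eps=ln.eps)
            # Per-feature marginals over the batch (and token) dimensions.
            dims = tuple(range(z.dim() - len(ln.normalized_shape)))
            stats[name] = (z.mean(dims), z.std(dims, unbiased=False))
        return hook

    lns = {n: m for n, m in model.named_modules()
           if isinstance(m, nn.LayerNorm) and m.elementwise_affine}
    for name, ln in lns.items():
        handles.append(ln.register_forward_hook(make_hook(name, ln)))

    model.eval()
    model(test_batch)              # single forward pass on the unlabeled batch
    for h in handles:
        h.remove()

    for name, ln in lns.items():   # uniform rule applied to every LN layer
        mu, sigma = stats[name]
        new_w, new_b = update_fn(ln.weight, ln.bias, mu, sigma)
        ln.weight.copy_(new_w)
        ln.bias.copy_(new_b)


def example_update(weight, bias, mu, sigma, alpha=0.5):
    """Hypothetical placeholder, not the paper's rule: absorb a fraction of the
    residual per-feature drift of the pre-affine output into gain and bias."""
    new_weight = weight / (1.0 + alpha * (sigma - 1.0))
    new_bias = bias - alpha * new_weight * mu
    return new_weight, new_bias
```

With `alpha = 0` the placeholder update leaves the source model unchanged; in practice the paper's actual reparameterization would be substituted for `example_update`, while the surrounding scaffolding stays backpropagation-free and source-free as described.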