NeurIPS 2025 Spotlight

HypLoRA: Hyperbolic Fine Tuning
for Large Language Models

Geometry-guided, rank-reduced adaptation on the hyperbolic manifold

Menglin Yang (HKUST(GZ) & HKUST) · Ram Samarth B B (IISc Bangalore) · Aosong Feng (Yale University) · Bo Xiong (Stanford University) · Jiahong Liu (CUHK) · Irwin King (CUHK) · Rex Ying (Yale University)
Overview figure: token frequency trend and token norm relation.
Token embeddings exhibit hierarchical geometry: frequent (abstract) tokens cluster near the origin while rare (specific) tokens sit farther away.

TL;DR We find that LLM token embeddings have strong hyperbolic structure. Building on this, we propose HypLoRA, a parameter-efficient adapter that performs rank-reduced adaptation directly on the hyperbolic manifold, consistently improving reasoning performance over standard LoRA.

Motivation

Most LLM adaptation pipelines operate in Euclidean space by default, yet our empirical analysis reveals a fundamentally different geometric story. We find that token frequencies follow a power-law distribution, a hallmark of hierarchical data, and that frequent, abstract tokens consistently sit closer to the origin while rare, specific tokens lie farther away. At the prompt level, token spaces exhibit low δ-hyperbolicity, indicating an underlying tree-shaped organization rather than a flat Euclidean structure.
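To make the δ-hyperbolicity claim concrete, here is a minimal sketch (our own, not the paper's measurement code) of the Gromov four-point δ over a pairwise distance matrix; a small δ relative to the diameter indicates near-tree geometry. The function name `gromov_delta` is ours.

```python
import numpy as np
from itertools import combinations

def gromov_delta(D):
    """Worst-case Gromov four-point delta for a pairwise distance matrix D.

    For every 4-tuple of points, sort the three pairwise distance sums;
    delta is half the gap between the two largest. Tree metrics give 0."""
    best = 0.0
    for x, y, z, w in combinations(range(D.shape[0]), 4):
        s = sorted((D[x, y] + D[z, w], D[x, z] + D[y, w], D[x, w] + D[y, z]))
        best = max(best, (s[2] - s[1]) / 2.0)
    return best

# Points on a line form a tree metric, so delta is exactly zero.
D_line = np.abs(np.subtract.outer(np.arange(6.0), np.arange(6.0)))
print(gromov_delta(D_line))  # 0.0
```

In practice δ is normalized by the diameter of the point set; the paper's low-δ finding corresponds to small normalized values across instruction datasets.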

These observations motivate a natural question: if the geometry of token embeddings is already hierarchical, should adaptation modules explicitly preserve and exploit this structure rather than ignoring it? HypLoRA is our answer to this question.

Key Information

The two figures below summarize the core empirical signals that motivate manifold-based adaptation in HypLoRA.

Prompt level hyperbolicity measurements across datasets
Prompt Level Hyperbolicity
Across instruction datasets, prompts exhibit low δ-hyperbolicity, indicating tree-shaped geometry. This supports modeling token relationships in curved space rather than assuming a flat Euclidean structure.
Token frequency and embedding norm statistics
Frequency Norm Statistics
Token frequency follows a power-law trend, and frequent, abstract tokens appear closer to the origin while rarer, specific tokens sit farther away. This radial organization aligns with hierarchical encoding in hyperbolic space.
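As a hedged illustration of how the frequency–norm relation can be checked on any embedding matrix (the function name and the synthetic data below are ours, not the paper's pipeline):

```python
import numpy as np

def freq_norm_corr(emb, counts):
    """Pearson correlation between log token frequency and embedding norm.

    emb: (vocab, dim) embedding matrix; counts: corpus token counts.
    A strongly negative value matches the observation that frequent
    tokens lie near the origin."""
    norms = np.linalg.norm(emb, axis=1)
    return np.corrcoef(np.log(counts + 1.0), norms)[0, 1]

# Synthetic check: build embeddings whose norms shrink with log-frequency.
rng = np.random.default_rng(0)
counts = np.geomspace(1e6, 1.0, num=200)              # Zipf-like frequency decay
dirs = rng.normal(size=(200, 32))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
emb = dirs * (15.0 - np.log(counts + 1.0))[:, None]   # rarer tokens get larger norms
print(round(freq_norm_corr(emb, counts), 2))  # -1.0
```

On a real model one would pass the input embedding matrix and token counts from the training corpus; the paper reports a clear negative relation.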
Method
Method Overview

HypLoRA augments a standard LoRA update with a geometry-guided branch designed for hierarchical token structure. The key idea is to keep the base model's interface unchanged in Euclidean space while computing an additional rank-reduced correction in hyperbolic space, then mapping it back for seamless integration.

Concretely, each adapted layer follows a three-step flow: (1) project Euclidean hidden states to the Lorentz manifold, (2) apply a trainable rank-reduced transform directly on manifold coordinates (the Lorentz low-rank transform, LLR), and (3) map the result back and combine it with the original Euclidean pathway. This yields an adapter that preserves parameter efficiency while better respecting the curved geometry during training.

Core Idea

Instead of adapting only in Euclidean space, HypLoRA introduces a hyperbolic branch. Input representations are projected onto the Lorentz manifold, adapted via a direct rank-reduced transform, and projected back:

$$z_E = W x_E + \Pi^K_{\log}\!\bigl(\mathrm{LLR}(BA,\;\Pi^K_{\exp}(x_E))\bigr)$$
where LLR is the Lorentz low-rank transform applied directly to manifold representations, and $\Pi^K_{\exp}$, $\Pi^K_{\log}$ denote the exponential and logarithmic projections at curvature $K$.
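The update above can be sketched in code. This is a minimal numpy illustration under stated assumptions, not the released implementation: we fix unit negative curvature, place the exponential and logarithmic maps at the Lorentz origin, and simplify LLR to a low-rank map on spatial coordinates followed by re-projection onto the hyperboloid. Names like `hyplora_forward` are ours.

```python
import numpy as np

def exp_map0(u):
    """Lift a Euclidean vector (tangent at the Lorentz origin, curvature -1)
    onto the hyperboloid -t^2 + |s|^2 = -1."""
    n = np.linalg.norm(u) + 1e-9
    return np.concatenate([[np.cosh(n)], np.sinh(n) * u / n])

def log_map0(p):
    """Map a Lorentz point back to the tangent space at the origin."""
    s = p[1:]
    n = np.linalg.norm(s) + 1e-9
    return np.arccosh(np.clip(p[0], 1.0, None)) * s / n

def hyplora_forward(W, A, B, x):
    p = exp_map0(x)                          # (1) project to the Lorentz manifold
    s = B @ (A @ p[1:])                      # (2) rank-reduced transform (simplified LLR)
    p_new = np.concatenate([[np.sqrt(1.0 + s @ s)], s])  # stay on the hyperboloid
    return W @ x + log_map0(p_new)           # (3) map back, combine with Euclidean path

d, r = 8, 2
rng = np.random.default_rng(0)
W, A, B = rng.normal(size=(d, d)), rng.normal(size=(r, d)), rng.normal(size=(d, r))
z = hyplora_forward(W, A, B, rng.normal(size=d))  # z has shape (d,)
```

As in LoRA, only `A` and `B` (plus, in the paper, the curvature) would be trained, so the parameter budget matches a standard rank-r adapter.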
Why Not Naive Tangent Space Chaining?

A naive sequence of repeated log/exp mappings can cancel out geometric effects and collapse toward Euclidean behavior. HypLoRA avoids this by adapting directly on manifold coordinates before projecting back.
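A quick numerical illustration of the collapse (our own toy, assuming unit curvature and maps at the origin): wrapping each linear factor in a log/exp pair reduces to the plain Euclidean product, because the two maps invert each other.

```python
import numpy as np

def exp_map0(u):
    # Lift to the Lorentz hyperboloid (tangent at the origin, curvature -1).
    n = np.linalg.norm(u) + 1e-9
    return np.concatenate([[np.cosh(n)], np.sinh(n) * u / n])

def log_map0(p):
    # Inverse map back to the tangent space at the origin.
    s = p[1:]
    return np.arccosh(np.clip(p[0], 1.0, None)) * s / (np.linalg.norm(s) + 1e-9)

rng = np.random.default_rng(1)
A, B = rng.normal(size=(2, 4)), rng.normal(size=(4, 2))
x = rng.normal(size=4)

# "Hyperbolic" adapter built by naive tangent-space chaining ...
naive = log_map0(exp_map0(B @ log_map0(exp_map0(A @ log_map0(exp_map0(x))))))
# ... is numerically indistinguishable from plain Euclidean LoRA.
print(np.allclose(naive, B @ (A @ x), atol=1e-4))  # True
```

This is why HypLoRA applies the low-rank transform on manifold coordinates themselves rather than bouncing through the tangent space at every step.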

Design at a Glance
Geometry: Lorentz model
Adapter: direct LLR
Parameters: rank-reduced A, B
Curvature: learnable
Goal: preserve hierarchy
Experiments
Arithmetic Reasoning

Models are trained on Math10K and evaluated on GSM8K, AQuA, MAWPS, and SVAMP.

| Base Model | Method | Params % | MAWPS | SVAMP | GSM8K | AQuA | W. Avg |
|------------|---------|----------|-------|-------|-------|------|--------|
| LLaMA3-8B  | LoRA    | 0.70 | 92.7 | 78.9 | 70.8 | 30.4 | 71.9 |
| LLaMA3-8B  | HypLoRA | 0.70 | 91.6 | 80.5 | 74.0 | 34.2 | 74.2 |
| Gemma3-4B  | LoRA    | 1.04 | 90.8 | 77.3 | 72.3 | 50.8 | 73.7 |
| Gemma3-4B  | HypLoRA | 1.04 | 88.2 | 83.9 | 76.1 | 53.2 | 77.8 |
| Qwen2.5-7B | LoRA    | 0.71 | 90.8 | 84.4 | 78.6 | 68.1 | 80.8 |
| Qwen2.5-7B | HypLoRA | 0.71 | 91.2 | 92.2 | 87.9 | 71.6 | 88.3 |
Commonsense Reasoning

Models are trained on Commonsense170K and evaluated on eight commonsense benchmarks.

| Base Model | Method | Params % | BoolQ | PIQA | SIQA | HellaSwag | WinoGrande | ARC-e | ARC-c | OBQA | Avg |
|------------|---------|----------|-------|------|------|-----------|------------|-------|-------|------|-----|
| LLaMA3-8B  | LoRA    | 0.70 | 70.8 | 85.2 | 79.9 | 91.7 | 84.3 | 84.2 | 71.2 | 79.0 | 80.8 |
| LLaMA3-8B  | HypLoRA | 0.70 | 74.1 | 87.6 | 80.6 | 94.5 | 84.7 | 90.4 | 81.2 | 85.2 | 84.8 |
| Gemma3-4B  | LoRA    | 1.04 | 68.1 | 83.2 | 77.2 | 88.9 | 80.5 | 84.5 | 69.9 | 83.6 | 79.5 |
| Gemma3-4B  | HypLoRA | 1.04 | 70.0 | 84.3 | 79.2 | 91.5 | 80.3 | 89.1 | 75.9 | 86.4 | 82.5 |
| Qwen2.5-7B | LoRA    | 0.71 | 73.4 | 89.5 | 79.5 | 93.6 | 84.1 | 92.8 | 82.0 | 87.0 | 85.2 |
| Qwen2.5-7B | HypLoRA | 0.71 | 72.8 | 89.3 | 79.8 | 94.8 | 84.4 | 95.5 | 87.5 | 90.8 | 87.0 |
Citation
@inproceedings{yang2025hyplora,
  title     = {Hyperbolic Fine-Tuning for Large Language Models},
  author    = {Yang, Menglin and B B, Ram Samarth and Feng, Aosong
               and Xiong, Bo and Liu, Jiahong and King, Irwin and Ying, Rex},
  booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
  year      = {2025}
}

HypLoRA is not just "LoRA in another geometry." It is a geometry-guided adaptation strategy, motivated by measurable structure in token embeddings, that consistently improves reasoning while retaining practical parameter efficiency.