\documentclass[conference]{IEEEtran}
\IEEEoverridecommandlockouts
% The preceding line is only needed to identify funding in the first footnote. If that is unneeded, please comment it out.
\usepackage{cite} % recommended by IEEE
\usepackage{amsmath,amssymb,amsfonts}
\usepackage{algorithmic}
\usepackage{graphicx}
\usepackage{textcomp}
\usepackage{xcolor}
\usepackage{multirow}
\usepackage{array}
\usepackage{booktabs}
\usepackage{hyperref}
\bibliographystyle{IEEEtran} % key: use IEEEtran.bst
% Load the subfigure-related package
\usepackage{subcaption}
\def\BibTeX{{\rm B\kern-.05em{\sc i\kern-.025em b}\kern-.08em
T\kern-.1667em\lower.7ex\hbox{E}\kern-.125emX}}
\begin{document}
\title{GTR-Mamba: Geometry-to-Tangent Routing for Hyperbolic POI Recommendation
}
\author{
\IEEEauthorblockN{Zhuoxuan Li$^{1*}$, Jieyuan Pei$^{2*}$, Tangwei Ye$^{1}$,
Zhongyuan Lai$^{3}$, Zihan Liu$^{1}$, Fengyuan Xu$^{4}$, Qi Zhang$^{1}$, Liang Hu$^{1\dagger}$}
\IEEEauthorblockA{$^1$ College of Computer Science and Technology, Tongji University, Shanghai, China\\
$^2$ College of Information Engineering, Zhejiang University of Technology, Hangzhou, China\\
$^3$ Shanghai Ballsnow Intelligent Technology Co., Ltd., Shanghai, China\\
$^4$ Hunan University, Changsha, China}
\thanks{$*$ Equal contribution.}
\thanks{$\dagger$ Corresponding author: \href{mailto:rainmilk@gmail.com}{rainmilk@gmail.com}.}
\IEEEauthorblockA{li\_zhuoxuan@outlook.com;\; peijieyuan@zjut.edu.cn;\; yetw@tongji.edu.cn;\; zhongyuan.lai@ballsnow.com;\\ \{tongjilzh,xufengyuan126\}@gmail.com;\; zhangqi\_cs@tongji.edu.cn;\; rainmilk@gmail.com}
}
\maketitle
\begin{abstract}
Next Point-of-Interest (POI) recommendation is a critical task in modern Location-Based Social Networks (LBSNs), aiming to model the complex decision-making process of human mobility and provide personalized recommendations for a user's next check-in location. Existing approaches, predominantly based on Graph Neural Networks and sequential models, face a fundamental limitation: they struggle to simultaneously capture the inherent hierarchical structure of spatial choices and the dynamic, irregularly shifting temporal contexts of individual users. To overcome this limitation, we propose GTR-Mamba, a novel framework for cross-manifold conditioning and routing. GTR-Mamba exploits the distinct advantages of different mathematical spaces for different sub-tasks: it models the static, tree-like preference hierarchies in hyperbolic geometry, while routing the dynamic sequence updates to a novel Mamba layer in the computationally stable and efficient Euclidean tangent space. This process is coordinated by a cross-manifold channel that fuses spatio-temporal information to explicitly steer the State Space Model (SSM), enabling flexible adaptation to contextual changes. Extensive experiments on three real-world datasets demonstrate that GTR-Mamba consistently outperforms state-of-the-art baselines on next POI recommendation.
\end{abstract}
\begin{IEEEkeywords}
POI Recommendation, Hyperbolic Mamba, Hyperbolic Space
\end{IEEEkeywords}
\section{Introduction}
Point-of-Interest (POI) refers to a location that a user may find attractive or valuable. The proliferation of web-based location services and online social platforms \cite{wang2019sequential,sanchez2022point,yang2014modeling} has generated vast amounts of user-generated, geotagged content. This rich spatio-temporal data, such as check-ins and shared locations, makes it possible to predict places a user is likely to visit based on their preferences and contextual signals, giving rise to spatio-temporal data management research on the personalized prediction of a user's next check-in location. This task is inherently challenging, as it requires deciphering the complex interplay between users' hierarchical preferences and their dynamic, context-driven behaviors.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{intro.pdf}
\caption{The hierarchical structure of check-in data}
\label{task}
\end{figure}
Existing recommendation systems are often based on sequential methods that personalize the processing of contextual information to better capture user preferences \cite{baral2018caps,baral2018close,wang2019spent}. Tree-based methods have also been employed to model the hierarchical relationships between users and POIs \cite{lu2020glr,baral2018caps,chen2023dynamic}. Furthermore, given the powerful capability of Graph Neural Networks (GNNs) \cite{xu2023revisiting,li2021you,qin2023disenpoi} in integrating geographical information, they have been widely used for next POI recommendation. On the other hand, recent breakthroughs in structured state-space sequence (S4) models have brought significant efficiency improvements in sequence modeling; the Mamba variant of these S4 models, in particular, has gained much prominence in this respect \cite{qin2025geomamba,chen2024geomamba,jiang2026hierarchical}. However, these studies uniformly model user trajectories in Euclidean space, which struggles to effectively capture the inherent hierarchical, tree-like structures embedded in check-in behaviors. POIs are typically organized into hierarchies that implicitly encode both semantic and geographical relationships: they are often spatially localized and semantically structured within strict category trees. For example, a user at noon might first decide on the general category ``Dining,'' and then select a sub-category such as ``Chinese Food,'' ``Western Food,'' or ``Fast Food.'' Figure \ref{task} illustrates this implicit hierarchical structure within check-in behavior.
To better capture such implicit hierarchical relationships, some studies have explored embeddings in hyperbolic space. In contrast to the polynomial volume growth of Euclidean space, the volume of hyperbolic space grows exponentially, making it a natural fit for modeling hierarchical data. Studies that integrate hyperbolic geometry with rotation-based methods or variational graph autoencoders have shown promising results \cite{liu2025hyperbolic,qiao2025hyperbolic,feng2020hme}. Hence, hyperbolic space is naturally suited to describing the static hierarchical organization inherent in POIs. However, existing models fail to effectively capture switches between different contexts in real-world scenarios. A user's mobility patterns follow different rules under different circumstances. For instance, a user with a tight schedule on a weekday at noon is more likely to choose a nearby restaurant, whereas after work, at a more relaxed pace, their entertainment activities are more likely to be influenced by social connections.
To resolve this fundamental disconnect between static representation and dynamic shifts, we propose GTR-Mamba, a novel framework for cross-manifold conditioning and routing. We assign the distinct challenges of modeling static hierarchies and dynamic sequences to the mathematical structures best suited for them. For geometric representation, we leverage the exponential capacity of hyperbolic geometry to accommodate the tree-like structure of user preferences. Computationally, transitions between the Euclidean and hyperbolic manifolds are handled by a novel Mamba layer that updates its state in the more computationally efficient Euclidean tangent space, with its internal State Space Model (SSM) adaptively driven by exogenous spatio-temporal conditions. Specifically, a cross-manifold spatio-temporal fusion channel first encodes geographical context using spherical multi-scale Random Fourier Features and Radial Basis Functions, and temporal information using sine-cosine encoding. It then fuses information from the different manifolds and feeds it to a cross-manifold conditional routing Mamba layer. Concurrently, Euclidean temporal and geographical information explicitly drives the SSM to handle irregular context switches. This separation of static geometry and dynamic updates bypasses the need for complex on-manifold operations (such as the Möbius operations required by HMamba \cite{zhang2025hmamba}), endowing the framework with superior numerical stability and computational tractability compared to previous hyperbolic Mamba models.
Our contributions are summarized as follows:
\begin{itemize}
\item We propose a novel Mamba layer with cross-manifold conditioning and routing, leveraging the robust hierarchical representation capabilities of hyperbolic space to capture static preferences, while routing complex dynamic sequence updates to computationally stable and efficient Euclidean tangent spaces for execution.
\item We propose a context-explicit driven variable-step selective SSM, where internal dynamic state transitions adaptively adjust based on external spatiotemporal signals to address complex temporal and contextual shifts.
\item We introduce a cross-manifold spatiotemporal channel that integrates spatiotemporal contexts in Euclidean space with geometric embeddings in hyperbolic space, thereby bridging the informational advantages of distinct manifolds.
\item We conduct extensive experiments on three real-world LBSN datasets. The results confirm that GTR-Mamba delivers superior overall performance compared to state-of-the-art baseline methods.
\end{itemize}
%The remainder of this paper is organized as follows: Section \ref{sec:related_work} reviews existing research on POI recommendation and hyperbolic space. Section \ref{sec:p} provides the problem statement and preliminary descriptions. Section \ref{sec:m} introduces our proposed GTR-Mamba model. Section \ref{sec:e} reports the experimental procedures and analysis of results. Finally, Section \ref{sec:c} presents the conclusion.
\section{RELATED WORK}
\label{sec:related_work}
\subsection{Next POI Recommendation}
Next Point-of-Interest (POI) recommendation often relies on modeling the complex transitional and sequential patterns within users' historical check-ins. Leveraging the powerful modeling capabilities of deep neural networks, sequential models such as LSTM/RNN have been employed to treat the POI task as a sequence prediction problem \cite{wang2021reinforced,wu2020personalized,feng2018deepmove,liu2016predicting}. Concurrently, variants of the attention mechanism \cite{luo2021stan,xue2021mobtcast,zhang2022next,duan2023clsprec} have been widely adopted due to their ability to focus on more critical parts of historical spatio-temporal information, thereby integrating richer contextual representations. Graph Neural Networks (GNNs) \cite{wang2022learning,yan2023spatio,wang2023adaptive,li2021you,qin2023disenpoi,xu2023revisiting} have also achieved significant success by further modeling geographical dependencies through neighborhood aggregation. Notably, some research has already recognized the importance of hierarchical structures for the POI recommendation task, including the introduction of auxiliary information such as POI categories \cite{yu2020category,zhang2020modeling,zang2021cha} and geographical regions \cite{lian2020geography,xie2023hierarchical,lim2022hierarchical} to enhance recommendation performance. Furthermore, tree-based methods \cite{lu2020glr,baral2018caps,chen2023dynamic,huang2024learning} have also been proposed, as trees inherently possess a hierarchical structure.
Owing to Mamba's formidable long-sequence modeling capabilities, several Mamba-based methods have recently been introduced. For instance, Chen et al. \cite{chen2024geomamba} combine hierarchical geographical encoding with Mamba to achieve awareness of geographical sequences, while Qin et al. \cite{qin2025geomamba} utilize Mamba and the GaPPO operator to extend the state space for modeling multi-granularity spatio-temporal transitions. Although these methods achieve excellent results, they all operate in Euclidean space, where hierarchical structures cannot be well preserved.
\subsection{Hyperbolic Recommendation}
Owing to their structural suitability for capturing complex hierarchical patterns, hyperbolic learning techniques have been introduced into recommendation tasks \cite{sun2021hgcf}. Recent advancements include knowledge-aware recommendation \cite{chen2022modeling,du2022hakg}, social recommendation \cite{yang2023hyperbolic,wang2021hypersorec}, session-based recommendation \cite{guo2023hyperbolic}, news recommendation \cite{wang2023hdnr}, and POI recommendation. For example, collaborative filtering techniques have been combined with hyperbolic space \cite{li2022hyperbolic,yang2022hicf}, outperforming traditional collaborative filtering methods. In the domain of POI recommendation, HME \cite{feng2020hme} is a hyperbolic metric embedding method for next POI recommendation that incorporates users, items, regions, and categories into a single Poincaré ball model. Although existing studies can effectively model complex graph structures, they often focus on learning hyperbolic node representations while neglecting the rich transitional semantics in user mobility behavior. Qiao et al. \cite{qiao2025hyperbolic} proposed HMST, which utilizes hyperbolic rotations to jointly model hierarchical structures and multi-semantic transitions, but its capability for sequence modeling is limited. Liu et al. \cite{liu2025hyperbolic} introduced HVGAE, a novel framework combining hyperbolic graph convolutional networks, variational graph autoencoders, and rotational Mamba; however, its ability to perceive different contexts is relatively limited.
Furthermore, research has already begun to combine hyperbolic space with the Mamba model. The HMamba model \cite{zhang2025hmamba} integrates the linear efficiency of Mamba with hyperbolic space for sequential feature extraction. However, it introduces additional Möbius operations, resulting in higher computational complexity compared to standard Mamba in Euclidean space. In contrast, our hyperbolic GTR-Mamba not only preserves the computational efficiency of Euclidean Mamba through a geometric-to-tangent-space transformation pathway but also proposes a powerful exogenous driving mechanism to adapt to diverse and complex recommendation scenarios.
\section{PRELIMINARY}
\label{sec:p}
\subsection{Basic Definition}
Let $\mathcal{U}$, $\mathcal{P}$, $\mathcal{C}$, and $\mathcal{R}$ be the sets of users, POIs, categories, and regions, respectively, where $|\mathcal{P}|$ is the total number of POIs. Each POI is associated with location information, represented by geographical coordinates, and category information that reflects its function. The regions are constructed by partitioning the entire geographical area based on the collected coordinates, which determines the region to which each POI belongs.
A check-in, denoted as $s = (u, p, t, c, r)$, records the event of a user $u \in \mathcal{U}$ visiting a specific POI $p \in \mathcal{P}$ at a timestamp $t$. Here, $c \in \mathcal{C}$ and $r \in \mathcal{R}$ represent the category and region of POI $p$, respectively. We represent the check-in sequence of a user $u$ as $\mathcal{S}_u = \{s_1, s_2, \dots, s_{l_u}\}$, where $s_i$ is the $i$-th check-in of user $u$, and $l_u$ is the length of the sequence $\mathcal{S}_u$.
Given a user's historical check-in sequence $\mathcal{S}_u$, the objective of next POI recommendation is to predict the POI $p_{l_u+1}$ that the user $u$ is most likely to visit next.
\subsection{Hyperbolic Space}
Let $\mathbb{H}^n_c$ denote an $n$-dimensional hyperbolic space with negative curvature $-1/c < 0$, where $c > 0$. In this paper, we adopt the Lorentz model embedded in $\mathbb{R}^{n+1}$. The $n$-dimensional Lorentz model is defined as a Riemannian manifold with constant negative curvature $-1/c$: $\mathbb{L}_c^n = (\mathcal{H}_c^n, g_x^c),$
where $\mathcal{H}^n_c=\{\,x\in\mathbb{R}^{n+1}:\ \langle x,x\rangle_L=-c,\ x_0>0\,\},\qquad
g_x^c(u,v)=\langle u,v\rangle_L.$
We adopt the convention where the time coordinate is first: $x = (x_0, x_1, \dots, x_n)$, with $x_0$ being the temporal component. The Lorentzian inner product $\langle \cdot, \cdot \rangle_L$ is given by
$\langle x, y \rangle_L = -x_0 y_0 + \sum_{i=1}^{n} x_i y_i.$
The commonly used squared Lorentz distance \cite{law2019lorentzian} is defined as
\begin{equation}
d_L^2(x, y) := -2c - 2 \langle x, y \rangle_L.
\end{equation}
This distance metric captures the hyperbolic geometry and is effective for representing hierarchical relationships.
For any point $x \in \mathcal{H}_c^n$, there exists an $n$-dimensional vector space $T_x \mathbb{H}_c^n$, known as the tangent space at $x$ \cite{peng2021hyperbolic}. The exponential and logarithmic maps are used to transform between the tangent space and the manifold \cite{ganea2018hyperbolic}. Let the origin $o \in \mathcal{H}_c^n$ be defined as $o = (\sqrt{c}, 0, \ldots, 0).$
Let $\|v\|_L = \sqrt{\langle v, v \rangle_L}$ (in the tangent space, this norm is real and non-negative). The exponential and logarithmic maps based at the origin $o$ are then given as follows (uniformly using $c$ for the curvature parameter):
\begin{equation}
\exp_o(v) = \cosh\left(\frac{\|v\|_L}{\sqrt{c}}\right) o + \sqrt{c} \sinh\left(\frac{\|v\|_L}{\sqrt{c}}\right) \frac{v}{\|v\|_L},
\end{equation}
\begin{equation}
\log_o(x) = \frac{\operatorname{arccosh}\left(-\frac{\langle o, x \rangle_L}{c}\right)}{\left\|x + \frac{\langle o, x \rangle_L}{c} o\right\|_L} \left(x + \frac{\langle o, x \rangle_L}{c} o\right).
\end{equation}
The tangent space routing employed in the GTR-Mamba framework utilizes the local Euclidean tangent space approximation within hyperbolic geometry to perform efficient updates while preserving the hierarchical structure inherent to hyperbolic embeddings. Prior research \cite{chami2019hyperbolic,ganea2018hyperbolic,chen2021fully} has shown that switching hierarchical representations between hyperbolic space and the tangent space via exponential and logarithmic maps preserves representational properties with minimal distortion under localized operations, providing a robust foundation for the effectiveness of our geometry-to-tangent routing mechanism in GTR-Mamba.
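To make the maps concrete, the following minimal NumPy sketch (function names are ours, chosen for illustration; the curvature parameter is fixed to $c=1$, the convention adopted later in the paper) implements the Lorentzian inner product, the squared Lorentz distance of Eq.~(1), and the origin-based exponential and logarithmic maps, and can be used to verify that $\exp_o$ lands on the hyperboloid and that $\log_o$ inverts it:

```python
import numpy as np

C = 1.0  # curvature parameter c (the paper later fixes c = 1)

def lorentz_inner(x, y):
    """Lorentzian inner product <x, y>_L = -x0*y0 + sum_i xi*yi."""
    return -x[0] * y[0] + np.dot(x[1:], y[1:])

def sq_lorentz_dist(x, y):
    """Squared Lorentz distance d_L^2(x, y) = -2c - 2<x, y>_L."""
    return -2.0 * C - 2.0 * lorentz_inner(x, y)

def exp_o(v):
    """Exponential map at the origin o = (sqrt(c), 0, ..., 0).

    A tangent vector v at o has v[0] = 0, so its Lorentz norm
    reduces to the Euclidean norm of its spatial part.
    """
    o = np.zeros_like(v); o[0] = np.sqrt(C)
    n = np.linalg.norm(v[1:])
    if n < 1e-12:
        return o
    return (np.cosh(n / np.sqrt(C)) * o
            + np.sqrt(C) * np.sinh(n / np.sqrt(C)) * v / n)

def log_o(x):
    """Logarithmic map at the origin (inverse of exp_o)."""
    o = np.zeros_like(x); o[0] = np.sqrt(C)
    inner = lorentz_inner(o, x)      # equals -sqrt(c) * x0
    u = x + (inner / C) * o          # tangent at o: u[0] == 0
    n = np.linalg.norm(u[1:])
    if n < 1e-12:
        return np.zeros_like(x)
    return np.arccosh(-inner / C) * u / n
```

Round-tripping a tangent vector through `exp_o` and then `log_o` recovers it up to floating-point error, which is exactly the low-distortion property the geometry-to-tangent routing relies on.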
To achieve additive compositions in non-Euclidean space, we employ gyrovector (Möbius) algebra. In the equivalent Poincaré ball model,
$\mathbb{D}_c^n = \{x \in \mathbb{R}^n: c\|x\|^2 < 1\},$ where $\langle x, y \rangle $ is the Euclidean inner product and $\|x\| = \sqrt{\langle x, x \rangle}$, the Möbius addition is defined as:
\begin{equation}
x \oplus_c y = \frac{(1 + 2c\langle x, y \rangle + c\|y\|^2)x + (1 - c\|x\|^2)y}{1 + 2c\langle x, y \rangle + c^2\|x\|^2\|y\|^2}.
\end{equation}
When residual connections \cite{he2016deep} are required in the Lorentz model, we use the equivalent Möbius addition. This operation is utilized for both semantic composition and inter-layer residuals in this paper.
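As a quick sanity check of the Möbius addition above, the following NumPy sketch (an illustrative fragment with names of our choosing) implements the formula and confirms two gyrogroup identities: the origin acts as an identity element, $(-x) \oplus_c x = 0$, and the result stays inside the ball:

```python
import numpy as np

C = 1.0  # curvature parameter c

def mobius_add(x, y):
    """Mobius addition x (+)_c y in the Poincare ball D^n_c."""
    xy = np.dot(x, y)
    x2 = np.dot(x, x)
    y2 = np.dot(y, y)
    num = (1 + 2 * C * xy + C * y2) * x + (1 - C * x2) * y
    den = 1 + 2 * C * xy + C**2 * x2 * y2
    return num / den
```

Note that, unlike Euclidean addition, Möbius addition is neither commutative nor associative in general, which is why a fixed composition order must be chosen for residual connections.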
In the subsequent derivations and implementation in this paper, we set the curvature parameter $c=1$. Following prior work \cite{liu2025hyperbolic,chami2019hyperbolic}, we consistently use $o = (1, 0, \ldots, 0)$ as the reference point and employ $\exp_o(\cdot)$ and $\log_o(\cdot)$ for transformations between the manifold $\mathbb{H}^n$ and the tangent space $T_o\mathbb{H}^n$.
\subsection{State Space Models}
State Space Model (SSM) is a mathematical framework for describing a dynamical system \cite{hamilton1994state}. It characterizes the system's dynamic behavior through state variables and is fundamentally composed of a state equation and an observation equation.
For a continuous-time system, the state equation is given by:
\begin{equation}
\dot{x}(t) = Ax(t) + Bu(t),
\end{equation}
where $x(t)$ is the state vector, $\dot{x}(t)$ represents the rate of change of the state, $A$ is the state matrix describing the dynamics between states, and $B$ is the input matrix reflecting the influence of the input $u(t)$ on the state. The observation equation is:
\begin{equation}
y(t) = Cx(t) + Du(t),
\end{equation}
where $y(t)$ is the output vector, $C$ is the output matrix that maps the state to the output, and $D$ is the feedthrough (or direct transmission) matrix describing the direct effect of the input on the output. Typically, $D$ is set to $0$.
In practical applications, the continuous-time model must be discretized for digital processing. Assuming the input is held constant over a sampling period $T_s$ (a zero-order hold), the discretized state equation is:
\begin{equation}
x[k+1] = A_d x[k] + B_d u[k],
\end{equation}
and the discretized observation equation is:
\begin{equation}
y[k] = C_d x[k] + D_d u[k].
\end{equation}
The discretization process is based on the solution to the continuous state equation:
\begin{equation}
x(t) = e^{A(t-t_0)} x(t_0) + \int_{t_0}^{t} e^{A(t-\tau)} B u(\tau) d\tau.
\end{equation}
At the sampling instant $t = kT_s$, we can derive the discrete-time matrices. The state transition matrix is $A_d = e^{AT_s}$, and the input matrix is $B_d = \left( \int_0^{T_s} e^{A\tau} d\tau \right) B$. If the matrix $A$ is invertible, $B_d$ can be simplified to $B_d = A^{-1} (e^{AT_s} - I) B$. The observation matrix and the feedthrough matrix typically remain unchanged, i.e., $C_d = C$ and $D_d = D$.
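For the diagonal state matrices used in S4/Mamba-style models, these formulas reduce to elementwise expressions. The following sketch (our own illustration, with hypothetical function names) discretizes a stable scalar system under zero-order hold and checks that the discrete recurrence reproduces the exact continuous solution at the sampling instants, a defining property of ZOH discretization:

```python
import numpy as np

def discretize_zoh(a_diag, B, Ts):
    """Zero-order-hold discretization for A = diag(a_diag).

    A_d = exp(A*Ts) and B_d = A^{-1} (exp(A*Ts) - I) B, evaluated
    elementwise since A is diagonal (and invertible: a_i != 0).
    """
    Ad = np.exp(a_diag * Ts)
    Bd = ((Ad - 1.0) / a_diag)[:, None] * B
    return Ad, Bd

def run_ssm(a_diag, B, Cmat, u_seq, Ts):
    """Discrete recurrence x[k+1] = A_d x[k] + B_d u[k], y = C x."""
    Ad, Bd = discretize_zoh(a_diag, B, Ts)
    x = np.zeros(len(a_diag))
    ys = []
    for u in u_seq:
        x = Ad * x + Bd @ np.atleast_1d(u)   # elementwise A_d for diagonal A
        ys.append(float(Cmat @ x))
    return np.array(ys)
```

For $\dot{x} = -x + u$ with constant input $u = 1$ and $x(0) = 0$, the continuous solution is $x(t) = 1 - e^{-t}$, and the discrete state matches it exactly at every sample.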
\section{THE PROPOSED MODEL}
\label{sec:m}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{pipeline.pdf}
\caption{The overall framework of our proposed GTR-Mamba}
\label{GTR}
\end{figure*}
\subsection{Overview}
Figure \ref{GTR} illustrates the overall framework of our proposed GTR-Mamba model, which comprises four main components. First, we obtain hyperbolic embeddings for users, POIs, categories, and regions by leveraging the interaction relationships between users and POIs, as well as among POIs themselves. Second, we employ a cross-manifold spatio-temporal fusion channel. At the Euclidean level, this channel encodes the geographical context using spherical multi-scale Random Fourier Features (RFF) and Radial Basis Functions (RBF), while temporal information is encoded using multi-frequency sine-cosine functions. Subsequently, we utilize multi-head attention to fuse the hyperbolic representations with the Euclidean representations. The resulting fused trajectory representations, now imbued with corresponding semantics, are fed into a cross-manifold conditioning and routing Mamba layer. Concurrently, the Euclidean information is used to explicitly drive the State Space Model (SSM). Finally, we perform prediction by scoring from two pathways: one from the hyperbolic space and the other from the tangent space.
\subsection{Initialize Embeddings}
\label{emb}
Inspired by prior research \cite{qiao2025hyperbolic}, we first pre-train representations for users, POIs, categories, and regions on the Lorentz manifold. Specifically, we begin by randomly initializing a learnable hyperbolic embedding for each entity on the manifold, sampled from a Lorentz Normal distribution, i.e., $x \sim \text{LorentzNormal}$. Subsequently, we construct edges to represent the relationships between entities observed in the historical check-in records: \textbf{User-POI Edges ($e_{u, p} := (u, p)$)}: If a user $u$ has visited a POI $p$, a user-POI edge $(u, p)$ is created to represent the interaction between them. This interaction reflects the user's preferences. \textbf{POI-POI Edges ($e_{p_1, p_2} := (p_1, p_2)$)}: A clear sequential pattern emerges between two locations, $p_1$ and $p_2$, that a user visits within a six-hour window \cite{feng2020hme}. In this manner, we can extract all one-hop transition relationships. \textbf{Category and Region Edges}: Based on the transitional relationships between POIs, we can derive the complete sets of category-category and region-region transitional relationships according to the categories and regions the POIs belong to.
For relationship modeling, to align embeddings across different semantics, we introduce an isometric rotation operation. The rotation matrix $Rot$ is parameterized by learnable cosine-sine pairs. For a 2D example:
\begin{equation}
Rot = \begin{pmatrix} \cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \end{pmatrix},
\end{equation}
which is extended to a block-diagonal form in higher dimensions. This operation corresponds to an isometry that preserves the Lorentzian inner product. For a source embedding $\text{frs}$, the rotation is applied only to the spatial components: $\text{frs}' = \text{proj}_{\mathcal{H}}(Rot(\text{frs}))$, where $\text{proj}_{\mathcal{H}}$ denotes the projection back to the hyperboloid. Our objective is for the source entity to be rotated into a direction that better aligns with its target. Specifically, we aim to bring users closer to the POIs they have interacted with and, similarly, to reduce the distance between pairs of POIs, categories, and regions that exhibit transitional relationships in the manifold. Therefore, we employ the following similarity score for unsupervised learning across all edge types $t \in \{up, pp, cc, rr\}$ (representing user-POI, POI-POI, category-category, and region-region edges, respectively):
\begin{equation*}
s_t(x, y) = -\max(d_L^2(x, y), 0),
\end{equation*}
where $x$ and $y$ are the embeddings of the source and target entities for edge type $t$. A higher score indicates that the points are closer in hyperbolic space. We adopt a negative sampling technique, maximizing the scores of positive samples while minimizing those of negative samples through contrastive learning. For an observed edge $(\text{frs}_t, \text{tos}_t)$ of type $t$, the positive sample score is:
\begin{equation}
\text{s}_{\text{pos}, t} = s_t(\text{frs}_t', \text{tos}_t) + b_{\text{frs}_t} + b_{\text{tos}_t},
\end{equation}
where $\text{frs}_t'$ is the rotated source embedding, and $b_{\text{frs}_t}$ and $b_{\text{tos}_t}$ are learnable biases capturing inherent preferences for the source and target entities of edge type $t$. The negative sample score is calculated similarly:
\begin{equation}
\text{s}_{\text{neg}, t} = s_t(\text{frs}_t', \text{neg}_t) + b_{\text{frs}_t} + b_{\text{neg}_t},
\end{equation}
where $\text{neg}_t$ is a negative sample for edge type $t$.
The edge loss, incorporating all edge types $t \in \{up, pp, cc, rr\}$, is defined as:
\begin{equation}
\mathcal{L} = -\sum_{t \in \{up, pp, cc, rr\}} \Big( \sum_{\text{pos}} \log \sigma(\text{s}_{\text{pos}, t}) + \sum_{\text{neg}} \log \sigma(-\text{s}_{\text{neg}, t}) \Big),
\end{equation}
where $\sigma(\cdot)$ is the sigmoid function. This formulation enables multi-semantic learning across user-POI, POI-POI, category-category, and region-region relationships.
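The rotation-and-scoring step can be sketched numerically as follows (an illustrative NumPy fragment with hypothetical helper names, fixing $c=1$): the block-diagonal rotation acts on the spatial coordinates only, which preserves the Lorentzian inner product and hence keeps points on the hyperboloid, and the similarity score is the negated squared Lorentz distance clamped at zero:

```python
import numpy as np

C = 1.0  # curvature parameter c

def lorentz_inner(x, y):
    return -x[0] * y[0] + np.dot(x[1:], y[1:])

def proj_hyperboloid(x):
    """Recompute the time coordinate so that <x, x>_L = -c, x0 > 0."""
    x = x.copy()
    x[0] = np.sqrt(C + np.dot(x[1:], x[1:]))
    return x

def rotate_spatial(x, thetas):
    """Block-diagonal 2x2 rotations on the spatial part (a Lorentz isometry).

    thetas holds one learnable angle per consecutive spatial coordinate pair.
    """
    x = x.copy()
    for i, th in enumerate(thetas):
        a, b = x[1 + 2 * i], x[2 + 2 * i]
        x[1 + 2 * i] = np.cos(th) * a - np.sin(th) * b
        x[2 + 2 * i] = np.sin(th) * a + np.cos(th) * b
    return x

def score(x, y):
    """Similarity score s_t(x, y) = -max(d_L^2(x, y), 0)."""
    return -max(-2.0 * C - 2.0 * lorentz_inner(x, y), 0.0)
```

In training, the score of a rotated source against its true target is pushed up, and against a negative sample pushed down, via the sigmoid-based contrastive loss above.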
After obtaining the hyperbolic embeddings $\text{E}_p, \text{E}_c, \text{E}_u, \text{E}_r$ for each entity, we fuse the multi-modal representations for semantic composition in the tangent space. First, we project the embeddings of each entity in a trajectory onto the tangent space at the origin using the logarithmic map. We then compute a semantic vector by taking a weighted combination of the tangent vectors:
\begin{equation}
\text{v}_{\text{s}} = \alpha_u \log_o(\text{E}_u) + \alpha_p \log_o(\text{E}_p) + \alpha_c \log_o(\text{E}_c) + \alpha_r\log_o(\text{E}_r),
\end{equation}
where $\text{v}_{\text{s}} \in \mathbb{R}^{L \times d}$ for a sequence of length $L$, $\alpha_u, \alpha_p, \alpha_c$, and $\alpha_r$ are hyperparameters. We set these weights to strategically prioritize objectives during semantic fusion. Finally, this composite tangent vector is mapped back to the manifold using the exponential map.
\subsection{Spatio-temporal Channel}
Although we have obtained hyperbolic representations for the entities, information from the geographical and temporal domains is still missing. Directly incorporating Euclidean spatio-temporal information into a negatively curved space would introduce unnecessary geometric bias and corrupt its inherent linear properties. Therefore, it is necessary to encode spatio-temporal information from a Euclidean perspective to serve as an exogenous linear driver for the State Space Model (SSM).
\subsubsection{Geographic Embedding Module}
Our geographical embedding module processes latitude and longitude inputs by combining Random Fourier Features (RFF) and Radial Basis Functions (RBF) to generate a high-dimensional representation of geographical features. This dual-kernel hybrid approach leverages the global smoothness of RFF and the local sensitivity of RBF to adapt to the complex spatial patterns found in trajectory data.
First, we map the latitude and longitude coordinates to unit vectors on a sphere. By sampling a multi-scale Gaussian frequency matrix, we construct multi-scale harmonic feature vectors, which are then projected to obtain global multi-scale geographical features, denoted as $\text{rff}_{\text{proj}}$. Subsequently, we place a set of anchor points on the sphere and compute a Gaussian kernel based on the arc length distance to these anchors. The Top-K responses are selected and projected to yield local prototype features, denoted as $\text{rbf}_{\text{proj}}$.
Finally, we fuse the features from these two kernel-based encodings. We first define the weight matrix $w \in \mathbb{R}^{L \times 2}$ for a sequence of length $L$, where the two columns correspond to the weights $w_1$ and $w_2$, i.e., $w = [w_1, w_2]$, with $w_i \in \mathbb{R}^L$ for $i \in \{1, 2\}$. These weights are dynamically computed as:
\begin{equation}
w = \text{softmax}(\text{Linear}(\text{Concat}[\text{rff}_{\text{proj}}; \text{rbf}_{\text{proj}}])),
\end{equation}
where the linear layer outputs a 2-dimensional vector, which is then normalized via a softmax function to produce $w_1$ and $w_2$. The fusion is performed as:
\begin{equation}
\text{E}_{g} = \text{Linear}(\text{rff}_{\text{proj}} \cdot w_1 + \text{rbf}_{\text{proj}} \cdot w_2),
\end{equation}
where $\text{E}_{g} \in \mathbb{R}^{L \times d_{\text{geo}}}$ represents the final features projected to a dimension of $d_{\text{geo}}$. This fusion mechanism dynamically balances global and local features, allowing for adaptation to different geographical contexts.
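The following sketch takes the two branch outputs $\text{rff}_{\text{proj}}$ and $\text{rbf}_{\text{proj}}$ as given (their construction involves learned projections) and illustrates only the unit-sphere mapping of coordinates and the softmax-gated fusion; the gate logits here stand in for the linear layer over the concatenated features:

```python
import math

def sphere_unit(lat_deg, lon_deg):
    # Map (latitude, longitude) in degrees to a unit vector on the sphere.
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    return (math.cos(lat) * math.cos(lon),
            math.cos(lat) * math.sin(lon),
            math.sin(lat))

def gated_fusion(rff_proj, rbf_proj, logits):
    # Softmax over the two branch logits, then a weighted sum of features.
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    w1, w2 = exps[0] / z, exps[1] / z
    return [w1 * a + w2 * b for a, b in zip(rff_proj, rbf_proj)]
```

With equal logits the gate reduces to a plain average of the global and local branches; the learned linear layer lets the balance shift per check-in.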
\subsubsection{Temporal Feature Module}
The temporal feature module is responsible for generating temporal representations and a decay factor, which are used to modulate the dynamic evolution of the SSM.
We first extract the time interval $\Delta t \in \mathbb{R}^{L \times 1}$, the day of the week ($\text{dow}$), and the hour of the day ($\text{hour}$) from the time series. These are then fused into a feature vector:
\begin{equation}
\text{E}_t = \text{Concat}[\Delta t; \sin(\omega \Delta t); \cos(\omega \Delta t); \text{OH}(\text{dow}); \text{OH}(\text{hour})],
\end{equation}
where $\omega \in \mathbb{R}^{M}$ is a vector of logarithmically spaced frequencies ($M$ being the number of frequencies), $\text{E}_{t} \in \mathbb{R}^{L \times d_{\text{time}}}$, and $\text{OH}(\text{dow})$ and $\text{OH}(\text{hour})$ are 7-dimensional and 24-dimensional one-hot encodings, respectively. This design captures both the periodicity and long-term trends of temporal data.
The feature vector is then projected to a dimension of $d_{\text{time}}$, and a decay factor is computed via a gated mechanism:
\begin{equation}
\gamma_t = \text{sigmoid}(\text{E}_t \cdot \text{w}_{\text{gate}}),
\end{equation}
where $\gamma_t \in \mathbb{R}^{L \times 1}$ and $\text{w}_{\text{gate}} \in \mathbb{R}^{d_{\text{time}}}$ is a learnable weight vector. This decay factor $\gamma_t$ modulates the step size of the SSM, simulating the influence of time intervals on the trajectory dynamics.
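The feature construction and the gate can be sketched per time step as follows (in the model the gate is applied after projection to $d_{\text{time}}$; here, for brevity, a hypothetical weight vector acts on the raw features):

```python
import math

def temporal_features(dt, dow, hour, freqs):
    # [dt ; sin(w*dt) ; cos(w*dt) ; one-hot(dow) ; one-hot(hour)]
    feat = [dt]
    feat += [math.sin(w * dt) for w in freqs]
    feat += [math.cos(w * dt) for w in freqs]
    feat += [1.0 if i == dow else 0.0 for i in range(7)]
    feat += [1.0 if i == hour else 0.0 for i in range(24)]
    return feat

def decay_gate(e_t, w_gate):
    # gamma_t = sigmoid(E_t . w_gate), a scalar decay per step in (0, 1).
    s = sum(a * b for a, b in zip(e_t, w_gate))
    return 1.0 / (1.0 + math.exp(-s))
```

The resulting feature length is $1 + 2M + 7 + 24$, and $\gamma_t$ always lies strictly between 0 and 1, so it can safely scale the SSM step size.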
\subsubsection{Cross-Manifold Attention}
Recalling that we have obtained the hyperbolic semantic vector $\text{v}_{\text{s}}$ and the Euclidean context vector $\text{u}_c = \text{Concat}[\text{E}_{g}; \text{E}_t] \in \mathbb{R}^{L \times (d_{\text{geo}} + d_{\text{time}})}$, we require a cross-manifold fusion method to generate an enhanced trajectory representation. Given the outstanding performance of multi-head attention in previous research \cite{vaswani2017attention}, we employ a hyperbolic cross-manifold attention mechanism here.
The attention scores are first computed in the tangent space:
\begin{equation}
\text{score}_{att} = \frac{\text{q} \cdot \text{k}^T}{\sqrt{d_{\text{head}}}} \in \mathbb{R}^{L \times L},
\end{equation}
where the query, key, and value are defined as: $\text{q} = \text{Linear}(\text{v}_{\text{s}}), \quad \text{k} = \text{Linear}(\text{u}_c), \quad \text{v} = \text{Linear}(\text{u}_c).$
Here, $d_{\text{head}}$ is the dimension of each attention head. The attention output is then computed in Euclidean space:
\begin{equation}
\text{out}_{att} = \text{softmax}(\text{score}_{att}) \cdot \text{v}.
\end{equation}
The resulting vector $\text{out}_{\text{att}}$ is then re-projected back to the manifold and fused with the original semantic vector $\text{v}_{\text{s}}$ via Möbius addition to produce the final enhanced representation $\text{q}_t \in \mathbb{H}^{L \times d}$:
\begin{equation}
q_t = \text{v}_{\text{s}} \oplus_c \exp_o(\text{Linear}(\text{out}_{att})).
\end{equation}
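The attention core can be sketched for a single head as below; the learned linear projections, the multi-head split, and the M\"obius re-projection of the output are omitted, so this shows only the tangent-space score and softmax aggregation:

```python
import math

def attention(q, k, v, d_head):
    # score_ij = <q_i, k_j> / sqrt(d_head); out_i = softmax_j(score_i) . v_j
    L = len(q)
    out = []
    for i in range(L):
        scores = [sum(a * b for a, b in zip(q[i], k[j])) / math.sqrt(d_head)
                  for j in range(L)]
        m = max(scores)  # subtract the max for numerical stability
        w = [math.exp(s - m) for s in scores]
        z = sum(w)
        out.append([sum(w[j] / z * v[j][t] for j in range(L))
                    for t in range(len(v[0]))])
    return out
```

When all keys are identical, the weights are uniform and each output is simply the mean of the values, which is a useful sanity check.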
\subsection{GTR-Mamba Layer}
To address the challenges of sequential trajectory modeling, we adopt the Mamba framework \cite{gu2024mamba}. Its selective scanning mechanism, enabled by dynamic step sizes and input gating, adaptively captures variations in temporal intervals and external spatiotemporal contexts, which is critical for handling the heterogeneity of trajectory data. Furthermore, Mamba’s linear recurrent computation facilitates efficient dynamic updates in the tangent space, enhancing its suitability for such tasks.
For enhanced stability and computational efficiency, we employ a fixed diagonal matrix:
\begin{equation}
\text{A} = -\text{diag}(\log(1), \log(2), \dots, \log(d)).
\end{equation}
This structure also naturally accommodates the multi-time-scale characteristics of trajectory data, ranging from short-term frequent check-ins to long-term behavioral patterns. To compensate for the limitation of a diagonal $\text{A}$, which lacks cross-channel coupling, we implement dynamic channel modulation on the input side through context-driven selective gating.
The step size $\Delta t$ is dynamically generated based on the Euclidean features $\text{u}_c$:
\begin{equation}
\Delta t = (\text{A}_{\text{proj}}(\text{u}_c) \cdot \text{dt}_{\text{weight}} + \text{dt}_{\text{bias}}) \cdot \gamma_t,
\end{equation}
where $\text{A}_{\text{proj}}: \mathbb{R}^{d_{\text{geo}} + d_{\text{time}}} \to \mathbb{R}^d$. Here, $\text{dt}_{\text{weight}}$ is a learnable vector, $\text{dt}_{\text{bias}}$ is a learnable bias, and $\gamma_t$ is the decay factor obtained from the temporal encoding module. This amplifies the input during periods of high contextual relevance (i.e., short-interval scenarios), thereby preserving more detail.
Subsequently, we discretize the continuous SSM. The state transition matrix $\bar{\text{A}} \in \mathbb{R}^{L \times d}$ is:
\begin{equation}
\bar{\text{A}} = \exp(\Delta t \cdot \text{A}),
\end{equation}
and the input matrix $\bar{\text{B}}$ is:
\begin{equation}
\bar{\text{B}} = (\exp(\Delta t \text{A}) - \text{I}) \text{A}^{-1}.
\end{equation}
where $\bar{\text{B}} \in \mathbb{R}^{L\times d}$. To ensure numerical stability during discretization, a Taylor expansion approximation is employed for diagonal elements of $\text{A}$ approaching zero (e.g., $\log(1) = 0$), thereby circumventing division-by-zero errors in the computation of $\bar{\text{B}}$.
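For a diagonal $\text{A}$, the zero-order-hold discretization reduces to an element-wise computation; the sketch below (threshold `eps` is an illustrative choice) shows the Taylor fallback for near-zero entries, where $(\mathrm{e}^{\Delta t\, a} - 1)/a \approx \Delta t$:

```python
import math

def discretize(a_diag, dt, eps=1e-6):
    # Zero-order-hold discretization of a diagonal SSM.
    # a_diag: diagonal entries of A (here -log(1), ..., -log(d)); dt: step size.
    a_bar, b_bar = [], []
    for a in a_diag:
        e = math.exp(dt * a)
        a_bar.append(e)
        if abs(a) < eps:
            # Taylor expansion (e^{dt*a} - 1)/a ~= dt avoids 0/0.
            b_bar.append(dt)
        else:
            b_bar.append((e - 1.0) / a)
    return a_bar, b_bar
```

The first channel of $\text{A} = -\text{diag}(\log 1, \dots, \log d)$ is exactly zero, so it always takes the Taylor branch.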
To inject the exogenous contextual conditions, the input matrix $\bar{\text{B}}$ is modulated by selective weights and the Euclidean context features:
\begin{equation}
\bar{\text{B}} \leftarrow \bar{\text{B}} \odot \text{B}_{\text{proj}}(\text{u}_c) \odot \sigma(\text{C}_{\text{proj}}(\text{u}_c)),
\end{equation}
where $\text{B}_{\text{proj}}, \text{C}_{\text{proj}}: \mathbb{R}^{d_{\text{geo}} + d_{\text{time}}} \to \mathbb{R}^d$. This element-wise multiplication allows each state dimension to independently determine its input strength based on the spatio-temporal context, with the selective weights adaptively adjusting the focus on input channels based on current conditions.
We then perform the state update additively in the tangent space. This approach offers two key advantages. First, it allows us to circumvent the complexity and instability of the Möbius multiplication required for updates on the manifold, as seen in previous hyperbolic Mamba research. Second, the continuous nature of trajectory data is highly compatible with the incremental updates in the Euclidean tangent space, making the state evolution more aligned with the dynamic regularities of trajectories.
We project the current time step's input $\text{q}_t$ (in its Lorentz representation) into the tangent space, update the state, and then map it back to the manifold via the exponential map, adding a learnable bias anchor:
\begin{equation}
\text{h}_t = \bar{\text{A}}_t \odot \text{h}_{t-1} + \bar{\text{B}}_t \odot \log_o(\text{q}_t),
\end{equation}
\begin{equation}
\text{H}_t = \exp_o(\text{h}_t) \oplus_c \text{bias},
\end{equation}
where $\text{bias}$ is a learnable offset and $ \text{h}_t \in \mathbb{R}^{d}$. The update is performed iteratively through the sequence, with the initial state in the tangent space being $\text{h}_0 = \text{0}$.
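The tangent-space recurrence itself is a plain element-wise linear scan; the sketch below omits the exponential map back to the manifold and the bias anchor, showing only the state update over the sequence:

```python
def ssm_scan(a_bar, b_bar, inputs):
    """Tangent-space recurrence h_t = A_bar_t * h_{t-1} + B_bar_t * u_t
    (element-wise per channel).

    a_bar, b_bar: per-step, per-channel coefficients, shape [L][d];
    inputs: tangent-space inputs u_t = log0(q_t), shape [L][d].
    """
    d = len(inputs[0])
    h = [0.0] * d  # initial state h_0 = 0
    states = []
    for A, B, u in zip(a_bar, b_bar, inputs):
        h = [A[i] * h[i] + B[i] * u[i] for i in range(d)]
        states.append(list(h))
    return states
```

Because each channel decays independently through $\bar{\text{A}}$, the scan remains a linear-time recurrence regardless of sequence length.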
After the SSM output $\text{H}_t$ is generated, the final trajectory embedding is obtained through a Lorentz linear projection \cite{chen2021fully} and a residual connection \cite{he2016deep}:
\begin{equation}
\text{E}_{\mathrm{traj}}^{(t)}= \text{H}_{t-1} \oplus_c \text{LorentzLinear}(\text{H}_t),
\end{equation}
where $\text{E}_{\mathrm{traj}}^{(t)} \in \mathbb{H}^{d}$. By stacking the per-step embeddings over $t=1,\ldots,L$, we obtain $\text{E}_{\mathrm{traj}}=[\,\text{E}_{\mathrm{traj}}^{(1)},\ldots,\text{E}_{\mathrm{traj}}^{(L)}\,]$, where $\text{E}_{\mathrm{traj}}\in\mathbb{H}^{L\times d}$.
Figure \ref{mamba} details how our SSM performs spatial transformations and state updates between the hyperbolic manifold and the tangent space. It is crucial to highlight that the process of re-mapping the hidden state ($\text{h}_t$) back to the manifold ($\text{H}_t$) at each time step also functions as a stabilizing projection. If, conversely, the state updates were performed entirely within the tangent space before a single, final projection back to the manifold, the final result would suffer from significant numerical deviation and distortion.
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{mamba.pdf}
\caption{Detailed architecture of the GTR-SSM}
\label{mamba}
\end{figure}
\subsection{Prediction and Loss}
Upon completing the sequential information update, we proceed to predict the next location the user is likely to visit. Our method also forecasts potential transitions between categories and regions to aid in the next location prediction. In other words, we integrate the results from these multiple tasks to formulate the final recommendation. Here, we use POI prediction as an illustrative example; the prediction process for the other tasks is analogous. The total prediction loss is the sum of these individual losses:
\begin{equation}
\mathcal{L}_{\text{all}} = \mathcal{L}_{\text{poi}} + \mathcal{L}_{\text{cat}} + \mathcal{L}_{\text{reg}}
\end{equation}
Our prediction component performs scoring from both the hyperbolic and tangent spaces.
To capture the geometric relationships between embeddings, we compute the squared Lorentz distance, $d_L^2(\cdot, \cdot)$, on the manifold between the GTR-Mamba output trajectory embedding, $\text{E}_{traj}$, and all candidate entities (POIs, categories, or regions). The similarity score is defined as:
\begin{equation}
\text{s}_{\text{hyperbolic}} = -\frac{\sqrt{d_L^2(\text{E}_{traj}, \text{p})}}{\tau},
\end{equation}
where $\tau$ is a learnable temperature parameter. For POI prediction, $\text{p}$ represents the embeddings of all POIs $\text{E}_{\mathcal{P}} \in \mathbb{R}^{|\mathcal{P}| \times d}$. This distance-based score leverages the geometric properties of the Lorentz manifold by directly comparing embeddings in the hyperbolic space, which is well-suited for hierarchical data.
Concurrently, the tangent vector of the trajectory is decoded through a linear layer to produce logit scores for the candidate entities:
\begin{equation}
\text{s}_{\text{tangent}} = \text{Linear}(\log_o(\text{E}_{traj})).
\end{equation}
This provides a direct classification prediction that captures the linear patterns within the tangent space.
To balance the geometric scores and the linear predictions, we introduce a learnable mixing parameter $\alpha$. The final prediction score is a weighted combination:
\begin{equation}
\text{s} = \alpha \cdot \text{s}_{\text{tangent}} + (1 - \alpha) \cdot \text{s}_{\text{hyperbolic}}.
\end{equation}
This formulation combines the hierarchical expressive power of hyperbolic distance with the flexibility of a linear decoder, adaptively adjusting the weights of the two components via the parameter $\alpha$.
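The mixing of the two scoring branches can be sketched as follows, taking the precomputed squared Lorentz distances and tangent-space logits as given (the lists below are illustrative):

```python
import math

def mixed_scores(d2_lorentz, tangent_logits, alpha, tau):
    # s = alpha * s_tangent + (1 - alpha) * s_hyperbolic,
    # with s_hyperbolic = -sqrt(d_L^2) / tau.
    hyp = [-math.sqrt(d2) / tau for d2 in d2_lorentz]
    return [alpha * t + (1.0 - alpha) * h
            for t, h in zip(tangent_logits, hyp)]
```

Candidates closer to the trajectory embedding on the manifold receive higher hyperbolic scores, and $\alpha$ trades this off against the linear decoder.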
For the POI, category, and region prediction tasks, we employ the cross-entropy loss, yielding the respective losses $\mathcal{L}_{\text{poi}}$, $\mathcal{L}_{\text{cat}}$, and $\mathcal{L}_{\text{reg}}$. Specifically, for the POI prediction task, the loss is defined as:
\begin{equation}
\mathcal{L}_{\text{poi}} = -\frac{1}{N} \sum_{i=1}^N \sum_{k=1}^{K_{\text{poi}}} y_{i,k}^{\text{poi}} \log(\hat{y}_{i,k}^{\text{poi}}),
\end{equation}
where $N$ is the number of samples, $K_{\text{poi}}$ is the number of POI classes, $y_{i,k}^{\text{poi}} \in \{0, 1\}$ is the ground truth label indicating whether the $i$-th sample belongs to the $k$-th POI class, and $\hat{y}_{i,k}^{\text{poi}}$ is the predicted probability for the $k$-th POI class for the $i$-th sample. Similarly, for the category and region prediction tasks, the losses $\mathcal{L}_{\text{cat}}$ and $\mathcal{L}_{\text{reg}}$ are defined in the same way.
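Since the ground truth is one-hot, the inner sum keeps a single term per sample, and the loss reduces to the mean negative log-probability of the true class, as in this sketch:

```python
import math

def ce_loss(probs, labels):
    # L = -(1/N) * sum_i log p_i[y_i], one-hot ground truth collapsed
    # to the index y_i of the true class.
    return -sum(math.log(p[y]) for p, y in zip(probs, labels)) / len(probs)
```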
\section{EXPERIMENT AND RESULT ANALYSIS}
\label{sec:e}
%In this section, we present the empirical results to facilitate a fair quantitative comparison with other models. We provide a summary table of the datasets, a performance comparison based on top-k NDCG and MRR evaluation metrics, the results of our ablation study, a sensitivity analysis of the model's parameters, and finally, an analysis of our model's utilization of the hyperbolic structure and its performance in specific task scenarios.
\begin{table}[h]
\centering
\caption{Data statistics for different datasets}
\label{tab:data_summary}
\renewcommand{\arraystretch}{1.5} % Increase row spacing by scaling factor (1.5x)
\begin{tabular}{lcccccc}
\toprule
& User & POI & Category & Trajectory & Check-in & Density \\
\midrule
NYC & 1,047 & 4,980 & 318 & 13,955 & 101,760 & 0.016 \\
TKY & 2,281 & 7,832 & 290 & 65,914 & 403,148 & 0.018 \\
CA & 3,956 & 9,689 & 296 & 42,982 & 221,717 & 0.005 \\
\bottomrule
\end{tabular}%
\end{table}
\subsection{Dataset}
We evaluate our proposed model on three datasets collected from two real-world check-in platforms: Foursquare \cite{yang2014modeling} and Gowalla \cite{yuan2013time}. The Foursquare dataset includes two subsets, collected from New York City (NYC) in the USA and Tokyo (TKY) in Japan. The Gowalla dataset includes one subset collected from California and Nevada (CA). Their detailed statistics are in Table \ref{tab:data_summary}. The density is calculated as the total number of visits divided by (number of users $\times$ number of POIs), reflecting the sparsity of user--POI interactions.
Following prior work \cite{sun2020go}, we remove POIs with fewer than five check-ins, segment each user’s trajectory into sequences of length 3–101, and split the data into training, validation, and test sets in an 8:1:1 ratio.
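The preprocessing pipeline can be sketched as below; note that the length-capped chunking used here is only a stand-in for the segmentation criterion of the cited prior work, and the positional 8:1:1 split is likewise illustrative:

```python
from collections import Counter

def preprocess(user_seqs, min_checkins=5, min_len=3, max_len=101):
    """Filter rare POIs, then cut each user's sequence into trajectories.

    user_seqs: one time-ordered list of POI ids per user.
    """
    counts = Counter(p for seq in user_seqs for p in seq)
    trajs = []
    for seq in user_seqs:
        kept = [p for p in seq if counts[p] >= min_checkins]
        for i in range(0, len(kept), max_len):
            chunk = kept[i:i + max_len]
            if len(chunk) >= min_len:  # keep lengths in [min_len, max_len]
                trajs.append(chunk)
    return trajs

def split_811(trajs):
    # 8:1:1 train/validation/test split by position.
    n = len(trajs)
    a, b = int(0.8 * n), int(0.9 * n)
    return trajs[:a], trajs[a:b], trajs[b:]
```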
\subsection{Experiment Setting}
In our experiments, the curvature parameter $c$ of the hyperbolic space was set to 1. We configured the model with the following default hyperparameters: a batch size of 128, a learning rate of 0.001, and a training duration of 50 epochs. The weights $\alpha_u, \alpha_p, \alpha_c$, and $\alpha_r$ were set to 0.5, 0.3, 0.1, and 0.1, respectively. The entire geographical area was partitioned into 40 regions. The embedding dimension $d$ was set to 64, with the geographical and temporal encoding dimensions set to 16 and 24, respectively. For the pre-training phase, we used 5 negative samples per positive instance. The multi-head attention mechanism was configured with 4 heads, and the Mamba architecture consisted of 2 layers.
Due to the varying geographical distributions of the datasets, the number of anchor points for the geographical encoding was adjusted accordingly: 50 for both the NYC and TKY datasets, and 400 for the CA dataset.
We evaluated the recommendation performance using Top-k Normalized Discounted Cumulative Gain (NDCG@k) and Mean Reciprocal Rank (MRR). The value of k was set to 1, 5, and 10. Our method was implemented using PyTorch and executed on an NVIDIA GeForce RTX 4090 GPU.
\subsection{Baseline Model}
We compare our model with the following baselines:
\begin{itemize}
\item LSTM \cite{hochreiter1997long}: LSTM is a classic neural network method that models sequential data by capturing long-term dependencies through memory cells.
\item PLSPL \cite{wu2020personalized}: PLSPL is a method that learns personalized long- and short-term preferences, weighted by a user-specific unit, and uses attention to integrate contextual features like category and check-in time.
\item HME \cite{feng2020hme}: HME is a state-of-the-art method that projects check-in data into hyperbolic space to capture hierarchical structures, sequential transitions, and user preferences for next-POI recommendation.
\item GETNext \cite{yang2022getnext}: GETNext is a state-of-the-art method that enhances transformer models with a user-agnostic global trajectory flow map to incorporate collaborative signals and address cold-start issues in next POI recommendation.
\item AGRAN \cite{wang2023adaptive}: AGRAN is a state-of-the-art method that uses adaptive graph structure learning to capture geographical dependencies and integrates them with spatio-temporal self-attention for next POI recommendation.
\item MCLP \cite{sun2024going}: MCLP is a state-of-the-art method that predicts next locations by modeling user preferences via a topic model from historical trajectories and estimating arrival times with multi-head attention.
\item $\text{GeoMamba}_{2024}$ \cite{chen2024geomamba}: GeoMamba is a state-of-the-art method that leverages Mamba's linear complexity with a hierarchical geography encoder for efficient, geography-aware sequential POI recommendation.
\item $\text{GeoMamba}_{2025}$ \cite{qin2025geomamba}: GeoMamba is a state-of-the-art method that extends SSMs with a GaPPO operator to model multi-granular spatio-temporal state transitions for enhanced POI recommendation.
\item HMamba \cite{zhang2025hmamba}: HMamba is a state-of-the-art method that combines Mamba's linear-time efficiency with hyperbolic geometry. The full version utilizes complete curvature-aware state spaces and stabilized Riemannian operations, while the half version employs a simplified implementation to reduce computational overhead.
\item HMST \cite{qiao2025hyperbolic}: HMST is a state-of-the-art method that uses hyperbolic rotations to jointly model hierarchical structures and multi-semantic transitions (e.g., location, category, region) for next POI recommendation.
\item HVGAE \cite{liu2025hyperbolic}: HVGAE is a state-of-the-art method that employs a Hyperbolic GCN and Variational Graph Auto-Encoder with Rotary Position Mamba to model hierarchical POI relationships and sequential information for next POI recommendation.
\end{itemize}
The models LSTM, PLSPL, AGRAN, GETNext, MCLP, $\text{GeoMamba}_{2024}$, and $\text{GeoMamba}_{2025}$ are Euclidean-based methods, whereas HME, HVGAE, HMST, and HMamba are hyperbolic-based methods. To distinguish between the two identically named GeoMamba models, we append their year of publication. The two variants of HMamba are differentiated by the subscripts ``full'' and ``half'', respectively. Furthermore, as HMamba was originally designed for sequential recommendation, we adapted it to our POI recommendation task for a fairer comparison by incorporating encodings for temporal granularity and geographical coordinates to integrate spatio-temporal information.
\begin{table*}[t]
\centering
\setlength{\tabcolsep}{4pt}
\renewcommand{\arraystretch}{1.25}
\small
\begin{tabular}{l *{12}{c}}
\toprule
\multirow{2.5}{*}{Method} &
\multicolumn{4}{c}{NYC} &
\multicolumn{4}{c}{TKY} &
\multicolumn{4}{c}{CA} \\
\cmidrule(lr){2-5}\cmidrule(lr){6-9}\cmidrule(lr){10-13}
& ND@1 & ND@5 & ND@10 & MRR
& ND@1 & ND@5 & ND@10 & MRR
& ND@1 & ND@5 & ND@10 & MRR \\
\midrule
LSTM & 0.1306 & 0.2336 & 0.2585 & 0.2259
& 0.1110 & 0.2233 & 0.2496 & 0.1952
& 0.0864 & 0.1459 & 0.1711 & 0.1554 \\
PLSPL & 0.1601 & 0.3048 & 0.3336 & 0.2849
& 0.1495 & 0.2831 & 0.3143 & 0.2642
& 0.1084 & 0.1759 & 0.2029 & 0.1678 \\
HME & 0.1619 & 0.2806 & 0.3226 & 0.2787
& 0.1535 & 0.2637 & 0.2924 & 0.2366
& 0.1181 & 0.1886 & 0.2232 & 0.1945 \\
GETNext & 0.2244 & 0.3736 & 0.4046 & 0.3472
& 0.1767 & 0.3072 & 0.3297 & 0.2934
& 0.1342 & 0.2188 & 0.2468 & 0.2121 \\
AGRAN & 0.2202 & 0.3638 & 0.3792 & 0.3343
& 0.1755 & 0.2989 & 0.3261 & 0.2879
& 0.1329 & 0.2121 & 0.2331 & 0.2043 \\
MCLP & \underline{0.2404} & 0.3674 & 0.3973 & 0.3507
& 0.1662 & 0.3110 & 0.3415 & 0.3199
& 0.1324 & 0.1914 & 0.2121 & 0.1895 \\
$\text{GeoMamba}_{2024}$ & 0.1988 & 0.3392 & 0.3506 & 0.3246
& 0.1851 & 0.2953 & 0.3205 & 0.2858
& 0.1256 & 0.2029 & 0.2215 & 0.1962 \\
$\text{GeoMamba}_{2025}$ & 0.2377 & \underline{0.3786} & 0.4012 & \underline{0.3566}
& \underline{0.2157} & \underline{0.3402} & 0.3686 & 0.3209
& 0.1388 & 0.2485 & 0.2754 & 0.2373 \\
$\text{HMamba}_\text{full}$ & 0.2204 & 0.3679 & 0.4031 & 0.3465
& 0.1828 & 0.3341 & 0.3673 & 0.3127
& 0.1366 & \underline{0.2501} & \underline{0.2792} & \underline{0.2421} \\
$\text{HMamba}_\text{half}$ & 0.1896 & 0.3453 & 0.3767 & 0.3222
& 0.1945 & 0.3295 & 0.3603 & 0.3118
& \underline{0.1423} & 0.2381 & 0.2648 & 0.2317 \\
HMST & 0.2138 & 0.3747 & \underline{0.4063} & 0.3482
& 0.1925 & 0.3325 & \underline{0.3690} & \underline{0.3257}
& 0.1356 & 0.2325 & 0.2680 & 0.2300 \\
HVGAE & 0.2271 & 0.3651 & 0.3982 & 0.3470
& 0.1977 & 0.3167 & 0.3455 & 0.3180
& 0.1391 & 0.2325 & 0.2658 & 0.2367 \\
\midrule
GTR-Mamba & \textbf{0.2569} & \textbf{0.3982} & \textbf{0.4287} & \textbf{0.3766}
& \textbf{0.2494} & \textbf{0.3765} & \textbf{0.4058} & \textbf{0.3599}
& \textbf{0.1594} & \textbf{0.2576} & \textbf{0.2868} & \textbf{0.2495} \\
improv. & +6.86\% & +5.18\% & +5.51\% & +5.61\%
& +15.62\% & +10.67\% & +9.97\% & +10.50\%
& +12.02\% & +3.00\% & +2.72\% & +3.06\% \\
\bottomrule
\end{tabular}
\caption{Performance metrics for different models}
\label{tab:performance_metrics}
\end{table*}
\subsection{Performance Comparison With Baselines}
We first compare our model with 12 baseline models to evaluate its capability in next POI recommendation. The overall results are presented in Table \ref{tab:performance_metrics}. As shown, our model consistently outperforms all baseline models on the Foursquare datasets for New York City (NYC) and Tokyo (TKY), as well as on the Gowalla dataset for California and Nevada (CA). The performance improvement ranges from 2.72\% to 15.62\% in Normalized Discounted Cumulative Gain (NDCG) and from 3.06\% to 10.50\% in Mean Reciprocal Rank (MRR), highlighting its robustness in capturing spatio-temporal patterns.
Among the baseline methods, Transformer-based approaches like GETNext and MCLP outperform traditional models such as LSTM and PLSPL. This is attributed to their multi-head attention mechanism, which effectively captures sequential patterns and multi-context information, thereby enhancing the modeling of complex sequential dependencies. AGRAN's adaptive graph structure proves more effective at capturing geographical dependencies than traditional GCNs, also achieving strong results. The Mamba-based method, GeoMamba, maintains a degree of accuracy while ensuring computational efficiency, benefiting from its geographical encoding module and the sequential updating capability of the Mamba model. However, the Euclidean space in which these models operate constrains their potential for higher accuracy. In contrast, our model, operating in hyperbolic space, can more comprehensively capture underlying hierarchical patterns and leverages the Mamba model for efficient and stable updates.
Secondly, our method also demonstrates a significant advantage over other hyperbolic space-based approaches. The foundational HME model focuses solely on modeling node embeddings in hyperbolic space, with limited capacity for learning relational and sequential regularities. HVGAE combines hyperbolic graph convolutional networks and variational graph autoencoders to model deep POI relationships, achieving some improvement over traditional methods. HMST shows advantages in describing multi-semantic transitions and capturing dynamic relationships, but both methods lack a powerful mechanism for updating sequential information. Furthermore, the performance of HMamba is quite prominent, benefiting from the combination of Mamba's update pathway with hyperbolic space. However, it performs state updates directly on the manifold, where Möbius operations introduce significant computational overhead. It also lacks an explicit mechanism for handling varying contextual changes. In contrast, our model's geometry-to-tangent-space pathway enables more stable and efficient computation. Additionally, our novel exogenously-driven SSM provides higher accuracy in more complex recommendation scenarios. Overall, our model outperforms all baselines across the three datasets, underscoring the effectiveness of our hyperbolic Mamba approach for next POI recommendation.
Although our model demonstrates robust performance across all datasets, it achieves only modest improvements on the CA dataset. Upon analyzing the dataset's basic characteristics, we attribute this to the large geographical span of the CA dataset, which results in a sparser and flatter POI hierarchical structure. This diminishes the advantages of hyperbolic geometry. Additionally, our geographical encoding component struggles to effectively extract spatial representations, and the relatively short average check-in sequence length in CA limits the model's ability to fully capture sequential patterns. Despite this, our model still exhibits strong competitiveness in sparser scenarios. This finding is consistent with the theoretical strengths of hyperbolic models: our approach provides the most significant gains where a clear hierarchy exists (NYC, TKY), while performing robustly and competitively where such structure is absent (CA).
\subsection{Ablation Study}
\begin{table}[h]
\centering
\caption{Performance comparison of model variants}
\label{tab:ablation_results}
\resizebox{0.48\textwidth}{!}{%
\renewcommand{\arraystretch}{1.2} % Increase row spacing by scaling factor (1.2x)
{\large % Increase font size
\begin{tabular}{lccccccc}
\toprule
\multirow{2}{*}{Model} & \multicolumn{3}{c}{NYC} & \multicolumn{3}{c}{CA} \\
\cmidrule(lr){2-4} \cmidrule(lr){5-7}
& ND@1 & ND@5 & MRR & ND@1 & ND@5 & MRR \\
\midrule
w/o SSM & 0.1806 & 0.2808 & 0.2657 & 0.1066 & 0.1856 & 0.1768 \\
w/o HE & 0.2105 & 0.3059 & 0.2910 & 0.1374 & 0.2308 & 0.2223 \\
w/o HB & 0.2329 & 0.3745 & 0.3521 & 0.1448 & 0.2405 & 0.2344 \\
w/o STC & 0.2419 & 0.3802 & 0.3594 & 0.1507 & 0.2476 & 0.2407 \\
w/o ATT & 0.2363 & 0.3784 & 0.3563 & 0.1523 & 0.2529 & 0.2429 \\
w/o $\text{U}_\text{c}$ & 0.2394 & 0.3811 & 0.3610 & 0.1508 & 0.2434 & 0.2353 \\
\midrule
Full model & \textbf{0.2569} & \textbf{0.3982} & \textbf{0.3766} & \textbf{0.1594} & \textbf{0.2576} & \textbf{0.2495} \\
\bottomrule
\end{tabular}%
} % End of \large
}
\end{table}
We conducted a comprehensive ablation study to validate the effectiveness of the individual components within our proposed model. Specifically, we designed the following ablation settings:
\begin{itemize}
\item w/o SSM (without State Space Model): This variant removes the geometry-to-tangent-space routing Mamba layer.
\item w/o HE (without Hyperbolic Embedding): This version removes the pre-trained initial hyperbolic embeddings that are learned with rotation.
\item w/o HB (without Hyperbolic space): This variant implements the GTR-Mamba framework entirely in Euclidean space.
\item w/o STC (without Spatio-Temporal Channel): This variant removes the spatio-temporal encoding channel.
\item w/o ATT (without Attention Aggregation): In this setting, we remove the cross-manifold attention fusion mechanism.
\item w/o $\text{U}_\text{c}$ (without Euclidean context): This variant removes the Euclidean contextual information used to exogenously drive the SSM.
\end{itemize}
The ablation study results are reported in Table \ref{tab:ablation_results}, presenting the ND@1, ND@5, and MRR metrics. Using the complete GTR-Mamba model as the baseline, we derive the following observations. The most significant performance degradation across all metrics is observed when the entire Mamba sequence modeling module is removed (w/o SSM), underscoring the critical role of dynamic relational modeling in user trajectories for personalized POI recommendation. The second most impactful variant is the one without initialized embeddings (w/o HE), highlighting the importance of rotation-based modeling for capturing complex semantic relationships. The variant excluding hyperbolic space (w/o HB) also exhibits a notable decline in all three metrics, as Euclidean space fails to effectively capture latent hierarchical structures, demonstrating the advantage of hyperbolic geometry. The variant without Euclidean spatiotemporal information (w/o STC) indicates that Euclidean encodings provide complementary support to the hyperbolic model. Removing SSM-driven information (w/o $\text{U}_\text{c}$) leads to a decline in all metrics, suggesting that our context-driven SSM effectively adapts to varying contexts to enhance recommendation performance. Similarly, the results without cross-manifold attention (w/o ATT) indicate that our attention module is essential for integrating cross-manifold information.
\begin{figure}[htbp]
\centering
\begin{subfigure}[t]{0.24\textwidth}
\centering
\includegraphics[width=\linewidth]{1.png}
\caption{Embedding dimension}
\label{d}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.24\textwidth}
\centering
\includegraphics[width=\linewidth]{2.png}
\caption{SSM layers}
\label{layer}
\end{subfigure}
\caption{Performance w.r.t. different embedding dimensions
and SSM layers}
\label{sensi}
\end{figure}
\subsection{Sensitivity Analysis}
We conducted experiments to investigate the impact of key parameters on the performance of GTR-Mamba, focusing primarily on the effects of embedding dimension and the number of SSM layers in the context of next POI recommendation. The results are presented in Figure \ref{sensi}. Notably, in our model, the tangent space coordinates of the embeddings in the hyperbolic manifold serve as the state vectors for the Selective State Space Model (SSM). Consequently, the state dimension of the SSM must align with the embedding dimension of the nodes, and thus, we uniformly explore the effect of embedding dimension in these experiments.
\subsubsection{Embedding Dimension}
The embedding dimension significantly influences the model’s performance. To determine the optimal embedding dimension for our model, we evaluated several candidate values (i.e., 32, 64, 96, and 128) and conducted comparative experiments on the NYC and CA datasets. Specifically, we used ND@1, ND@10, and MRR as evaluation metrics, with the results summarized in Figure \ref{d}. As illustrated, the model achieves optimal performance when the embedding dimension is set to 64, with performance beginning to decline for higher dimensions. Therefore, we adopt an embedding dimension of 64 across all datasets used in this study.
\subsubsection{Number of SSM Layers}
The depth of the Mamba module affects both model performance and computational overhead. We investigated the model’s performance with 1, 2, 3, and 4 layers, with results summarized in Figure \ref{layer}. As the number of layers increases, the model’s expressive capacity improves, enabling it to capture higher-order spatiotemporal interactions. However, performance plateaus beyond 2 layers, while training time and memory consumption increase significantly. Consequently, we select 2 SSM layers as the optimal configuration.
\begin{figure}[htbp]
\centering
\begin{subfigure}[t]{0.24\textwidth}
\centering
\includegraphics[width=\linewidth]{3.png}
\caption{ACC@5 result}
\label{acc5}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.24\textwidth}
\centering
\includegraphics[width=\linewidth]{4.png}
\caption{ACC@10 result}
\label{acc10}
\end{subfigure}
\caption{Performance metrics for different models in scene switching exploration}
\label{scene}
\end{figure}
\subsection{Scene Switching Exploration}
To investigate the adaptability of our proposed GTR-Mamba model to different contexts, we designed a context-switching experiment on the TKY dataset. Context switching refers to the transition of user behavior from one pattern to another, such as shifting from weekday commuting (work routes, office POIs) to weekend leisure activities (shopping, entertainment POIs).
Specifically, we calculated a switching score for each transition based on multiple factors, including time period changes, time intervals, and POI categories. The transition frequency of an entire trajectory was computed as the average of the switching scores across all transition points within the trajectory. Subsequently, the TKY test set was divided into three subsets based on the calculated trajectory transition frequencies: a low-switching subset (frequency \textless 0.15), a medium-switching subset (frequency between 0.15 and 0.4), and a high-switching subset (frequency \textgreater 0.4). The resulting average trajectory transition frequencies were 0.09 for the low-switching subset, 0.26 for the medium-switching subset, and 0.50 for the high-switching subset. To mitigate length bias, we balanced the size of each subset by prioritizing trajectories with lengths close to the median.
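The subset construction above can be sketched as follows. Only the frequency thresholds (0.15 and 0.4) are fixed by our protocol; the scoring weights, feature encodings, and helper names in this sketch are illustrative assumptions.

```python
import statistics

# Hypothetical switching score combining time-period changes, time
# intervals, and POI-category changes; the weights here are assumed.
def switching_score(prev, curr, w_period=0.4, w_gap=0.3, w_cat=0.3):
    period_change = 1.0 if prev["period"] != curr["period"] else 0.0
    gap = min((curr["t"] - prev["t"]) / 86400.0, 1.0)  # gap in days, capped at 1
    cat_change = 1.0 if prev["category"] != curr["category"] else 0.0
    return w_period * period_change + w_gap * gap + w_cat * cat_change

def trajectory_frequency(traj):
    # Trajectory transition frequency = mean switching score over all
    # consecutive transition points in the trajectory.
    scores = [switching_score(a, b) for a, b in zip(traj, traj[1:])]
    return statistics.mean(scores) if scores else 0.0

def assign_subset(freq, low=0.15, high=0.4):
    # Thresholds from the protocol: <0.15 low, 0.15-0.4 medium, >0.4 high.
    if freq < low:
        return "low"
    return "medium" if freq <= high else "high"
```

The reported subset averages fall into the three bands as expected: 0.09 maps to the low-switching subset, 0.26 to the medium one, and 0.50 to the high one.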
We then evaluated our model and baseline models on these subsets for Next POI prediction, using ACC@5 and ACC@10 as performance metrics, with results presented in Figure \ref{scene}. The results indicate that the accuracy on the high-switching subset was significantly lower than on the other two subsets, confirming that the high-switching subset poses greater inference challenges. However, our model exhibited a smaller accuracy decline compared to the baselines. Moreover, our model achieved the best performance on the high-switching subset, i.e., the complex context test set, demonstrating its superior adaptability.
\begin{figure}[htbp]
\centering
\begin{subfigure}[t]{0.24\textwidth}
\centering
\includegraphics[width=\linewidth]{rate.png}
\caption{Comparison of robustness}
\label{rate}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.24\textwidth}
\centering
\includegraphics[width=\linewidth]{dist.png}
\caption{Distribution of step sizes}
\label{dist}
\end{subfigure}
\caption{More details of scene switching exploration}
\label{all}
\end{figure}
Additionally, we quantified the impact of context switching in Figure \ref{rate} (HM denotes HMamba and GTR denotes GTR-Mamba). Within each subset, we identified points with high switching scores as transition points. For each transition point, we calculated the difference in the rank of the true POI within the model’s predicted logits before and after the transition, and averaged these differences to obtain a change rate, which measures switching robustness. Our GTR-Mamba model consistently exhibited the lowest change rate, indicating that the proposed adaptive exogenous-driven Mamba architecture possesses superior switching robustness and adaptability, maintaining greater stability across transition points.
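A minimal sketch of the change-rate computation, assuming each transition point stores the model's predicted logits and the ground-truth POI before and after the switch (the function names are illustrative):

```python
import numpy as np

def rank_of_true(logits, true_poi):
    # Rank (1 = best) of the ground-truth POI in the predicted score vector.
    order = np.argsort(-logits)
    return int(np.where(order == true_poi)[0][0]) + 1

def change_rate(transitions):
    # transitions: (logits_before, true_before, logits_after, true_after).
    # Change rate = average absolute shift in the true POI's rank across
    # all transition points; lower means more switching robustness.
    diffs = [abs(rank_of_true(lb, tb) - rank_of_true(la, ta))
             for lb, tb, la, ta in transitions]
    return float(np.mean(diffs))
```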
Finally, we quantified the distribution of step sizes across different subsets in Figure \ref{dist}. The high-switching subset exhibited significantly smaller step sizes compared to the low-switching subset, reflecting the model’s adoption of finer-grained state updates to accommodate frequent context switches. In contrast, the low-switching subset had larger step sizes (dt), indicating that more stable trajectories allow for smoother state propagation. These findings validate the effectiveness of GTR-Mamba’s adaptive dt mechanism in handling complex contexts.
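The role of the step size can be illustrated with a single-channel zero-order-hold discretization of a continuous SSM, the standard discretization used in Mamba-style models; the scalar decay a = -1 and the dt values below are illustrative, not our trained parameters.

```python
import math

def ssm_step(h, x, dt, a=-1.0, b=1.0):
    # Zero-order-hold discretization of dh/dt = a*h + b*x for one channel:
    # A_bar = exp(dt*a), B_bar = (A_bar - 1)/a * b.
    a_bar = math.exp(dt * a)          # how much of the old state survives
    b_bar = (a_bar - 1.0) / a * b     # how strongly the new input is written
    return a_bar * h + b_bar * x

# Small dt: fine-grained update, history mostly kept.
h_small = ssm_step(h=1.0, x=0.0, dt=0.1)   # ~0.905
# Large dt: coarse update, history largely overwritten by the input.
h_large = ssm_step(h=1.0, x=0.0, dt=3.0)   # ~0.050
```

This matches the observed behavior: high-switching trajectories favor small dt (frequent, fine state corrections), while stable trajectories tolerate large dt (smooth propagation).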
\subsection{Efficiency}
Our proposed GTR-Mamba model comprises three components, with their respective time complexities outlined as follows:
\begin{table}[h]
\centering
\begin{tabular}{l l}
\toprule
\textbf{Component} & \textbf{Time Complexity} \\
\midrule
Initialization Embedding & $O(|V| \times d + |\mathcal{E}| \times (K+1) \times d)$ \\
Spatiotemporal Channel & $O(B L r + B L d)$ \\
GTR-Mamba Layer & $O(n B L (d s + d^2))$ \\
\midrule
\textbf{Overall Complexity} & $O(B L (n d s + n d^2 + r + d))$ \\
\bottomrule
\end{tabular}
\caption{Time Complexity Analysis of GTR-Mamba Model Components}
\end{table}
Variable descriptions:
\begin{itemize}
\item \textbf{Initialization Embedding}: Involves computing embeddings for nodes and edges. Here, $|V|$ is the total number of nodes (sum of POIs, categories, regions, and users), $|\mathcal{E}|$ is the number of positive edges (e.g., user-POI, POI-POI), $K$ is the number of negative samples per edge, and $d$ is the embedding dimension. This component is pre-trained and excluded from the overall complexity.
\item \textbf{Spatiotemporal Channel}: Processes spatiotemporal data with batch size $B$, sequence length $L$, embedding dimension $d$, and $r$ RBF anchor points. The complexity arises from computations involving radial basis function (RBF) anchor points and embedding processing.
\item \textbf{GTR-Mamba Layer}: Involves $n$ Mamba layers, with batch size $B$, sequence length $L$, embedding dimension $d$, and state dimension $s$. The primary computational cost comes from the output projection in the Mamba layers.
\item \textbf{Overall Complexity}: Combines the complexities of the spatiotemporal channel and GTR-Mamba layers, excluding the initialization embedding as it is pre-trained.
\end{itemize}
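A back-of-envelope instantiation of the complexities above, under assumed values for the batch, sequence, and dimension sizes, shows that the Mamba-layer projections dominate:

```python
# Assumed sizes: B sequences per batch, L check-ins per sequence,
# n Mamba layers, embedding dimension d, state dimension s, r RBF anchors.
B, L, n, d, s, r = 64, 100, 2, 64, 64, 16

spatiotemporal = B * L * (r + d)             # O(BLr + BLd)
mamba_layers = n * B * L * (d * s + d * d)   # O(nBL(ds + d^2))
total = spatiotemporal + mamba_layers

# The factored form from the table gives the same count:
factored = B * L * (n * d * s + n * d * d + r + d)
# The n*d*s and n*d^2 terms dominate, and every term is linear in L,
# consistent with the linear-time Mamba backbone.
```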
Additionally, we compared the training time of our model against several baseline models across the three datasets. To ensure a fair comparison, we measured the total time required for each model to converge under identical experimental conditions, with convergence defined as reaching 95\% of the optimal MRR value. The results, shown in Figure \ref{time}, demonstrate that our model achieves the best balance between time efficiency and performance. The GeoMamba referenced here corresponds specifically to $\text{GeoMamba}_{2025}$, as distinguished from $\text{GeoMamba}_{2024}$ in the baseline comparisons.
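The convergence criterion can be sketched as follows, assuming a per-epoch log of cumulative wall-clock time and validation MRR (the function name is illustrative):

```python
def time_to_converge(history, fraction=0.95):
    # history: list of (cumulative_seconds, mrr) per epoch.
    # Convergence time = first epoch whose MRR reaches `fraction` of the
    # best MRR observed over the whole run.
    best = max(mrr for _, mrr in history)
    for seconds, mrr in history:
        if mrr >= fraction * best:
            return seconds
    return None
```

For a run whose MRR climbs to a best of 0.20, the first epoch at or above 0.19 determines the reported convergence time.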
Notably, although the Initialization Embedding (Section \ref{emb}) is an independent pre-training step, its computational overhead is minimal. In our experimental environment (NVIDIA GeForce RTX 4090), this pre-training step completes within seconds on all three datasets, so its time cost is negligible.
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{time.png}
\caption{Comparison of training time across models}
\label{time}
\end{figure}
\subsection{Visualization}
To intuitively demonstrate the superiority of hyperbolic space in capturing hierarchical structures, we performed a two-dimensional visualization of POI and category embeddings on the NYC dataset. Specifically, we utilized high-dimensional Lorentz embeddings extracted from a pre-trained GTR-Mamba model and mapped them onto a 2D Poincaré unit disk using a geometric projection, generating the static visualization shown in Figure \ref{case}.
First, we converted the high-dimensional embeddings into Poincaré coordinates using the coordinate transformation from the Lorentz model to the Poincaré disk. Subsequently, the Euclidean norm of the Poincaré coordinates was taken as the radius, and the polar angle was computed from the first two coordinate dimensions, yielding a polar representation. After converting the polar coordinates to 2D Cartesian coordinates, we plotted the static visualization, where POIs and categories are represented by blue circles and green squares, respectively, and the hierarchical relationships from POIs to their categories are depicted by gray connecting lines.
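The projection pipeline can be sketched with the standard map from the Lorentz model to the Poincaré ball, p = x[1:] / (1 + x0), for curvature -1; the sample tangent vector lifted onto the hyperboloid below is illustrative.

```python
import numpy as np

def lorentz_to_poincare(x):
    # Lorentz point x = (x0, x1, ..., xd) with x0 = sqrt(1 + ||x[1:]||^2)
    # (curvature -1); standard map to the Poincare ball: p = x[1:] / (1 + x0).
    return x[1:] / (1.0 + x[0])

def disk_coordinates(x):
    # Plotting coordinates: radius = Euclidean norm of p, angle from the
    # first two Poincare dimensions, then back to 2D Cartesian coordinates.
    p = lorentz_to_poincare(x)
    radius = np.linalg.norm(p)
    theta = np.arctan2(p[1], p[0])
    return radius * np.cos(theta), radius * np.sin(theta)

# Illustrative point: lift a tangent vector v onto the hyperboloid, project.
v = np.array([0.6, 0.8])
x = np.concatenate(([np.sqrt(1.0 + v @ v)], v))
px, py = disk_coordinates(x)   # lands strictly inside the unit disk
```

Because the Poincaré norm is always below 1, every projected point lies inside the unit disk, with deeper (more peripheral) nodes pushed toward the boundary.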
\begin{figure}
\centering
\includegraphics[width=0.7\linewidth]{case.png}
\caption{Visualization of 2D Poincaré disk embeddings for various entities}
\label{case}
\end{figure}
The visualization results clearly demonstrate the hierarchical expressive power of hyperbolic space: the more numerous, lower-level POIs are distributed in the outer regions of the Poincaré disk, while higher-level categories are clustered towards the central region. This distribution pattern stems from the exponential growth property of hyperbolic space, which allows the peripheral areas to accommodate more nodes, thereby effectively modeling large-scale hierarchical structures. This result is consistent with previous findings \cite{nickel2017poincare}, indicating a significant advantage of hyperbolic embeddings in capturing tree-like hierarchical relationships.
\section{CONCLUSION}
\label{sec:c}
In this paper, we propose GTR-Mamba, a novel framework for next POI recommendation, which addresses the limitations of existing models in capturing the hierarchical spatial structures and dynamic temporal contexts within user mobility data. Our framework first leverages hyperbolic geometry to model static, tree-like preference hierarchies. A cross-manifold spatio-temporal channel then fuses these geometric representations with Euclidean contextual features. This fused representation is processed by our core GTR-Mamba layer, which uniquely routes sequence computations to the computationally stable and tractable Euclidean tangent space for Mamba-based updates. This Geometry-to-Tangent Routing mechanism not only ensures numerical stability and preserves linear efficiency, but also enables the Euclidean context to directly drive the selective state space model (SSM), allowing it to flexibly handle irregular contextual variations. Extensive experiments on three real-world LBSN datasets demonstrate that GTR-Mamba achieves state-of-the-art (SOTA) performance and exhibits superior robustness in high-context-switch scenarios.
%\section{Guidelines for Artificial Intelligence (AI)-Generated Content}
%We used a large language model (ChatGPT, OpenAI) solely for English copyediting, including grammar correction, wording and minor stylistic re-writes, and occasional LaTeX formatting help. The model was not used for idea generation, literature search, data collection/annotation, coding, analysis, or producing results. All scientific claims and contributions were written and verified by the authors, and no non-public data were shared with the model. The authors assume full responsibility for the content of the paper.
\bibliography{IEEEabrv,sample}
\end{document}
|