diff --git "a/D9E2T4oBgHgl3EQfogh1/content/tmp_files/load_file.txt" "b/D9E2T4oBgHgl3EQfogh1/content/tmp_files/load_file.txt" new file mode 100644--- /dev/null +++ "b/D9E2T4oBgHgl3EQfogh1/content/tmp_files/load_file.txt" @@ -0,0 +1,1738 @@ +filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf,len=1737 +page_content='JOURNAL OF LATEX CLASS FILES, VOL.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content=' 14, NO.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content=' 10, JANUARY 2023 1 FGAHOI: Fine-Grained Anchors for Human-Object Interaction Detection Shuailei Ma, Yuefeng Wang, Shanze Wang, and Ying Wei Abstractโ€”Human-Object Interaction (HOI), as an important problem in computer vision, requires locating the human-object pair and identifying the interactive relationships between them.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content=' The HOI instance has a greater span in spatial, scale, and task than the individual object instance, making its detection more susceptible to noisy backgrounds.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content=' To alleviate the disturbance of noisy backgrounds on HOI detection, it is necessary to consider the input image information to generate ๏ฌne-grained anchors which are then leveraged to guide the detection of HOI instances.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content=' However, it is challenging for the following reasons.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content=' ๐‘–) how to extract pivotal features from the images with complex background information is still an open question.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content=' ๐‘–๐‘–) how to semantically align the extracted features and query embeddings is also a dif๏ฌcult issue.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content=' In this paper, a novel end-to-end transformer-based framework (FGAHOI) is proposed to alleviate the above problems.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content=' FGAHOI comprises three dedicated components namely, multi-scale sampling (MSS), hierarchical spatial-aware merging (HSAM) and task-aware merging mechanism (TAM).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content=' MSS extracts features of humans, objects and interaction areas from noisy backgrounds for HOI instances of various scales.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content=' HSAM and TAM semantically align and merge the extracted features and query embeddings in the hierarchical spatial and task perspectives in turn.' 
Meanwhile, a novel Stage-wise Training Strategy is designed to reduce the training pressure caused by the overly complex task performed by FGAHOI. In addition, we propose two ways to measure the difficulty of HOI detection and a novel dataset, i.e., HOI-SDC, for the two challenges of HOI instance detection (Uneven Distributed Area in Human-Object Pairs and Long Distance Visual Modeling of Human-Object Pairs). Experiments are conducted on three benchmarks: HICO-DET, HOI-SDC and V-COCO. Our model outperforms state-of-the-art HOI detection methods, and extensive ablations reveal the merits of our proposed contributions. The code is available at https://github.com/xiaomabufei/FGAHOI.

Index Terms—Human-Object Interaction, FGAHOI, Fine-Grained Anchors, Noisy Background, Semantic Alignment.

1 INTRODUCTION

Human-Object Interaction (HOI) detection, as a downstream task of object detection [1], [2], [3], [4], [5], has recently received increasing attention due to its great application potential. Successful HOI detection requires the ability to understand human activities, which are abstracted as a set of triplets in this task and demand a much deeper understanding of the semantic information of visual scenes. Without HOI detection, machines can only interpret images as collections of object bounding boxes, i.e., AI systems can only pick up information such as 'A man is on the bike' or 'A bike is in the corner', but not 'A man rides a bike'.
Spanning the past and the present, existing HOI detection approaches [6], [7], [8], [9], [10], [11], [12], [13], [14], [15], [16], [17], [18], [19], [20], [21] tend to fall into two categories, namely two-stage and one-stage methods. Conventional two-stage methods [7], [8], [10], [12], [13], [14], [18], [20], [22], [23], [24], [25], as an intuitive approach, detect human and object instances by leveraging an off-the-shelf object detector [1], [3], [4] and utilize the visual features extracted from the located areas to recognize action classes.

Shuailei Ma and Yuefeng Wang are with the College of Information Science and Engineering, Northeastern University, Shenyang, China, 110819. E-mail: {xiaomabufei, wangyuefeng0203}@gmail.com
Shanze Wang is with Changsha Hisense Intelligent System Research Institute Co., Ltd. and the Information Technology R&D Innovation Center of Peking University, Shaoxing, China. E-mail: szgg0099@gmail.com
Ying Wei is the corresponding author, with the College of Information Science and Engineering, Northeastern University, Shenyang, China, 110819. E-mail: weiying@ise.neu.edu.cn
Manuscript received October 26, 2022; revised January 10, 2023.

Fig. 1: FGAHOI leverages the query embeddings and multi-scale features to generate fine-grained anchors and the corresponding weights for HOI instances of diverse scales. Then, they guide the decoder to add key semantic information of HOI instances to the content embeddings and translate the content embeddings into HOI embeddings for predicting all elements of the HOI instances.
To fully leverage the visual features, several methods [7], [10], [14], [20], [22], [23], [24], [25] separately extract the visual features of human-object pairs and the spatial information of the located areas in a multi-stream architecture and fuse them with a post-fusion strategy. Meanwhile, several approaches [8], [10], [20], [23], [24] employ existing pose estimation methods, such as [26], [27], [28], to extract pose information and fuse it with other features to predict the action class. In addition, some works [8], [12], [13], [18], [29] leverage graph neural networks to extract the complex semantic relationships between humans and objects. However, the difficulties encountered by the two-stage approach lie mainly in the effective fusion of human-object pairs and complex semantic information. Besides, owing to the limitations of the fixed detector and other components (pose estimation, etc.), the two-stage method can only achieve a sub-optimal solution. To achieve high efficiency, one-stage approaches [6], [9], [11], [15], [17], [21], [30], [31], which utilize interaction points between the human-object pairs to simultaneously predict human and object offset vectors and action classes, are proposed to detect human-object pairs and recognize interactive relationships in parallel.
However, when the human and object in the image are far apart from each other, these methods are disturbed by ambiguous semantic features. One-stage methods did not attract much attention until the appearance of the Detection Transformer (DETR) [32], which QPIC [19] applies to HOI detection. Since then, plenty of transformer-based works [6], [9], [16], [17], [33] have attempted to solve HOI detection with different encoder-decoder structures and backbone models. In comparison to object instances, HOI instances have a greater span of space, scale and task. In most HOI instances there is a certain distance between the human and the object, and their scales vary enormously. Compared with simple object classification, interaction classification needs to consider more information between the human-object pair rather than only the features of the human and the object. Therefore, the detection is more susceptible to distraction from noisy backgrounds. However, most recent works [19], [33] use object detection frameworks [32], [34] directly for HOI detection by simply adding an interaction classification head, ignoring these problems. Inspired by [34], which leverages reference points to guide the decoding process, we propose to leverage fine-grained anchors to guide the detection of HOI instances and protect it from noisy backgrounds. To generate fine-grained anchors for all kinds of HOI instances, it is clearly necessary to consider the input image features. There are, however, two inevitable challenges that arise as a result: i) it is difficult to extract pivotal features from images which contain noisy background information; ii) how to semantically align and merge the extracted features with the query embeddings is also an open question.
In this paper, we propose a novel transformer-based model for HOI detection, i.e., FGAHOI: Fine-Grained Anchors for Human-Object Interaction Detection (as shown in Fig. 1). FGAHOI leverages the multi-scale sampling mechanism (MSS) to extract pivotal features from images with noisy background information for HOI instances of variable scale. Based on the sampling strategy and the initial anchor generated by the corresponding query embedding, MSS extracts hierarchical spatial features of the human, the object and the interaction region for each HOI instance. Besides, the hierarchical spatial-aware (HSAM) and task-aware (TAM) merging mechanisms are utilized to semantically align and merge the extracted features with the query embeddings. HSAM merges the extracted features in the hierarchical spatial perspective according to the cross-attention between the features and the query embeddings. Meanwhile, the extracted features are aligned towards the query embeddings according to the cross-attention weights of the merging process. Thereafter, TAM leverages switches which dynamically turn ON and OFF to merge the input features and query embeddings in the task perspective. From our experimental results, we find that it is difficult for an end-to-end training approach to bring transformer-based models to optimal performance when the task requirements become more complex. Inspired by stage-wise training [35], [36] for LTR [37], we propose a novel stage-wise training strategy for FGAHOI. During the training process, we add the important components of the model in turn to clarify the training direction at each stage, so as to maximize the savings in training cost.
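To make the idea concrete, the sketch below shows one way such a stage-wise schedule could be organised in PyTorch. The particular stages, the hypothetical build_fgahoi and train_one_epoch helpers, and the choice of optimizer are assumptions for illustration, not the exact recipe of our training strategy.

# Hypothetical sketch of stage-wise training: each stage enables one more component
# and resumes from the previous stage's weights (assumed schedule, for illustration only).
import torch

STAGES = [
    {"use_hsam": False, "use_tam": False},   # stage 1: base detector with plain anchors (assumption)
    {"use_hsam": True,  "use_tam": False},   # stage 2: add hierarchical spatial-aware merging
    {"use_hsam": True,  "use_tam": True},    # stage 3: full FGAHOI with task-aware merging
]

def train_stage_wise(build_fgahoi, train_one_epoch, loader, epochs_per_stage=30):
    prev_state = None
    for cfg in STAGES:
        model = build_fgahoi(**cfg)
        if prev_state is not None:
            # keep every weight that already exists; newly added modules start fresh
            model.load_state_dict(prev_state, strict=False)
        optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=1e-4)
        for _ in range(epochs_per_stage):
            train_one_epoch(model, optimizer, loader)
        prev_state = model.state_dict()
    return model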
To the best of our knowledge, there is no existing measurement for the difficulty of detecting HOI instances. We observe that two difficulties lie in the detection of human-object pairs, i.e., Uneven Distributed Area in Human-Object Pairs and Long Distance Visual Modeling of Human-Object Pairs. In this paper, we propose two measurements and a novel dataset (HOI-SDC) for these two challenges. HOI-SDC eliminates the influence of other factors (too few training samples for some HOI categories, overly tricky interaction actions, etc.) on model training and focuses the model on these two difficult challenges. Our contributions are fourfold:
- We propose a novel transformer-based human-object interaction detector (FGAHOI) which leverages input features to generate fine-grained anchors for protecting the detection of HOI instances from noisy backgrounds.
- We propose a novel training strategy in which each component of the model is trained in turn to clarify the training direction at each stage, in order to maximize the savings in training cost.
- We propose two ways to measure the difficulty of HOI detection and a dataset, i.e., HOI-SDC, for the two challenges of detecting HOI instances (Uneven Distributed Area in Human-Object Pairs and Long Distance Visual Modeling of Human-Object Pairs).
- Our extensive experiments on three benchmarks, HICO-DET [38], HOI-SDC and V-COCO [39], demonstrate the effectiveness of the proposed FGAHOI.
Specifically, FGAHOI outperforms all existing state-of-the-art methods by a large margin.

2 RELATED WORKS

Two-stage HOI Detection Approaches: The two-stage HOI detection approaches [7], [8], [10], [12], [13], [14], [18], [20], [22], [23], [24], [25], [29] employ an off-the-shelf object detector [1], [3], [4] to localize humans and objects. Afterwards, the backbone features inside the human and object regions are cropped. Part of the two-stage methods [8], [12], [13], [18], [29] treat the human and object features as nodes and employ graph neural networks [40] to predict action classes.

Fig. 2: This figure illustrates the overall structure of FGAHOI. FGAHOI utilizes a hierarchical backbone and a deformable encoder to extract semantic features in a multi-scale manner. In the decoding phase, FGAHOI leverages the multi-scale sampling, hierarchical spatial-aware merging and task-aware merging mechanisms to align input features with query embeddings and assist the generation of fine-grained anchors for the translation of HOI embeddings. At the back end of the pipeline, the HOI detection head leverages the HOI embeddings and the initial anchor to predict all elements of the HOI instances.

The other part of the two-stage approaches [7], [10], [14], [20], [22], [23], [24], [25] leverages multi-stream networks to extract diverse information from the cropped regions, such as human features, object features, spatial information and human pose information.
The information is then fused to predict the action with a post-fusion strategy. Two-stage methods mainly concentrate on predicting the action class in the second stage. Nevertheless, the quality of the features cropped in the first stage cannot be guaranteed in most cases, so these methods cannot achieve an optimal solution. More importantly, integrating the semantic information of human-object pairs requires massive time and computing resources.

One-stage HOI Detection Approaches: Traditional one-stage approaches [9], [11], [15], [31] use interaction points or union regions to detect human-object pairs and identify interactive action classes in parallel. However, these methods, which are hampered by distant human-object pairs, require a gathering and pairing process. With the creation of DETR [32], one-stage approaches have become the current mainstream. QPIC [19] converts the object detection head of DETR into an interaction detection head to predict HOI instances directly. HOITrans [17] combines a transformer [41] and a CNN [42] to predict HOI instances directly from the query embeddings. AS-Net [6] and HOTR [9] each propose a two-branch transformer method that consists of an instance decoder and an interaction decoder to predict the boxes and action classes in parallel. CDN [16] proposes a cascade disentangling decoder to decode action classes. QAHOI [33] directly combines the Swin Transformer [43] and deformable DETR [34] to predict HOI instances.

Anchor-Based Object Detection Transformers: Deformable DETR [34] first introduces the reference point concept, where each reference point predicts sampling offsets to perform deformable cross-attention.
To facilitate extreme region discrimination, Conditional DETR [44] reformulates the attention operation and rebuilds the positional queries based on reference points. Anchor DETR [45] proposes to explicitly capitalize on the spatial prior during cross-attention and box regression by utilizing a predefined 2D anchor point $[c_x, c_y]$. DAB-DETR [46] extends this 2D concept to a 4D anchor box $[c_x, c_y, w, h]$ and proposes to refine it layer by layer. SAM-DETR [47] proposes to directly update the content embeddings by extracting salient points from the image features. In this paper, we propose a novel decoding process for HOI detection. Alignment and fine-grained anchor generation are proposed to align the multi-scale features with the HOI query embeddings and to generate fine-grained anchors for diverse HOI instances with variable spatial distributions, scales and tasks. The fine-grained anchors then guide the deformable attention process to draw key information, rather than noisy background, into the query embeddings.

3 PROPOSED METHOD

In Sec. 3.1, we show the overall architecture of FGAHOI. Then, we describe the multi-scale feature extractor in Sec. 3.2. We introduce the multi-scale sampling strategy in Sec. 3.3.1.
The hierarchical spatial-aware merging mechanism, the task-aware merging mechanism and the decoding process are presented in Sec. 3.3.2, Sec. 3.3.3 and Sec. 3.3.4, respectively. In Sec. 3.4, we present the architecture of the HOI detection head. In Sec. 3.5, the stage-wise training strategy, the loss calculation and the inference process are illustrated.

3.1 Overall Architecture

The overall architecture of our proposed FGAHOI is illustrated in Fig. 2. For a given image $x \in \mathbb{R}^{H \times W \times 3}$, FGAHOI first uses a hierarchical backbone network to extract the multi-scale features $Z_i \in \mathbb{R}^{\frac{H}{4 \times 2^i} \times \frac{W}{4 \times 2^i} \times 2^i C_s}$, $i = 1, 2, 3$. The multi-scale features are then projected from dimension $C_s$ to dimension $C_d$ by a $1 \times 1$ convolution.
After being flattened, the multi-scale features are concatenated into $N_s$ vectors of $C_d$ dimensions. Afterwards, together with the supplementary positional encoding $p \in \mathbb{R}^{N_s \times C_d}$, the multi-scale features are sent into the deformable transformer encoder, which consists of a set of stacked deformable encoder layers, to encode the semantic features. The encoded semantic features $M \in \mathbb{R}^{N_s \times C_d}$ are thus acquired. In the decoding process, the content embeddings $C$ and positional embeddings $P$ are both sets of learnable vectors $\{v_i \mid v_i \in \mathbb{R}^{N_d}\}_{i=1}^{N_q}$. The positional embeddings $P$ first generate the initial anchor $A \in \mathbb{R}^{N_q \times 2}$ through a linear layer. The positional embeddings $P$, the content embeddings $C$, the initial anchor $A$ and the encoded features $M$ are simultaneously sent into the decoder $F_{decoder}(\cdot, \cdot, \cdot, \cdot)$, which is a set of stacked decoder layers. In every decoder layer, the initial anchor first leverages the multi-scale sampling strategy to sample the multi-scale features corresponding to the content embeddings. The sampled features assist the generation of fine-grained anchors and the corresponding attention weights through the hierarchical spatial-aware and task-aware merging mechanisms. The HOI embeddings $H = \{h_i \mid h_i \in \mathbb{R}^{N_d}\}_{i=1}^{N_q}$ are translated from the query embeddings $Q$ through the fine-grained anchors, the attention weights and deformable attention, and are acquired as $H = F_{decoder}(M, P, C, A)$.
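As a rough illustration of this interface, the snippet below sketches how the initial anchor could be obtained from the positional embeddings and handed to the stacked decoder. The tensor sizes, the sigmoid normalisation of the anchor and the variable names are assumptions made for the sketch, not details taken from the released code.

# Minimal sketch of the decoding interface, following the notation of Sec. 3.1.
import torch.nn as nn

N_q, C_d = 300, 256                      # number of queries / embedding dimension (assumed values)
content_embed = nn.Embedding(N_q, C_d)   # content embeddings C
pos_embed = nn.Embedding(N_q, C_d)       # positional embeddings P
anchor_head = nn.Linear(C_d, 2)          # linear layer producing the 2D initial anchor A from P

C = content_embed.weight                 # (N_q, C_d)
P = pos_embed.weight                     # (N_q, C_d)
A = anchor_head(P).sigmoid()             # (N_q, 2) normalised (x, y); sigmoid is an assumption

# M: encoded multi-scale features of shape (N_s, C_d) from the deformable encoder.
# decoder: stacked decoder layers implementing sampling, merging and deformable attention.
# H = decoder(M, P, C, A)                # HOI embeddings of shape (N_q, C_d)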
Eventually, the HOI detection head leverages the HOI embeddings $H$ and the initial anchor to predict the HOI instances $\langle b_h, b_o, c_o, c_v \rangle$, where $b_h$, $b_o$, $c_o$ and $c_v$ stand for the human box coordinates $(x, y, w, h)$, the object box coordinates, the object class and the verb class, respectively.

3.2 Multi-Scale Feature Extractor

High-quality visual features are a prerequisite for successful HOI detection. To extract multi-scale features with long-range semantic information, FGAHOI leverages a multi-scale feature extractor which consists of a hierarchical backbone network and a deformable transformer encoder, formulated as Equation (1):

$$M = F_{encoder}(F_{flatten}(\phi(x)), p, s, r, l) \in \mathbb{R}^{N_s \times C_d}, \qquad (1)$$

where $F_{encoder}(\cdot)$, $F_{flatten}(\cdot)$ and $\phi(\cdot)$ denote the encoder, the flatten operation and the backbone network, respectively; $p$ is the positional encoding, $s$ is the spatial shape of the multi-scale features, $r$ stands for the valid ratios and $l$ represents the level indices of the multi-scale features. The hierarchical backbone network is flexible and can be composed of any convolutional neural network [42], [48], [49], [50] or transformer backbone network [43], [51], [52], [53], [54], [55], [56], [57]. However, CNNs are poor at capturing non-local semantic features such as the relationships between humans and objects. In this paper, we mainly use the tiny and large versions of the Swin Transformer [43] to enhance the ability of the feature extractor to capture long-range features.
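Equation (1) can be mirrored by the following sketch, which projects each backbone level to $C_d$ with a 1x1 convolution, flattens and concatenates the levels, and passes them to a deformable encoder. The backbone, projections, deformable_encoder and pos_encoding modules are placeholders (e.g. a Swin Transformer and a deformable transformer encoder), and omitting the valid ratios is a simplification of this sketch.

# Sketch of the multi-scale feature extractor, M = F_encoder(F_flatten(phi(x)), p, s, r, l).
import torch

def extract_multi_scale_features(x, backbone, projections, deformable_encoder, pos_encoding):
    feats = backbone(x)                                  # phi(x): list of (B, C_i, H_i, W_i) maps
    flat, shapes = [], []
    for feat, proj in zip(feats, projections):           # projections: one nn.Conv2d(C_i, C_d, 1) per level
        feat = proj(feat)                                # project C_i -> C_d with a 1x1 convolution
        shapes.append(feat.shape[-2:])                   # spatial shape s of this level
        flat.append(feat.flatten(2).transpose(1, 2))     # (B, H_i * W_i, C_d)
    src = torch.cat(flat, dim=1)                         # (B, N_s, C_d) flattened multi-scale features
    level_index = torch.arange(len(feats))               # level indices l
    p = pos_encoding(src, shapes)                        # positional encoding p (placeholder module)
    # valid ratios r (derived from padding masks) are omitted here for brevity
    return deformable_encoder(src, p, shapes, level_index)   # encoded features M, (B, N_s, C_d)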
3.3 Why FGAHOI Decodes Better?

During the decoding process, the fine-grained anchors can be regarded as a positional prior that lets the decoder focus on the region of interest, directly guiding the decoder to add semantic information to the content embeddings, which are used to predict all elements of the HOI instances. Therefore, fine-grained anchors play the following two crucial roles in HOI detection: i) fine-grained anchors directly determine whether the information passed from the input features to the content embeddings is instance-critical or noisy background information; ii) fine-grained anchors determine the quality of the alignment between the query embeddings and the multi-scale features of the input scenario. Both are crucial factors for the quality of the decoding results. Existing methods [33], [34] directly utilize the query embeddings to generate fine-grained anchors based on the initial anchor, without considering the multi-scale features of the input scenario or the semantic alignment between the query embeddings and the input features at all. Our FGAHOI instead proposes a novel fine-grained anchor generator which consists of the multi-scale sampling, hierarchical spatial-aware merging and task-aware merging mechanisms (as shown in Fig. 3). The generator adequately leverages the initial anchor, the multi-scale features and the query embeddings to generate suitable fine-grained anchors for diverse input scenarios and to align the semantic information between different input scenarios and the query embeddings. The FGAHOI decoding process is formulated as follows:

$$H = \mathrm{Defattn}(\mathrm{Task}(\mathrm{HierSpatial}(\{x_s^i\}, C_u), C_u), M, C_u), \qquad (2)$$

where $C_u$ is the content embeddings updated by the positional embeddings, $\mathrm{Defattn}$ represents the deformable attention, $x_s^i$ represents the sampled features of the $i$-th level features and $M$ is the encoded input features.
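Read as code, Eq. (2) composes the three mechanisms inside each decoder layer roughly as in the sketch below; the function names mirror the terms of the equation and are illustrative, not the repository's actual identifiers.

# One FGAHOI decoder layer written as the composition in Eq. (2) (illustrative pseudo-module).
def decoder_layer(M, P, C, A, self_attn_update, mss, hsam, tam, deformable_attention):
    C_u = self_attn_update(C, P)                  # content embeddings updated by P, see Eq. (4)
    x_s = mss(M, A)                               # {x_s^i}: multi-scale sampling around the initial anchor
    merged = hsam(x_s, C_u)                       # hierarchical spatial-aware merging
    anchors, weights = tam(merged, C_u)           # task-aware merging -> fine-grained anchors and weights
    H = deformable_attention(C_u, M, anchors, weights)   # deformable attention yields the HOI embeddings
    return H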
3.3.1 Multi-Scale Sampling Mechanism

The HOI instances contained in the input scenarios usually vary in size, with some instances taking up most of the area of the input scenario and others occupying perhaps only a few pixels. FGAHOI aims at detecting all instances in the scene, regardless of their size. Therefore, when the initial anchor is used to sample the multi-scale features, for shallow features, which are mainly used to detect small instances, the sampling strategy only samples a small range of features around the initial anchor; in contrast, for deep features, which are mainly used to detect large instances, the sampling strategy samples a large range of features around the initial anchor. As shown in Fig. 3 (b), the encoded features are first reshaped back to their original spatial shapes. Based on the initial anchor, the generator then leverages the sampling strategy to sample the multi-scale features as follows:

$$x_s^i = F_{sample}(\mathrm{reshape}(M)_i, A, size_i, bilinear), \qquad (3)$$

where $size_i$ $(i = 0, 1, 2)$ denotes the sampling size of the $i$-th level features, $M$ is the encoded input features and $A$ is the initial anchor. Inspired by [58], we utilize bilinear interpolation in the sampling strategy.
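A minimal sketch of Eq. (3) is given below, using torch.nn.functional.grid_sample as the bilinear sampler; the concrete window sizes per level and the square sampling pattern around the anchor are assumptions of the sketch.

# Sketch of the multi-scale sampling strategy in Eq. (3): a window of features is sampled
# bilinearly around each initial anchor, with a larger window on deeper feature levels.
import torch
import torch.nn.functional as F

def sample_level(feat, anchors, size):
    """feat: (B, C_d, H, W) one reshaped level; anchors: (B, N_q, 2) in [0, 1]; size: window width."""
    B, _, H, W = feat.shape
    N_q = anchors.shape[1]
    lin = torch.linspace(-0.5, 0.5, size, device=feat.device)
    dy, dx = torch.meshgrid(lin * size / H, lin * size / W, indexing="ij")   # offsets in [0, 1] coords
    offsets = torch.stack((dx, dy), dim=-1).reshape(1, 1, size * size, 2)
    points = anchors.view(B, N_q, 1, 2) + offsets                            # (B, N_q, size*size, 2)
    grid = points * 2.0 - 1.0                                                # grid_sample expects [-1, 1]
    sampled = F.grid_sample(feat, grid, mode="bilinear", align_corners=False)
    return sampled.permute(0, 2, 3, 1)                                       # (B, N_q, size*size, C_d)

# Smaller windows for shallow (high-resolution) levels, larger ones for deep levels (sizes assumed):
# x_s = [sample_level(f, A, s) for f, s in zip(reshaped_levels, (2, 4, 6))]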
3.3.2 Hierarchical Spatial-Aware Merging Mechanism

In order to better utilize the hierarchical spatial information of the sampled features for aligning the content embeddings with the sampled features, we propose a novel hierarchical spatial-aware merging mechanism (HSAM) which utilizes the content embeddings to extract hierarchical spatial information and merge the sampled features, as shown in Fig. 3 (c). The content embeddings are first updated by the positional embeddings and a multi-head self-attention mechanism as follows:

$$C_u = C + F_{\mathrm{MHA}}\big((C + P)W^q, (C + P)W^k, C W^v\big), \qquad (4)$$

where $W^q$, $W^k$ and $W^v$ denote the parameter matrices for query, key and value in the self-attention mechanism, respectively, $F_{\mathrm{MHA}}(\cdot)$ is the multi-head attention mechanism, and $C$ and $P$ represent the content and positional embeddings, respectively.
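Eq. (4) is a standard self-attention update of the content embeddings conditioned on the positional embeddings and can be sketched with torch.nn.MultiheadAttention; the dimensions and the absence of normalisation layers are simplifications of this sketch.

# Sketch of Eq. (4): content embeddings updated by the positional embeddings via self-attention.
import torch
import torch.nn as nn

N_q, C_d, heads = 300, 256, 8                      # assumed sizes
self_attn = nn.MultiheadAttention(embed_dim=C_d, num_heads=heads, batch_first=True)

C = torch.randn(1, N_q, C_d)                       # content embeddings C (batch of 1)
P = torch.randn(1, N_q, C_d)                       # positional embeddings P

attn_out, _ = self_attn(query=C + P, key=C + P, value=C)   # F_MHA((C+P)W^q, (C+P)W^k, C W^v)
C_u = C + attn_out                                          # updated content embeddings C_u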
Fig. 3: The architecture of FGAHOI's decoder. (a) Illustration of FGAHOI's decoding process. (b) Illustration of the multi-scale sampling mechanism. (c) Illustration of the hierarchical spatial-aware merging mechanism. (d) Illustration of the task-aware merging mechanism. (e) Generation process of the fine-grained anchors and the corresponding attention weights.

where $W^q$, $W^k$ and $W^v$ denote the parameter matrices for query, key and value in the self-attention mechanism, respectively.
$F_{\mathrm{MHA}}(\cdot)$ is the multi-head attention mechanism. $C$ and $P$ represent the content and positional embeddings, respectively. Then, the updated content embeddings are leveraged to merge the sampled features; the formulation is as follows:

$x^i_m = F_{\mathrm{concat}}\big(\mathrm{head}_1, \ldots, \mathrm{head}_{N_H}\big) W^O$, where $\mathrm{head}_n = \mathrm{Softmax}\Big(\frac{(C_u W^q_n)(x^i_s W^k_n)^T}{\sqrt{d_k}}\Big)(x^i_s W^v_n)$,  (5)

where $x^i_m$ represents the merged features of the $i$-th level sampled features $x^i_s$, and $C_u$ is the content embeddings updated by the positional embeddings. $W^O$ denotes the parameter matrix for multi-head concatenation. $W^q_n$, $W^k_n$ and $W^v_n$ denote the parameter matrices for query, key and value of the $n$-th attention head. $F_{\mathrm{concat}}$ is the concatenation operation. $d_k = N_{hd}/N_H$, where $N_{hd}$ is the hidden dimension and $N_H$ is the number of attention heads. After the sampled features of each scale are merged on the basis of spatial information, the merged features of all scales are first concatenated together as follows:

$X_m = F_{\mathrm{concat}}\big(\{x^i_m\}_{i=0,1,2}\big) \in \mathbb{R}^{B \times N_q \times N_L \times N_{hd}}$,  (6)

where $N_L$ is the number of scales and $X_m$ is the concatenated multi-scale features.
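To make the spatial-aware merging concrete, a minimal PyTorch-style sketch of Eqs. (4)-(6) is given below. It is an illustration rather than the released implementation: the module name, the tensor shapes, the per-query grouping of the sampled points and the use of nn.MultiheadAttention are assumptions made here for readability.

```python
import torch
import torch.nn as nn

class HierarchicalSpatialAwareMerging(nn.Module):
    """Sketch of Eqs. (4)-(6): update the content embeddings with positional
    information, then merge the per-scale sampled features with multi-head
    cross-attention and concatenate the results over scales."""

    def __init__(self, hidden_dim=256, num_heads=8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(hidden_dim, num_heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(hidden_dim, num_heads, batch_first=True)

    def forward(self, content, pos, sampled_per_scale):
        # content, pos: (B, Nq, D); sampled_per_scale: list of N_L tensors (B, Nq, Ns, D)
        q = k = content + pos                          # queries/keys carry position, values do not (Eq. 4)
        content_u = content + self.self_attn(q, k, content)[0]

        B, Nq, D = content_u.shape
        merged = []
        for feats in sampled_per_scale:                # one cross-attention per scale (Eq. 5)
            Ns = feats.shape[2]
            q_i = content_u.reshape(B * Nq, 1, D)      # each query attends only to its own sampled points
            kv = feats.reshape(B * Nq, Ns, D)
            x_m = self.cross_attn(q_i, kv, kv)[0].reshape(B, Nq, D)
            merged.append(x_m)
        return content_u, torch.stack(merged, dim=2)   # X_m: (B, Nq, N_L, D), Eq. (6)

# Toy shapes: 2 images, 300 queries, 3 scales, 32 sampled points per query and scale.
hsam = HierarchicalSpatialAwareMerging()
C, P = torch.randn(2, 300, 256), torch.randn(2, 300, 256)
samples = [torch.randn(2, 300, 32, 256) for _ in range(3)]
C_u, X_m = hsam(C, P, samples)                         # X_m.shape == (2, 300, 3, 256)
```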
$X_m$ is then merged by the scale-aware merging mechanism as follows:

$X_u = F_{\mathrm{concat}}\big(\mathrm{head}_1, \ldots, \mathrm{head}_{N_H}\big) W^O$, where $\mathrm{head}_n = \mathrm{Softmax}\Big(\frac{(C_u W^q_n)(X_m W^k_n)^T}{\sqrt{d_k}}\Big)(X_m W^v_n)$,  (7)

where $X_u$ is the merged multi-scale features used for updating the content embeddings.

3.3.3 Task-Aware Merging Mechanism
Considering the diversity of HOI instances, the task-aware merging mechanism (TAM) is proposed to fuse the merged multi-scale features with the content embeddings and to align the content embeddings with the merged features from a task-aware perspective, as shown in Fig. 3 (d). It leverages the merged multi-scale features and the content embeddings to generate a dynamic switch for selecting suitable channels in the merging process. The updated content embeddings and the merged multi-scale features are first stacked together; the formulation is as follows:

$X = F_{\mathrm{stack}}(C_u, X_u) \in \mathbb{R}^{B \times N_q \times (2 \times N_{hd})}$,  (8)

where $C_u$ is the content embeddings updated by the positional embeddings and $X_u$ is the merged multi-scale features. Thereafter, we use a cross-attention mechanism to update them as follows:

$X_{\mathrm{switch}} = F_{\mathrm{concat}}\big(\mathrm{head}_1, \ldots, \mathrm{head}_{N_H}\big) W^O$, where $\mathrm{head}_n = \mathrm{Softmax}\Big(\frac{(C_u W^q_n)(X W^k_n)^T}{\sqrt{d_k}}\Big)(X W^v_n)$.  (9)
TABLE 1: Instance statistics of the two difficulties. We quantify all the instances in the HAKE-HOI [20] dataset according to the two newly proposed metrics and divide them into ten intervals.

Dataset   | Metric | IMI0   | IMI1  | IMI2  | IMI3  | IMI4  | IMI5  | IMI6  | IMI7  | IMI8  | IMI9
HAKE-HOI  | num_AR | 104243 | 65499 | 44303 | 31241 | 21982 | 11888 | 4670  | 1818  | 598   | 168
HAKE-HOI  | num_LR | 424    | 1243  | 1784  | 3043  | 8668  | 70191 | 83314 | 79427 | 34017 | 4299
SDC Train | num_AR | 62526  | 30235 | 16346 | 12013 | 10269 | 11189 | 4223  | 1540  | 423   | 139
SDC Train | num_LR | 177    | 515   | 874   | 1656  | 5208  | 48798 | 38517 | 29544 | 20265 | 3349
SDC Test  | num_AR | 24737  | 0     | 0     | 0     | 0     | 0     | 0     | 0     | 0     | 0
SDC Test  | num_LR | 153    | 415   | 464   | 834   | 2704  | 20167 | 0     | 0     | 0     | 0

Fig. 4: The prediction process of the HOI detection head. See Sec. 3.4 for more details.

Then, the generated information is utilized to obtain the dynamic switch for merging; the formulation is as follows:

$\mathrm{Switch}^{\gamma} = F_{\mathrm{normalize}}\big(F_{\mathrm{mlp}}(X_{\mathrm{switch}})\big)^{\gamma} \in \mathbb{R}^{B \times N_q \times 2 \times 2}$,  (10)

where $\mathrm{Switch}^{\gamma}$ is the dynamic switch for the $\gamma$-th dimension of the merged features, $F_{\mathrm{normalize}}(\cdot)$ denotes the hard-sigmoid normalization, and $F_{\mathrm{mlp}}(\cdot)$ denotes a feed-forward network consisting of two linear layers and one ReLU activation layer. Inspired by [59], the merging mechanism is designed as follows:

$U^{\gamma} = F_{\mathrm{Max}}\big\{\mathrm{Switch}^{\gamma}_{i,0} \odot X^{\gamma}_u + \mathrm{Switch}^{\gamma}_{i,1}\big\}_{i=0,1} + C^{\gamma}_u$,  (11)

where $U^{\gamma}$ is the $\gamma$-th feature of the content embeddings updated by the merged multi-scale features, and $F_{\mathrm{Max}}$ is the max operation.
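The task-aware merging step can be sketched in the same spirit; the snippet below illustrates Eqs. (8)-(11) only. The MLP width, the use of F.hardsigmoid for the normalization, and the reading of the switch as two (scale, shift) pairs applied to all channels are assumptions rather than the released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TaskAwareMerging(nn.Module):
    """Sketch of Eqs. (8)-(11): derive a per-query dynamic switch from the
    stacked embeddings and use it to gate the merged multi-scale features
    before adding them back onto the content embeddings."""

    def __init__(self, hidden_dim=256, num_heads=8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(hidden_dim, num_heads,
                                                kdim=2 * hidden_dim, vdim=2 * hidden_dim,
                                                batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
                                 nn.Linear(hidden_dim, 4))       # two (scale, shift) pairs

    def forward(self, content_u, x_u):
        # content_u, x_u: (B, Nq, D)
        x = torch.cat([content_u, x_u], dim=-1)                  # Eq. (8): stacked features, (B, Nq, 2D)
        x_switch = self.cross_attn(content_u, x, x)[0]           # Eq. (9): cross-attention update
        switch = F.hardsigmoid(self.mlp(x_switch))               # Eq. (10): hard-sigmoid normalization (assumed)
        switch = switch.view(*switch.shape[:2], 2, 2)            # (B, Nq, 2, 2)
        branches = [switch[..., i, 0:1] * x_u + switch[..., i, 1:2] for i in (0, 1)]
        return torch.max(branches[0], branches[1]) + content_u   # Eq. (11): max over the two branches

tam = TaskAwareMerging()
U = tam(torch.randn(2, 300, 256), torch.randn(2, 300, 256))      # updated content embeddings, (2, 300, 256)
```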
3.3.4 Decoding with Fine-Grained Anchor
As shown in Fig. 3 (e), the updated content embeddings are used to generate the fine-grained anchors and attention weights through a linear layer, a reshape operation and a softmax function, formulated as follows:

$\mathcal{A} = F_{\mathrm{lin\&res}}(U) \in \mathbb{R}^{B \times N_q \times N_H \times N_L \times N_{\mathcal{A}} \times 2}$,  (12)

$\mathcal{W} = F_{\mathrm{lin\&res\&soft}}(U) \in \mathbb{R}^{B \times N_q \times N_H \times N_L \times N_{\mathcal{A}}}$.  (13)

As shown in Fig. 3 (a), the fine-grained anchors and attention weights are utilized to aggregate semantic features from the encoded features of the input scenario into the content embeddings; the formulation is as follows:

$\mathcal{P}_q = \sum_{n=1}^{N_H} W_n \Big[ \sum_{l=1}^{N_L} \sum_{k=1}^{N_{\mathcal{A}}} \mathcal{W}^{l}_{nqk} \cdot W'_n\, x_l\big(\mathcal{A}^{l}_{nqk}\big) \Big]$,  (14)

where $\mathcal{P}_q$ is the extracted semantic information used to translate the $q$-th content embedding into an HOI embedding. $\mathcal{A}^{l}_{nqk}$ and $\mathcal{W}^{l}_{nqk}$ represent the $k$-th fine-grained anchor and the corresponding attention weight of the $n$-th attention head at the $l$-th scale for the $q$-th query embedding. Both $W_n$ and $W'_n$ are parameter matrices of the $n$-th attention head. $N_{\mathcal{A}}$ is the number of fine-grained anchors of each scale in one attention head.
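Because Eqs. (12)-(14) follow the general pattern of deformable attention, the following sketch shows one plausible realization. The sigmoid used to normalize the anchors, the softmax taken jointly over levels and points, the (x, y) coordinate convention for grid_sample and all shape choices are our assumptions for illustration, not the released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FineGrainedAnchorDecoding(nn.Module):
    """Sketch of Eqs. (12)-(14): predict fine-grained anchors and attention
    weights from the updated content embeddings, sample the encoded multi-scale
    feature maps at those anchors, and merge the samples into HOI embeddings."""

    def __init__(self, hidden_dim=256, num_heads=8, num_levels=3, num_anchors=4):
        super().__init__()
        self.nh, self.nl, self.na = num_heads, num_levels, num_anchors
        self.hd = hidden_dim // num_heads
        self.anchor_proj = nn.Linear(hidden_dim, num_heads * num_levels * num_anchors * 2)  # Eq. (12)
        self.weight_proj = nn.Linear(hidden_dim, num_heads * num_levels * num_anchors)      # Eq. (13)
        self.value_proj = nn.Linear(hidden_dim, hidden_dim)      # W'_n for all heads
        self.out_proj = nn.Linear(hidden_dim, hidden_dim)        # W_n for all heads

    def forward(self, u, feat_maps):
        # u: (B, Nq, D); feat_maps: list of N_L encoded maps, each (B, D, H_l, W_l)
        B, Nq, D = u.shape
        anchors = self.anchor_proj(u).view(B, Nq, self.nh, self.nl, self.na, 2).sigmoid()
        weights = self.weight_proj(u).view(B, Nq, self.nh, self.nl * self.na).softmax(-1)
        weights = weights.view(B, Nq, self.nh, self.nl, self.na)

        out = u.new_zeros(B, Nq, self.nh, self.hd)
        for l, fmap in enumerate(feat_maps):
            v = self.value_proj(fmap.flatten(2).transpose(1, 2))          # (B, H*W, D)
            v = v.transpose(1, 2).reshape(B * self.nh, self.hd, *fmap.shape[-2:])
            grid = anchors[:, :, :, l].permute(0, 2, 1, 3, 4) * 2 - 1     # grid_sample expects [-1, 1]
            grid = grid.reshape(B * self.nh, Nq, self.na, 2)
            sampled = F.grid_sample(v, grid, align_corners=False)         # (B*nh, hd, Nq, na)
            sampled = sampled.view(B, self.nh, self.hd, Nq, self.na)
            w = weights[:, :, :, l].permute(0, 2, 1, 3).unsqueeze(2)      # (B, nh, 1, Nq, na)
            out += (sampled * w).sum(-1).permute(0, 3, 1, 2)              # accumulation of Eq. (14)
        return self.out_proj(out.reshape(B, Nq, D))                       # HOI embeddings P_q

dec = FineGrainedAnchorDecoding()
u = torch.randn(2, 300, 256)
maps = [torch.randn(2, 256, s, s) for s in (64, 32, 16)]                  # toy encoded multi-scale maps
hoi_emb = dec(u, maps)                                                    # (2, 300, 256)
```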
3.4 HOI Detection Head
FGAHOI leverages a simple HOI detection head to predict all elements of the HOI instances. As shown in Fig. 4, the detection head utilizes the HOI embeddings and the initial anchors to localize the human and object boxes. In this process, each initial anchor acts as the base point for the bounding boxes of the corresponding pair of a human and an object; the formulation is as follows:

$b_h = F_{\mathrm{mlp}}(H)[\cdots, :2] + \textit{initial anchor} \in \mathbb{R}^{N_q \times 4}$,  (15)

$b_o = F_{\mathrm{mlp}}(H)[\cdots, :2] + \textit{initial anchor} \in \mathbb{R}^{N_q \times 4}$,  (16)

$c_o = F_{\mathrm{linear}}(H) \in \mathbb{R}^{N_q \times num_o}$,  (17)

$c_v = F_{\mathrm{linear}}(H) \in \mathbb{R}^{N_q \times num_v}$,  (18)

where $F_{\mathrm{mlp}}$ denotes a feed-forward network consisting of three linear layers and three ReLU activation layers, and $F_{\mathrm{linear}}$ stands for a linear layer. $num_o$ and $num_v$ are the numbers of object and action classes, respectively. $H$ denotes the HOI embeddings.

3.5 Training and Inference

3.5.1 Stage-wise Training
Inspired by the stage-wise training approach [35], [36], which decouples feature learning and classifier learning into two independent stages for LTR [37], we propose a novel stage-wise training strategy for FGAHOI. We start by training the base network (FGAHOI without any merging mechanism) in an end-to-end manner. We then add the merging mechanisms in turn to the trained base network for another short period of training. In this phase, the parameters of the trained base network are leveraged as pretrained parameters and no parameters are frozen during the training process.
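The stage-wise schedule can be illustrated with the toy sketch below; the model classes, step counts and optimizer settings are placeholders, and only the two-phase pattern (train the base network, then attach the merging mechanisms, reuse the phase-one weights, and keep every parameter trainable) reflects the strategy described above.

```python
import torch
import torch.nn as nn

class ToyFGAHOI(nn.Module):
    """Toy stand-in for FGAHOI used only to illustrate the training schedule."""
    def __init__(self, with_merging=False):
        super().__init__()
        self.backbone = nn.Linear(16, 32)                           # stand-in for backbone + encoder/decoder
        self.merging = nn.Linear(32, 32) if with_merging else None  # stand-in for HSAM/TAM
        self.head = nn.Linear(32, 4)                                # stand-in for the HOI detection head

    def forward(self, x):
        feat = self.backbone(x)
        if self.merging is not None:
            feat = feat + self.merging(feat)
        return self.head(feat)

def train_for(model, steps, lr=1e-4):
    opt = torch.optim.AdamW(model.parameters(), lr=lr)              # nothing is frozen in either phase
    for _ in range(steps):
        x, y = torch.randn(8, 16), torch.randn(8, 4)
        loss = nn.functional.mse_loss(model(x), y)
        opt.zero_grad(); loss.backward(); opt.step()

# Phase 1: train the base network (no merging mechanisms) end-to-end.
base = ToyFGAHOI(with_merging=False)
train_for(base, steps=200)

# Phase 2: attach the merging mechanisms, initialise the shared parameters from
# phase 1 (the new modules keep their fresh initialisation) and train again for
# a shorter period.
full = ToyFGAHOI(with_merging=True)
full.load_state_dict(base.state_dict(), strict=False)
train_for(full, steps=50)
```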
Fig. 5: Visualization of HOI detection. Humans and objects are represented by pink and blue bounding boxes, respectively, and interactions are marked by grey lines linking the box centers.
Kindly refer to Sec. 5.6.1 for more details.
Fig. 6: (a) illustrates the excellent long-range visual modelling capability. (b) demonstrates remarkable robustness. (c) shows the superior capability for identifying small HOI instances. Kindly refer to Sec. 5.6.1 for more details.

3.5.2 Loss Calculation
Inspired by the set-based training process of HOI-Trans [17], QPIC [19], CDN [16] and QAHOI [33], we first use bipartite matching with the Hungarian algorithm to match each ground truth with its best-matching prediction.
For subsequent back-propagation, a loss is then established between the matched predictions and their corresponding ground truths. The formulation is as follows:

$L = \lambda_o L^o_c + \lambda_v L^v_c + \sum_{k \in (h,o)} \big( \lambda_b L^k_b + \lambda_{GIoU} L^k_{GIoU} \big)$,  (19)

where $L^o_c$ and $L^v_c$ represent the object-class and action-class losses, respectively. We utilize the modified focal loss [60] for $L^v_c$ and the sigmoid focal loss [61] for $L^o_c$. $L_b$ is the box regression loss and consists of the $L_1$ loss. $L_{GIoU}$ denotes the generalized intersection-over-union loss, the same as the function used in QPIC [19]. $\lambda_o$, $\lambda_v$, $\lambda_b$ and $\lambda_{GIoU}$ are the hyper-parameters for adjusting the weight of each loss.

3.5.3 Inference
The inference process composites the outputs of the HOI detection head to form HOI triplets. Formally, the $i$-th output prediction is generated as $\langle b^h_i, b^o_i, \arg\max_k c^{hoi}_i(k) \rangle$. The HOI triplet score $c^{hoi}_i$ is obtained by combining the action classification score $c^v_i$ and the object classification score $c^o_i$, formulated as $c^{hoi}_i = c^v_i \cdot c^o_i$.
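As a minimal sketch, the loss of Eq. (19) can be assembled as below for predictions that have already been matched to ground truths by the Hungarian algorithm. The lambda values are placeholders, the boxes are assumed to be in xyxy format, and the plain sigmoid focal loss stands in for the modified focal loss of [60].

```python
import torch
import torch.nn.functional as F
from torchvision.ops import sigmoid_focal_loss, generalized_box_iou

def hoi_loss(pred, tgt, lam_o=1.0, lam_v=1.0, lam_b=2.5, lam_giou=1.0):
    """Sketch of Eq. (19) over matched prediction/target pairs."""
    # Classification terms for the object and the action (multi-label one-hot targets).
    l_obj = sigmoid_focal_loss(pred['obj_logits'], tgt['obj_onehot'], reduction='mean')
    l_act = sigmoid_focal_loss(pred['act_logits'], tgt['act_onehot'], reduction='mean')
    # Box regression and GIoU terms for both the human (h) and the object (o) box.
    l_box, l_giou = 0.0, 0.0
    for k in ('h', 'o'):
        l_box = l_box + F.l1_loss(pred[f'box_{k}'], tgt[f'box_{k}'])
        giou = torch.diag(generalized_box_iou(pred[f'box_{k}'], tgt[f'box_{k}']))
        l_giou = l_giou + (1.0 - giou).mean()
    return lam_o * l_obj + lam_v * l_act + lam_b * l_box + lam_giou * l_giou
```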
4 PROPOSED DATASET
There are two main difficulties with human-object pairs: i) uneven size distribution of the human and the object within a pair, and ii) excessive distance between the human and the object within a pair. To the best of our knowledge, there are no existing metrics to measure these two difficulties. In this paper, we propose two metrics, $AR$ and $LR$, for measuring them, and we then formulate two novel challenges corresponding to these two difficulties. In addition, we propose a novel Set for these Double Challenges (HOI-SDC). The data are selected from HAKE-HOI [20], which is re-split from HAKE [62] and provides 110K+ images. HAKE-HOI has 117 action classes, 80 object classes and 520 HOI categories.

Fig. 7: Comparison of fine-grained anchors between FGAHOI and QAHOI. We visualize the fine-grained anchors corresponding to all attention heads and the corresponding attention weights, where the shades of the colors correspond to the magnitude of the weights. FGAHOI clearly focuses more accurately on the humans, objects and interaction areas. Kindly refer to Sec. 5.6.2 for more details.
4.1 HOI-UDA
We propose a novel measurement for the challenge of Uneven Distributed Area in Human-Object Pairs; the formulation is as follows:

$AR = \dfrac{Area_h \cdot Area_o}{Area^2_{hoi}}$,  (20)

where $Area_h$, $Area_o$ and $Area_{hoi}$ denote the areas of the human, the object and the HOI instance, respectively (as shown in Fig. 8 (a)). We quantify all the instances in HAKE-HOI into ten intervals and count the number of instances in each interval in the second and fifth rows of Table 1. To better evaluate the ability of the model to detect HOI for human-object pairs with unevenly distributed areas, we specially select 24737 HOI instances of $IMI^{UDA}_0$ for the testing set.

4.2 HOI-LDVM
A novel measurement for the challenge of Long Distance Visual Modeling of Human-Object Pairs is proposed in Eq. 21:

$LR = \dfrac{L_h + L_o}{L_{hoi}}$,  (21)

where $L_h$, $L_o$ and $L_{hoi}$ denote the sizes we define for the human, the object and the HOI instance, respectively (as shown in Fig. 8 (b)). The instances are quantified in the third and sixth rows of Table 1. To better evaluate the ability of the model to detect HOI for human-object pairs with long distances, we specially select 24737 HOI instances of $IMI^{LDVM}_0 \sim IMI^{LDVM}_6$ for the testing set.
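The two difficulty metrics are straightforward to compute per human-object pair; a small sketch is given below. Taking the HOI instance box as the union of the human and object boxes and measuring the size $L$ as the box diagonal are assumptions made here; the precise definitions follow Fig. 8.

```python
import numpy as np

def box_area(box):
    x1, y1, x2, y2 = box
    return max(x2 - x1, 0.0) * max(y2 - y1, 0.0)

def box_size(box):
    # "Size" is taken here as the diagonal length of the box (assumption).
    x1, y1, x2, y2 = box
    return float(np.hypot(x2 - x1, y2 - y1))

def union_box(b1, b2):
    return (min(b1[0], b2[0]), min(b1[1], b2[1]), max(b1[2], b2[2]), max(b1[3], b2[3]))

def ar_lr(human_box, object_box):
    """AR (Eq. 20) and LR (Eq. 21) for one human-object pair; the HOI instance
    box is taken as the union of the two boxes (assumption)."""
    hoi_box = union_box(human_box, object_box)
    ar = box_area(human_box) * box_area(object_box) / (box_area(hoi_box) ** 2)
    lr = (box_size(human_box) + box_size(object_box)) / box_size(hoi_box)
    return ar, lr

# Example: a person and a small, distant object (boxes in xyxy pixel coordinates).
print(ar_lr((10, 10, 110, 210), (400, 50, 430, 80)))
```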
4.3 HOI-SDC
In order to avoid the training process of the model being influenced by a portion of HOI classes with a very small number of instances, we remove some of the HOI classes containing a very small number of instances, as well as the HOI classes with no interaction, from the training Set for the Double Challenge. Finally, there are a total of 321 HOI classes, 74 object classes and 93 action classes. The training and testing sets contain 37,155 and 9,666 images, respectively. The detailed distribution of HOI instances is shown in Table 1.

Fig. 8: Proposed metrics for the difficulties existing with HOI instances. (a) Metric for the uneven size distribution of humans and objects. (b) Metric for the excessive distance between person and object. Kindly refer to Sec. 4.1 and 4.2 for more details.

TABLE 2: Performance comparison with the state-of-the-art methods on the HICO-DET dataset. 'A', 'S', 'P' and 'L' represent the visual (appearance) feature, spatial feature, human pose feature and language feature, respectively. 'Fine-tuned' means the detection parameters of the model are pre-trained on the MS-COCO dataset. A backbone marked with '*' and '+' is pre-trained on ImageNet-22K with 384x384 input resolution. QAHOI (R) means the results are reproduced on the same machine as our model. Kindly refer to Sec. 5.4.1 for more details.

Columns: Architecture | Method | Backbone | Fine-tuned | Feature | Default Full / Rare / Non-Rare | Known Object Full / Rare / Non-Rare

Two-Stage Methods, Multi-stream:
  No-Frill [23]   ResNet-152     ✗  A+S+P    17.18 / 12.17 / 18.08   - / - / -
  PMFNet [24]     ResNet-50-FPN  ✗  A+S      17.46 / 15.65 / 18.00   20.34 / 17.47 / 21.20
  ACP [25]        ResNet-101     ✓  A+S+L    21.96 / 16.43 / 23.62   - / - / -
  PD-Net [10]     ResNet-152     ✗  A+S+P+L  22.37 / 17.61 / 23.79   26.86 / 21.70 / 28.44
  VCL [7]         ResNet-50      ✓  A+S      23.63 / 17.21 / 25.55   25.98 / 19.12 / 28.03
Two-Stage Methods, Graph-Based:
  RPNN [8]        ResNet-50      ✗  A+P      17.35 / 12.78 / 18.71   - / - / -
  VSGNet [13]     ResNet-152     ✗  A+S      19.80 / 16.05 / 20.91   - / - / -
  DRG [12]        ResNet-50-FPN  ✓  A+S+L    24.53 / 19.47 / 26.04   27.98 / 23.14 / 29.43
  SCG [18]        ResNet-50-FPN  ✓  A+S      31.33 / 24.72 / 33.31   34.37 / 27.18 / 36.50
One-Stage Methods, Interaction points:
  IP-Net [15]     ResNet-50-FPN  ✗  A        19.56 / 12.79 / 21.58   22.05 / 15.77 / 23.92
  PPDM [31]       Hourglass-104  ✓  A        21.73 / 13.78 / 24.10   24.58 / 16.65 / 26.84
  GGNet [11]      Hourglass-104  ✓  A        23.47 / 16.48 / 25.60   27.36 / 20.23 / 29.48
One-Stage Methods, Transformer-Based:
  HOITrans [17]   ResNet-101     ✓  A        26.60 / 19.15 / 28.54   29.10 / 20.98 / 31.57
  HOTR [9]        ResNet-50      ✗  A        23.46 / 16.21 / 25.65   - / - / -
                  ResNet-50      ✓  A        25.10 / 17.34 / 27.42   - / - / -
  AS-Net [6]      ResNet-50      ✗  A        24.40 / 22.39 / 25.01   27.41 / 25.44 / 28.00
                  ResNet-50      ✓  A        28.87 / 24.25 / 30.25   31.74 / 27.07 / 33.14
  QPIC [19]       ResNet-50      ✓  A        29.07 / 21.85 / 31.23   31.68 / 24.14 / 33.93
                  ResNet-50      ✗  A        24.21 / 17.51 / 26.21   - / - / -
  QAHOI [33]      Swin-Tiny      ✗  A        28.47 / 22.44 / 30.27   30.99 / 24.83 / 32.84
                  Swin-Large*+   ✗  A        35.78 / 29.80 / 37.56   37.59 / 31.66 / 39.36
  QAHOI (R)       Swin-Tiny      ✗  A        27.67 / 20.22 / 29.69   30.06 / 22.95 / 32.18
                  Swin-Large*+   ✗  A        35.43 / 29.22 / 37.29   37.23 / 31.01 / 39.09
  FGAHOI          Swin-Tiny      ✗  A        29.94 / 22.24 / 32.24   32.48 / 24.16 / 34.97
                  Swin-Large*+   ✗  A        37.18 / 30.71 / 39.11   38.93 / 31.93 / 41.02
5 EXPERIMENTS

5.1 Dataset
Experiments are conducted on three HOI datasets: HICO-DET [38], V-COCO [39] and HOI-SDC. HICO-DET [38] has 80 object classes, 117 action classes and 600 HOI classes.

TABLE 3: Performance comparison with the state-of-the-art methods on the HOI-SDC dataset. Kindly refer to Sec. 5.4.2 for more details.
  Dataset   Backbone    Method     mAP_role (↑)
  HOI-SDC   Swin-Tiny   QAHOI      19.55
  HOI-SDC   Swin-Tiny   Baseline   21.18
  HOI-SDC   Swin-Tiny   +HSAM      21.91
  HOI-SDC   Swin-Tiny   +TAM       21.84
  HOI-SDC   Swin-Tiny   FGAHOI     22.25

HICO-DET offers 47,776 images with 151,276 HOI instances, including 38,118 images with 117,871 annotated human-object pairs in the training set and 9,658 images with 33,405 annotated human-object pairs in the testing set. According to the number of instances of these HOI classes, the 600 HOI classes in the dataset are grouped into three categories: Full (all HOI classes), Rare (138 classes with fewer than ten instances) and Non-Rare (462 classes with more than ten instances). Following HICO [63], we consider two different evaluation settings (the results are shown in Table 2): (1) Known object setting: For each HOI category (such as 'flying a kite'), the detection is only evaluated on the images that contain the target object category (such as 'kite').
The difficulty lies in the localization of the HOI (e.g., human-kite pairs) and in distinguishing the interaction (e.g., 'flying'). (2) Default setting: For each HOI category, the detection is evaluated on the whole test set, including images that contain and images that do not contain the target object categories. This is a more challenging setting because we also need to distinguish background images (such as images without a 'kite').

V-COCO [39] contains 80 different object classes and 29 action categories and is developed from the MS-COCO dataset; it includes 4,946 images for the test subset, 2,533 images for the train subset and 2,867 images for the validation subset. The objects are divided into two types: "object" and "instrument".

TABLE 4: Performance comparison with the state-of-the-art methods on the V-COCO dataset. Kindly refer to Sec. 5.4.3 for more details.
  Method             AP_role^S1 (↑)   AP_role^S2 (↑)
  Two-stage Methods:
    VSG-Net            51.8             57.0
    PD-Net             52.0             -
    ACP                53.2             -
  One-stage Methods:
    HOITrans           52.9             -
    AS-Net             53.9             -
    HOTR               55.2             64.4
    DIRV               56.1             -
    QAHOI (R-50)       58.2             58.7
    FGAHOI (R-50)      59.0             59.3
    FGAHOI (Swin-T)    60.5             61.2

5.2 Metric
Following the standard evaluation [21], [39], we use the role mean average precision (role mAP) to evaluate the predicted HOI instances. A detected bounding box is considered a true positive for object detection if it overlaps with a ground-truth bounding box of the same class with an intersection over union (IoU) greater than 0.5. In HOI detection, we need to predict human-object pairs. The human-object pairs whose human overlap IoU_h and object overlap IoU_o both exceed 0.5, i.e., min(IoU_h, IoU_o) > 0.5, are declared true positives (as shown in Fig. 9).
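As a concrete reading of this matching criterion, the sketch below checks whether one predicted human-object pair matches a ground-truth pair. It is a minimal illustration of the box test only (the full evaluation also checks the object and interaction classes), and the function names are ours, not the official evaluation code.

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def pair_is_true_positive(pred_h, pred_o, gt_h, gt_o, threshold=0.5):
    """A predicted pair matches a ground-truth pair when both the human IoU
    and the object IoU exceed the threshold, i.e. min(IoU_h, IoU_o) > 0.5."""
    return min(iou(pred_h, gt_h), iou(pred_o, gt_o)) > threshold
```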
Specifically, for HICO-DET, besides the full set of 600 HOI classes, the role mAP over a rare set of 138 HOI classes that have fewer than 10 training instances and over a non-rare set of the other 462 HOI classes is also reported. Furthermore, we report the role mAP of two scenarios for V-COCO: scenario 1 includes the cases even without any objects (for the four action categories of body motions), while scenario 2 ignores these cases. For HOI-SDC, we report the role mean average precision over the full set of 321 HOI classes.

Fig. 9: The human-object pairs with human overlap IoU_h and object overlap IoU_o both exceeding 0.5 are declared as true positives. Kindly refer to Sec. 5.2 for more details.

5.3 Implementation Details
The Visual Feature Extractor consists of a Swin Transformer and a deformable transformer encoder. For Swin-Tiny and Swin-Large, the dimensions of the feature maps in the first stage are set to C_s = 96 and C_s = 192, respectively. We pre-train Swin-Tiny on the ImageNet-1k dataset. Swin-Large is first pre-trained on the ImageNet-22k dataset and then fine-tuned on the ImageNet-1k dataset.
Then the weights are used to fine-tune FGAHOI for the HOI detection task. The number of both encoder and decoder layers is set to 6 (N_layer = 6). The number of query embeddings is set to 300 (N_q = 300), and the hidden dimension of the embeddings in the transformer is set to 256 (C_d = 256). In the post-processing phase, the first 100 HOI instances are selected according to the object confidence, and we use δ = 0.5 to filter the HOI instances by the combined IoU. Following Deformable-DETR [34], the AdamW [64] optimizer is used. The learning rates of the extractor and of the other components are set to 10^-5 and 10^-4, respectively. We use 8 RTX 3090 GPUs to train the models (QAHOI & FGAHOI) with Swin-Tiny, and 16 RTX 3090 GPUs for the models with Swin-Large*+. For HICO-DET and HOI-SDC, we train the base network for 150 epochs and carry out the learning rate drop from the 120th epoch in the first stage of training. For the subsequent training, we train the model for 40 epochs, with a learning rate drop at the 15th epoch. For the V-COCO dataset, we train the base network for 90 epochs and drop the learning rate from the 60th epoch in the first stage of training. For the subsequent training, we train the model for 30 epochs, with a learning rate drop at the 10th epoch.
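For reference, a minimal PyTorch-style sketch of the optimizer setup described above is given below: two parameter groups (learning rate 1e-5 for the backbone extractor, 1e-4 for the remaining components), AdamW, and a step learning-rate drop at a fixed epoch. The parameter-name filter and the weight-decay value are assumptions for illustration, not the released training code.

```python
import torch

def build_optimizer_and_scheduler(model, drop_epoch=120):
    # Backbone/extractor parameters get the smaller learning rate (1e-5);
    # the remaining components (encoder, decoder, heads) get 1e-4.
    backbone_params = [p for n, p in model.named_parameters()
                       if "backbone" in n and p.requires_grad]   # name filter is a placeholder
    other_params = [p for n, p in model.named_parameters()
                    if "backbone" not in n and p.requires_grad]
    optimizer = torch.optim.AdamW(
        [{"params": backbone_params, "lr": 1e-5},
         {"params": other_params, "lr": 1e-4}],
        weight_decay=1e-4)  # weight-decay value is an assumption
    # Learning-rate drop at a fixed epoch (e.g. the 120th of 150 for HICO-DET),
    # assuming scheduler.step() is called once per epoch.
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=drop_epoch)
    return optimizer, scheduler
```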
5.4 Comparison with State-of-the-Arts

5.4.1 HICO-DET
We compare FGAHOI with the state-of-the-art two-stage and one-stage methods on the HICO-DET dataset and report the results in Table 2. FGAHOI outperforms both the two-stage and the one-stage state-of-the-art methods. Compared with the state-of-the-art two-stage method SCG [18], FGAHOI with the Swin-Large*+ backbone achieves an especially significant gain of 5.85 mAP in the default full setting, 5.99 mAP in the default rare setting, 5.8 mAP in the default non-rare setting, 4.56 mAP in the known object full setting, 4.75 mAP in the known object rare setting and 4.52 mAP in the known object non-rare setting. For a fair comparison, we used the same machine for the reproduction of QAHOI (shown as QAHOI (R) in Table 2). In comparison to the state-of-the-art one-stage method QAHOI, FGAHOI exceeds it in all settings for all backbone networks. For the Swin-Tiny backbone network, FGAHOI achieves an especially significant gain of 2.27 mAP in the default full setting, 2.02 mAP in the default rare setting, 2.55 mAP in the default non-rare setting, 2.42 mAP in the known object full setting,
1.11 mAP in the known object rare setting and 2.79 mAP in the known object non-rare setting. In addition, FGAHOI with the Swin-Large*+ backbone achieves an especially significant gain of 1.75 mAP in the default full

TABLE 5: Comparison on ten intervals of the two proposed challenges. We divide the HICO-DET dataset into ten intervals based on each of the two challenges and compare the performance of QAHOI and FGAHOI on each interval. Kindly refer to Sec. 5.5 for more details.
  Challenge | Method | Backbone | mAP_role (↑) on intervals IMI0 to IMI9
  UDA:
    QAHOI    Swin-Tiny      16.35  24.72  29.24  34.79  38.70  46.21  53.13  47.60  58.66  60.19
    QAHOI    Swin-Large*+   20.53  33.58  41.11  45.41  45.44  56.43  56.25  63.53  71.12  75.08
    FGAHOI   Swin-Tiny      19.74  29.85  32.20  39.46  40.54  48.55  51.32  46.50  66.44  78.17
    FGAHOI   Swin-Large*+   23.69  35.85  42.51  50.50  46.89  56.95  56.33  63.04  75.70  79.42
  LDVM:
    QAHOI    Swin-Tiny       1.33   4.43   2.57   5.00   8.06  17.87  22.81  29.25  34.03  42.29
    QAHOI    Swin-Large*+    0.82   4.08   2.56   7.53  11.42  22.87  30.94  41.38  45.31  60.15
    FGAHOI   Swin-Tiny       2.50   4.15   3.34   7.58   9.83  21.61  27.64  33.07  38.31  45.07
    FGAHOI   Swin-Large*+    1.44   4.32   4.57   7.81  11.82  24.92  32.50  43.66  47.26  60.55

TABLE 6: We carefully ablate each of the constituent components of FGAHOI. The middle results denote the role mAP. The results in the top right corner represent the performance improvement compared to QAHOI. The results in the bottom right corner represent the performance improvement compared to the baseline. Kindly refer to Sec. 5.7.1 for more details.
  Method | Merging Mechanism (Hierarchical Spatial-Aware / Task-Aware) | Default Full / Rare / Non-Rare | Known Object Full / Rare / Non-Rare
  QAHOI    -  -   27.67 / 20.22 / 29.69   30.06 / 22.95 / 32.18
  FGAHOI   ✗  ✗   28.45 (+0.78) / 21.07 (+0.85) / 30.66 (+0.97)   31.08 (+1.02) / 24.02 (+1.01) / 33.19 (+1.07)
  FGAHOI   ✓  ✗   29.60 (+1.93)(+1.15) / 22.39 (+2.17)(+1.32) / 31.76 (+2.07)(+1.10)   32.07 (+2.01)(+0.99) / 24.48 (+1.53)(+0.46) / 34.34 (+2.16)(+1.15)
  FGAHOI   ✗  ✓   29.32 (+1.65)(+0.87) / 22.34 (+2.12)(+1.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content='27 ) 31.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content='41( +1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content='72 ) ( +0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content='75) 31.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content='81( +1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content='75 ) ( +0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content='73) 24.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content='30( +1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content='35 ) ( +0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content='28) 34.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content='05( +1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content='87 ) ( +0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content='86) \x14 \x14 29.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content='94( +2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content='27 ) ( +1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content='49 ) 22.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content='24( +2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content='02 ) ( +1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content='17 ) 32.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content='24( +2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content='55 ) ( +1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content='58 ) 32.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content='48( +2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content='42 ) ( +1.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content='40 ) 24.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content='16( +1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content='21 ) ( +0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content='14 ) 34.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content='97( +2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content='79 ) ( +1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content='78 ) setting, 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content='49 mAP in default rare setting, 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content='82 mAP in default non-rare setting, 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content='7 mAP in known object full setting, 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content='92 mAP in known rare settings and 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content='93 mAP in known object non-rare setting.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content=' 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content='4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content='2 HOI-SDC On the dataset we propose, ๐‘–.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content='๐‘’.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content=', HOI-SDC, we compare FGAHOI with QAHOI and ablate each component of FGA- HOI (As shown in Table.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content='3).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content=' The backbone is set to Swin-Tiny.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content=' The baseline exceeds QAHOI an especially signi๏ฌcant gain of 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content='63 mAP.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content=' HSAM and TAM improve a signi๏ฌcant gain of 0.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content='73 and 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content='66 mAP, respectively.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content=' Bene๏ฌt from the MSS, HSAM and TAM, FGAHOI achieve 22.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content='25 mAP on HOI- SDC.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content=' 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content='4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content='3 V-COCO We compare FGAHOI with the state-of-the-art methods on V-COCO dataset and report the results in Table.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content='4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content=' In comparison to QAHOI, FGAHOI only exceeds a small margin.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content=' This phenomenon is mainly caused by too little training data in the dataset.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content=' We investigate that FGAHOI cannot adequately perform when the training data is not suf๏ฌcient due to the complex task requirements.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content=' In addition, we investigate the transformer backbone is still superior to CNN backbone in this case.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content=' 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content='5 Sensitivity Analysis for UDA and LDVM According to the two proposed challenges, we divide the HICO-DET into ten intervals.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content=' At each intervals, we compare FGAHOI and QAHOI with Swin-Tiny, Largeโˆ— + backbone, respectively (As shown in Table.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content='5).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content=' When compared be- tween each interval of UDA and LDVM, we investigate that the dif๏ฌculty of HOI detection decreases as the interval level increases.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content=' This justi๏ฌes the original design.' 
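The following minimal sketch illustrates only the binning step of this analysis: given one difficulty score per human-object pair, pairs are assigned to ten equal-width intervals. The exact UDA and LDVM measures are the ones defined earlier in the paper; the box-based scores computed here (pair-area imbalance and normalized center distance) and the function names are illustrative stand-ins, not the paper's formulas.

```python
import numpy as np

def pair_scores(human_box, object_box, img_diag):
    """Illustrative per-pair difficulty scores (not the paper's exact formulas).

    Boxes are (x1, y1, x2, y2). Returns a UDA-like area-imbalance score and an
    LDVM-like normalized center-distance score for one human-object pair.
    """
    def area(b):
        return max(b[2] - b[0], 0.0) * max(b[3] - b[1], 0.0)

    def center(b):
        return np.array([(b[0] + b[2]) / 2.0, (b[1] + b[3]) / 2.0])

    a_h, a_o = area(human_box), area(object_box)
    uda_like = min(a_h, a_o) / max(a_h, a_o)          # small value = very uneven pair
    ldvm_like = np.linalg.norm(center(human_box) - center(object_box)) / img_diag
    return uda_like, ldvm_like

def split_into_intervals(scores, num_intervals=10):
    """Assign each pair to one of `num_intervals` equal-width bins of its score."""
    scores = np.asarray(scores, dtype=float)
    edges = np.linspace(scores.min(), scores.max(), num_intervals + 1)
    # digitize against the interior edges yields indices 0 .. num_intervals - 1
    return np.clip(np.digitize(scores, edges[1:-1], right=True), 0, num_intervals - 1)

# Example: bin two pairs by the LDVM-like score; mAP would then be evaluated per bin.
pairs = [((10, 10, 60, 120), (70, 40, 90, 60)), ((5, 5, 40, 80), (300, 200, 360, 260))]
ldvm = [pair_scores(h, o, img_diag=500.0)[1] for h, o in pairs]
print(split_into_intervals(ldvm))
```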
Thus, it is imperative to consider the ability of a model to address these two challenges when proposing novel frameworks for HOI detection. In the comparison between FGAHOI and QAHOI, the results demonstrate that FGAHOI copes better with unevenly distributed areas and long-distance visual modeling of human-object pairs.

5.6 Qualitative Analysis

5.6.1 Visualized Results
In order to demonstrate our model, several representative HOI predictions are visualized. As shown in Fig. 5, our model can pinpoint HOI instances in noisy backgrounds and excels at detecting various complicated HOIs, including one object interacting with different humans, one human engaging in multiple interactions with various objects, multiple interactions within a single pair, and multiple humans engaging in various interactions with various objects. In addition, our model is good at long-range visual modeling, withstands the impact of hostile environments, and identifies small targets well. Fig. 6 (a) illustrates that FGAHOI has excellent long-range visual modeling capabilities and can accurately identify interactions between human-object pairs that are far from each other.
As Fig. 6 (b) shows, our model has outstanding robustness and can effectively resist disruption from harsh environmental factors, including blurring, occlusion and glare. Fig. 6 (c) demonstrates the superior capability of FGAHOI to identify small HOI instances.

Fig. 10: Visualization of fine-grained anchors in the decoding phase. Level 0, Level 1 and Level 2 represent the features at different scales; the color of the blue dots, from light to dark, represents the degree of attention of the fine-grained anchors, and the red dots represent the positions of interest of the fine-grained anchors in the current-scale features. Kindly refer to Sec. 5.6.2 for more details.

5.6.2 What do the fine-grained anchors look at?
As shown in Fig. 7, we compare the fine-grained anchors of FGAHOI and QAHOI.
The first two HOI instances (i.e., hold sports ball and ride motorcycle) show that FGAHOI focuses better on the humans, the objects and the interaction areas rather than on the noisy background. The fourth head of FGAHOI still focuses on the HOI instance, while QAHOI attends to the background. When detecting an instance with a long distance between the human and the object, FGAHOI focuses on the right position, while QAHOI loses its focus entirely (as shown in the last HOI instance). To exhibit the effectiveness of the fine-grained anchors for identifying HOI instances and to demonstrate their working mechanism, we visualize the fine-grained anchors of the feature maps at different scales in the decoding phase. In Fig. 10 (a), we visualize instances of two different humans and one object. As shown in Fig. 10 (b), even for exactly the same human-object pair, the areas of focus vary from one interaction to another.
In Fig. 10 (c), we show two instances, one with a short and one with a long distance between the human and the object. We find that the fine-grained anchors of the low-level feature map focus on small, fine-grained areas; they play the major role in detecting close-range and small HOI instances. The fine-grained anchors of the high-level feature maps focus on large, coarse-grained areas, which is necessary for detecting long-distance and large HOI instances.

In order to explore what the fine-grained anchors focus on, we visualize several representative actions in Fig. 11. The visualization shows that the fine-grained anchors concentrate attention precisely on the location where the interactive action is generated. For example, the fine-grained anchors mainly focus on the hand for 'text on cell phone', the mouth for 'eat orange', and the ear and the mouth for 'talk on cell phone'. For 'kick sports ball', 'jump skateboard' and 'hop on elephant', the central areas of interest are around the legs and feet, while the fine-grained anchors primarily focus on the hands for 'carry handbag', 'repair hair drier', 'hold cup', 'hold hotdog' and 'cut with knife'.

Fig. 11: Visualization of several representative interactive actions and the corresponding fine-grained anchors. We only visualize a single representative interactive action for each human-object pair. Kindly refer to Sec. 5.6.2 for more details.
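A rough sanity check on the scale behaviour described above: the snippet below computes how many image pixels a k x k sampling window spans at each feature level. The strides of 8, 16 and 32 pixels for Level 0/1/2 are assumed typical values for a hierarchical backbone and are not quoted from the paper; the window sizes follow the [1, 3, 5] setting used in Sec. 5.7.

```python
# Image-pixel coverage of a k x k sampling window at each feature level.
# Strides of 8/16/32 for Level 0/1/2 are assumed typical values, not taken from
# the paper; the window sizes follow the [1, 3, 5] setting used in Sec. 5.7.
strides = {"Level_0": 8, "Level_1": 16, "Level_2": 32}
sampling_sizes = [1, 3, 5]

for level, stride in strides.items():
    spans = [k * stride for k in sampling_sizes]
    print(f"{level} (stride {stride}): window spans of {spans} pixels")

# A 5-point window covers about 40 px on Level_0 but about 160 px on Level_2,
# which matches the observation that low-level anchors suit small, close-range
# pairs while high-level anchors suit large, long-distance pairs.
```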
5.7 Ablation Study
In this subsection, a set of experiments is designed to clearly understand the contribution of each constituent component of the proposed methodology: the merging mechanisms, the multi-scale sampling strategy and the stage-wise training strategy. We conduct all experiments on the HICO-DET dataset.

5.7.1 Ablating FGAHOI Components
To study the contribution of each of the merging mechanisms in FGAHOI, we design careful ablation experiments in Table 6. To ensure a fair comparison, the sampling sizes are all set to [1, 3, 5]. For the baseline, which does not leverage the hierarchical spatial-aware and task-aware merging mechanisms, we use an average and a direct summation operation to merge the sampled features and connect the embeddings. For the results in the table, the main numbers denote the role mAP, the first value in parentheses represents the performance improvement over QAHOI, and the second represents the improvement over the baseline. In comparison with row 1 (QAHOI), row 2 adds the multi-scale sampling strategy; the results demonstrate that adding the sampling strategy improves the ability of the model to detect HOI instances. Rows 3 and 4 show that both the hierarchical spatial-aware and the task-aware merging mechanisms make an essential contribution to the success of FGAHOI. The hierarchical spatial-aware merging mechanism combined with the task-aware merging mechanism (row 5) performs better than either of them used separately (rows 3 and 4). Thus, each component of FGAHOI plays a critical role in HOI detection. A minimal sketch of the baseline's sampling-and-merging step is given below.
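The sketch below illustrates only this ablation baseline, under our own assumptions about tensor shapes and helper names: k x k neighbourhoods (k in {1, 3, 5}) are gathered around each anchor from one feature scale, averaged, and added directly to the query embeddings. FGAHOI itself replaces the averaging and the direct summation with the hierarchical spatial-aware and task-aware merging mechanisms.

```python
import torch
import torch.nn.functional as F

def sample_and_merge_baseline(feat, anchors, queries, sizes=(1, 3, 5)):
    """Ablation-baseline merge: average sampled features, then add to the queries.

    feat:    (C, H, W) feature map of one scale
    anchors: (Q, 2) anchor positions as normalized (x, y) in [0, 1]
    queries: (Q, C) query embeddings
    """
    C, H, W = feat.shape
    merged = torch.zeros_like(queries)
    for k in sizes:                                            # odd sampling sizes
        pad = k // 2
        padded = F.pad(feat, (pad, pad, pad, pad))             # (C, H+2p, W+2p)
        patches = padded.unfold(1, k, 1).unfold(2, k, 1)       # (C, H, W, k, k)
        xs = (anchors[:, 0] * (W - 1)).round().long().clamp(0, W - 1)
        ys = (anchors[:, 1] * (H - 1)).round().long().clamp(0, H - 1)
        sampled = patches[:, ys, xs]                           # (C, Q, k, k) windows
        merged += sampled.mean(dim=(-1, -2)).transpose(0, 1)   # average the k*k points
    return queries + merged / len(sizes)                       # direct summation

# Toy usage with random tensors.
feat = torch.randn(256, 32, 32)
anchors = torch.rand(5, 2)
queries = torch.randn(5, 256)
print(sample_and_merge_baseline(feat, anchors, queries).shape)  # torch.Size([5, 256])
```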
5.7.2 Sensitivity Analysis on Multi-Scale Sampling Sizes
Our multi-scale sampling strategy samples multi-scale features according to pre-determined sampling sizes. We vary the sampling sizes to conduct a sensitivity analysis of the sampling strategy and report the results in Table 7. We find that the sampling strategy is relatively stable: changes in the sampling sizes do not have a significant impact on the performance of FGAHOI. However, there is still a slight degradation in performance as the sampling size increases. We attribute this to the fact that, as the sampling size grows, too many background features around the fine-grained anchors are sampled, contaminating the sampled features and hurting the performance of the model. Hence, we set the sampling sizes to [1, 3, 5] in all our experiments, which is a sweet spot that balances performance.

TABLE 7: Comparison between different sampling sizes.
Sampling Size | Default: Full / Rare / Non-Rare | Known Object: Full / Rare / Non-Rare
[1, 3, 5] | 29.94 / 22.24 / 32.24 | 32.48 / 24.16 / 34.97
[3, 5, 7] | 29.72 / 23.03 / 31.72 | 32.33 / 25.67 / 34.30
[5, 7, 9] | 29.65 / 22.64 / 31.74 | 32.55 / 25.64 / 34.62

5.7.3 Training Strategies
As shown in Table 8, we train FGAHOI with the stage-wise and the end-to-end training strategies, respectively. In the end-to-end training strategy, we train FGAHOI for 150 epochs and the learning-rate drop is carried out at the 120th epoch. A generic sketch contrasting the two schedules is given below.
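For illustration only, the sketch below contrasts the two schedules. The end-to-end run matches the setup above (150 epochs, learning-rate drop at the 120th epoch); for the stage-wise run, the grouping of components into 'backbone', 'decoder' and 'heads', the per-stage epoch counts and the freezing of previously trained parts are all assumptions made for the sketch, since the paper only states that the components are trained sequentially.

```python
def set_trainable(modules, flag):
    # Freeze or unfreeze a group of nn.Module objects.
    for m in modules:
        for p in m.parameters():
            p.requires_grad = flag

def train_end_to_end(model, train_one_epoch, optimizer):
    # Single run: 150 epochs with a learning-rate drop at the 120th epoch.
    for epoch in range(1, 151):
        train_one_epoch(model, optimizer)
        if epoch == 120:
            for group in optimizer.param_groups:
                group["lr"] *= 0.1

def train_stage_wise(model, train_one_epoch, make_optimizer, stages):
    # `stages` is an assumed schedule, e.g.
    #   [(["backbone"], 60), (["decoder"], 50), (["heads"], 40)];
    # components trained in earlier stages are frozen in later ones.
    trained = []
    for names, epochs in stages:
        set_trainable([getattr(model, n) for n in trained], False)  # freeze previous
        current = [getattr(model, n) for n in names]
        set_trainable(current, True)
        optimizer = make_optimizer([p for m in current for p in m.parameters()])
        for _ in range(epochs):
            train_one_epoch(model, optimizer)
        trained.extend(names)
```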
The stage-wise training strategy improves performance by 5.96 mAP in the default full setting, 4.61 mAP in the default rare setting, 6.36 mAP in the default non-rare setting, 6.04 mAP in the known-object full setting, 4.65 mAP in the known-object rare setting and 6.46 mAP in the known-object non-rare setting. In comparison with the end-to-end training strategy, we find that the stage-wise training strategy reduces the learning difficulty of FGAHOI and clarifies the learning direction of the model by emphasizing what it needs to learn at each stage.

TABLE 8: Comparison between the stage-wise and end-to-end training approaches.
Training Strategy | Default: Full / Rare / Non-Rare | Known Object: Full / Rare / Non-Rare
Stage-Wise | 29.94 / 22.24 / 32.24 | 32.48 / 24.16 / 34.97
End-to-End | 23.98 / 17.63 / 25.88 | 26.44 / 19.51 / 28.51

6 CONCLUSION
In this paper, we propose a novel transformer-based human-object interaction detector (FGAHOI), which leverages the input features to generate fine-grained anchors that protect the detection of HOI instances from noisy backgrounds.
We propose a novel training strategy in which each component of the model is trained sequentially, clarifying the training direction at each stage and reducing the training cost. We propose two novel metrics and a novel dataset, i.e., HOI-SDC, for the two challenges of detecting HOI instances (Uneven Distributed Area in Human-Object Pairs and Long Distance Visual Modeling of Human-Object Pairs). Our extensive experiments on three benchmarks, HICO-DET, HOI-SDC and V-COCO, demonstrate the effectiveness of the proposed FGAHOI. Specifically, FGAHOI outperforms all existing state-of-the-art methods by a large margin.

ACKNOWLEDGMENTS
This work is supported by the National Natural Science Foundation of China (grants No. 61871106 and No. 61370152), the Key R&D Projects of Liaoning Province, China (grant No. 2020JH2/10100029), and the Open Project Program Foundation of the Key Laboratory of Opto-Electronics Information Processing, Chinese Academy of Sciences (OEIP-O-202002).
[3] S. Ren, K. He, R. Girshick, and J. Sun, "Faster R-CNN: Towards real-time object detection with region proposal networks," Advances in Neural Information Processing Systems, vol. 28, 2015.
[4] R. Girshick, J. Donahue, T. Darrell, and J. Malik, "Rich feature hierarchies for accurate object detection and semantic segmentation," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 580–587, 2014.
[5] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, "You only look once: Unified, real-time object detection," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 779–788, 2016.
[6] M. Chen, Y. Liao, S. Liu, Z. Chen, F. Wang, and C. Qian, "Reformulating HOI detection as adaptive set prediction," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9004–9013, 2021.
[7] Z. Hou, X. Peng, Y. Qiao, and D. Tao, "Visual compositional learning for human-object interaction detection," in European Conference on Computer Vision, pp. 584–600, Springer, 2020.
[8] P. Zhou and M. Chi, "Relation parsing neural network for human-object interaction detection," in Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 843–851, 2019.
[9] B. Kim, J. Lee, J. Kang, E.-S. Kim, and H. J. Kim, "HOTR: End-to-end human-object interaction detection with transformers," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 74–83, 2021.
[10] X. Zhong, C. Ding, X. Qu, and D. Tao, "Polysemy deciphering network for robust human-object interaction detection," International Journal of Computer Vision, vol. 129, no. 6, pp. 1910–1929, 2021.
[11] X. Zhong, X. Qu, C. Ding, and D. Tao, "Glance and gaze: Inferring action-aware points for one-stage human-object interaction detection," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13234–13243, 2021.
[12] C. Gao, J. Xu, Y. Zou, and J.-B. Huang, "DRG: Dual relation graph for human-object interaction detection," in European Conference on Computer Vision, pp. 696–712, Springer, 2020.
[13] O. Ulutan, A. Iftekhar, and B. S. Manjunath, "VSGNet: Spatial attention network for detecting human object interactions using graph convolutions," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13617–13626, 2020.
[14] C. Gao, Y. Zou, and J.-B. Huang, "iCAN: Instance-centric attention network for human-object interaction detection," arXiv preprint arXiv:1808.10437, 2018.
[15] T. Wang, T. Yang, M. Danelljan, F. S. Khan, X. Zhang, and J. Sun, "Learning human-object interaction detection using interaction points," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 4116–4125, 2020.
[16] A. Zhang, Y. Liao, S. Liu, M. Lu, Y. Wang, C. Gao, and X. Li, "Mining the benefits of two-stage and one-stage HOI detection," Advances in Neural Information Processing Systems, vol. 34, pp. 17209–17220, 2021.
[17] C. Zou, B. Wang, Y. Hu, J. Liu, Q. Wu, Y. Zhao, B. Li, C. Zhang, C. Zhang, Y. Wei, et al., "End-to-end human object interaction detection with HOI transformer," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 11825–11834, 2021.
[18] F. Z. Zhang, D. Campbell, and S. Gould, "Spatially conditioned graphs for detecting human-object interactions," in Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 13319–13327, 2021.
[19] M. Tamura, H. Ohashi, and T. Yoshinaga, "QPIC: Query-based pairwise human-object interaction detection with image-wide contextual information," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10410–10419, 2021.
[20] Y.-L. Li, S. Zhou, X. Huang, L. Xu, Z. Ma, H.-S. Fang, Y. Wang, and C. Lu, "Transferable interactiveness knowledge for human-object interaction detection," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 3585–3594, 2019.
[21] G. Gkioxari, R. Girshick, P. Dollár, and K. He, "Detecting and recognizing human-object interactions," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8359–8367, 2018.
[22] Y.-W. Chao, Y. Liu, X. Liu, H. Zeng, and J. Deng, "Learning to detect human-object interactions," in 2018 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 381–389, IEEE, 2018.
[23] T. Gupta, A. Schwing, and D. Hoiem, "No-frills human-object interaction detection: Factorization, layout encodings, and training techniques," in Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 9677–9685, 2019.
[24] B. Wan, D. Zhou, Y. Liu, R. Li, and X. He, "Pose-aware multi-level feature network for human object interaction detection," in Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 9469–9478, 2019.
[25] A. Bansal, S. S. Rambhatla, A. Shrivastava, and R. Chellappa, "Detecting human-object interactions via functional generalization," in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 10460–10469, 2020.
[26] H.-S. Fang, Y. Xu, W. Wang, X. Liu, and S.-C. Zhu, "Learning pose grammar to encode human body configuration for 3D pose estimation," in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 32, 2018.
[27] H.-S. Fang, G. Lu, X. Fang, J. Xie, Y.-W. Tai, and C. Lu, "Weakly and semi supervised human body part parsing via pose-guided knowledge transfer," arXiv preprint arXiv:1805.04310, 2018.
[28] Y. Xiu, J. Li, H. Wang, Y. Fang, and C. Lu, "Pose Flow: Efficient online pose tracking," arXiv preprint arXiv:1802.00977, 2018.
[29] S. Qi, W. Wang, B. Jia, J. Shen, and S.-C. Zhu, "Learning human-object interactions by graph parsing neural networks," in Proceedings of the European Conference on Computer Vision (ECCV), pp. 401–417, 2018.
[30] B. Kim, T. Choi, J. Kang, and H. J. Kim, "UnionDet: Union-level detector towards real-time human-object interaction detection," in European Conference on Computer Vision, pp. 498–514, Springer, 2020.
[31] Y. Liao, S. Liu, F. Wang, Y. Chen, C. Qian, and J. Feng, "PPDM: Parallel point detection and matching for real-time human-object interaction detection," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 482–490, 2020.
[32] N. Carion, F. Massa, G. Synnaeve, N. Usunier, A. Kirillov, and S. Zagoruyko, "End-to-end object detection with transformers," in European Conference on Computer Vision, pp. 213–229, Springer, 2020.
[33] J. Chen and K. Yanai, "QAHOI: Query-based anchors for human-object interaction detection," arXiv preprint arXiv:2112.08647, 2021.
[34] X. Zhu, W. Su, L. Lu, B. Li, X. Wang, and J. Dai, "Deformable DETR: Deformable transformers for end-to-end object detection," arXiv preprint arXiv:2010.04159, 2020.
[35] Y. Bengio, P. Lamblin, D. Popovici, and H. Larochelle, "Greedy layer-wise training of deep networks," Advances in Neural Information Processing Systems, vol. 19, 2006.
[36] G. E. Hinton, S. Osindero, and Y.-W. Teh, "A fast learning algorithm for deep belief nets," Neural Computation, vol. 18, no. 7, pp. 1527–1554, 2006.
[37] B. Kang, S. Xie, M. Rohrbach, Z. Yan, A. Gordo, J. Feng, and Y. Kalantidis, "Decoupling representation and classifier for long-tailed recognition," arXiv preprint arXiv:1910.09217, 2019.
[38] Y.-W. Chao, Y. Liu, X. Liu, H. Zeng, and J. Deng, "Learning to detect human-object interactions," in 2018 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 381–389, IEEE, 2018.
[39] S. Gupta and J. Malik, "Visual semantic role labeling," arXiv e-prints, 2015.
[40] F. Scarselli, M. Gori, A. C. Tsoi, M. Hagenbuchner, and G. Monfardini, "The graph neural network model," IEEE Transactions on Neural Networks, vol. 20, no. 1, pp. 61–80, 2008.
[41] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin, "Attention is all you need," Advances in Neural Information Processing Systems, vol. 30, 2017.
[42] Y. LeCun, Y. Bengio, et al., "Convolutional networks for images, speech, and time series," The Handbook of Brain Theory and Neural Networks, vol. 3361, no. 10, p. 1995, 1995.
[43] Z. Liu, Y. Lin, Y. Cao, H. Hu, Y. Wei, Z. Zhang, S. Lin, and B. Guo, "Swin Transformer: Hierarchical vision transformer using shifted windows," in Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 10012–10022, 2021.
[44] X. Chen, F. Wei, G. Zeng, and J. Wang, "Conditional DETR v2: Efficient detection transformer with box queries," arXiv preprint arXiv:2207.08914, 2022.
[45] Y. Wang, X. Zhang, T. Yang, and J. Sun, "Anchor DETR: Query design for transformer-based detector," in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 36, pp. 2567–2575, 2022.
[46] S. Liu, F. Li, H. Zhang, X. Yang, X. Qi, H. Su, J. Zhu, and L. Zhang, "DAB-DETR: Dynamic anchor boxes are better queries for DETR," arXiv preprint arXiv:2201.12329, 2022.
[47] G. Zhang, Z. Luo, Y. Yu, K. Cui, and S. Lu, "Accelerating DETR convergence via semantic-aligned matching," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 949–958, 2022.
[48] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778, 2016.
[49] F. Iandola, M. Moskewicz, S. Karayev, R. Girshick, T. Darrell, and K. Keutzer, "DenseNet: Implementing efficient convnet descriptor pyramids," arXiv preprint arXiv:1404.1869, 2014.
[50] A. Newell, K. Yang, and J. Deng, "Stacked hourglass networks for human pose estimation," in European Conference on Computer Vision, pp. 483–499, Springer, 2016.
[51] X. Dong, J. Bao, D. Chen, W. Zhang, N. Yu, L. Yuan, D. Chen, and B. Guo, "CSWin Transformer: A general vision transformer backbone with cross-shaped windows," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12124–12134, 2022.
[52] J. Yang, C. Li, P. Zhang, X. Dai, B. Xiao, L. Yuan, and J. Gao, "Focal self-attention for local-global interactions in vision transformers," arXiv preprint arXiv:2107.00641, 2021.
[53] Z. Xia, X. Pan, S. Song, L. E. Li, and G. Huang, "Vision transformer with deformable attention," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 4794–4803, 2022.
[54] H. Wu, B. Xiao, N. Codella, M. Liu, X. Dai, L. Yuan, and L. Zhang, "CvT: Introducing convolutions to vision transformers," in Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 22–31, 2021.
[55] A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly, et al., "An image is worth 16x16 words: Transformers for image recognition at scale," arXiv preprint arXiv:2010.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content='11929, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content=' [56] C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content='-F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content=' Chen, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content=' Panda, and Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content=' Fan, โ€œRegionvit: Regional-to-local attention for vision transformers,โ€ arXiv preprint arXiv:2106.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content='02689, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content=' [57] Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content=' Wang, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content=' Huang, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content=' Song, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content=' Huang, and G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content=' Huang, โ€œNot all images are worth 16x16 words: Dynamic transformers for ef๏ฌcient image recognition,โ€ Advances in Neural Information Processing Sys- tems, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content=' 34, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content=' 11960โ€“11973, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content=' [58] K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content=' He, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content=' Gkioxari, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content=' Dollยดar, and R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content=' Girshick, โ€œMask r-cnn,โ€ in Proceedings of the IEEE international conference on computer vision, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content=' 2961โ€“2969, 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content=' [59] X.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content=' Dai, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content=' Chen, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content=' Xiao, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content=' Chen, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content=' Liu, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content=' Yuan, and L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content=' Zhang, โ€œDynamic head: Unifying object detection heads with attentions,โ€ in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content=' 7373โ€“7382, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content=' [60] H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content=' Law and J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content=' Deng, โ€œCornernet: Detecting objects as paired keypoints,โ€ in Proceedings of the European conference on computer vision (ECCV), pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content=' 734โ€“750, 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content=' [61] T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content='-Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content=' Lin, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content=' Goyal, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content=' Girshick, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content=' He, and P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content=' Dollยดar, โ€œFocal loss for dense object detection,โ€ in Proceedings of the IEEE international conference on computer vision, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content=' 2980โ€“2988, 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content=' [62] Y.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content='-L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content=' Li, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content=' Xu, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content=' Liu, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content=' Huang, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content=' Xu, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content=' Wang, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content='-S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content=' Fang, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content=' Ma, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content=' Chen, and C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content=' Lu, โ€œPastanet: Toward human activity knowledge engine,โ€ in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content=' 382โ€“391, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content=' JOURNAL OF LATEX CLASS FILES, VOL.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content=' 14, NO.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content=' 10, JANUARY 2023 16 [63] Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content='-W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content=' Chao, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content=' Wang, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content=' He, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content=' Wang, and J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content=' Deng, โ€œHico: A benchmark for recognizing human-object interactions in images,โ€ in Proceedings of the IEEE international conference on computer vision, pp.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content=' 1017โ€“1025, 2015.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content=' [64] I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content=' Loshchilov and F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content=' Hutter, โ€œDecoupled weight decay regulariza- tion,โ€ arXiv preprint arXiv:1711.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'} +page_content='05101, 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9E2T4oBgHgl3EQfogh1/content/2301.04019v1.pdf'}