Abstract— Motion prediction is essential for safe and efficient autonomous driving. However, the inexplicability and uncertainty of complex artificial intelligence models may lead to unpredictable failures of the motion prediction module, which may mislead the system into unsafe decisions. It is therefore necessary to develop methods that guarantee reliable autonomous driving, and failure detection is a promising direction. Uncertainty estimates quantify the degree of confidence a model has in its predictions and may be valuable for failure detection. We propose a framework of failure detection for motion prediction from the uncertainty perspective, considering both motion uncertainty and model uncertainty, and formulate various uncertainty scores according to different prediction stages.
The proposed approach is evaluated with different motion prediction algorithms, uncertainty estimation methods, and uncertainty scores, and the results show that uncertainty is promising for failure detection in motion prediction but should be used with caution.

I. INTRODUCTION

Motion prediction is a hot topic in the mobile robot and autonomous vehicle communities: accurate prediction of the future motion of surrounding traffic participants is fundamental to robust and reliable decision-making. Artificial intelligence (AI), especially deep learning, has been widely used in autonomous driving tasks owing to its advantages in dealing with complex problems. With the collection of large-scale data and the improvement of computing power and related algorithms, AI is expected to play a vital role in future autonomous driving systems [1].
However, although AI-based motion prediction has shown statistical performance advantages, it is difficult to avoid unpredictable failures due to the inherent inexplicability and insufficient reliability of deep learning models, which may cause serious autonomous driving accidents [2]. From the uncertainty perspective, motion prediction faces a dual challenge: uncertainty from the environment and uncertainty from the model. Drivers, pedestrians, and other agents in the environment are uncertain in their intentions and movements, which makes it difficult to predict their future accurately in all scenarios. Additionally, because of insufficient training data or an insufficient training process, the model may suffer serious performance degradation when faced with rare or unknown scenarios.

Research supported by the National Science Foundation of China Projects U1964203 and 52072215, and the National Key R&D Program of China 2020YFB1600303.
(Corresponding author: Hong Wang.) Wenbo Shao, Liang Peng, Jun Li, and Hong Wang are with the School of Vehicle and Mobility, Tsinghua University, Beijing 100084, China (e-mail: {swb19, peng-l20}@mails.tsinghua.edu.cn; {lijun1958, hong_wang}@tsinghua.edu.cn). Yanchao Xu is with the School of Mechanical Engineering, Beijing Institute of Technology, Beijing 100081, China (e-mail: 3120200410@bit.edu.cn).

Fig. 1. Uncertainty-based failure detection for motion prediction. UM and UT are the uncertainty scores extracted for maneuver classification and trajectory prediction, respectively.

The failure detection, isolation, and recovery mechanism is an effective way to address the above problems [3]. Among related efforts, failure detection for AI models has attracted increasing interest and is of critical significance for the development of reliable autonomous driving systems [4]. As shown in Fig. 1, a failure detector built on information extracted from the main model, i.e., the motion prediction model, identifies maneuver classification errors and trajectory prediction errors. Uncertainty, as a measure of the model's confidence in its output, has been used by some researchers for failure detection in tasks such as semantic segmentation [5]. Our study exploits various uncertainties from motion prediction and explores their usefulness for failure detection.

In this work, we concentrate on failure detection for motion prediction from the uncertainty perspective. The main contributions are as follows:

- A framework of failure detection using uncertainty for motion prediction tasks, taking into account both motion uncertainty and model uncertainty.
- A series of uncertainty scores for failure detection, formulated for different motion prediction stages and algorithms.
- A detailed evaluation and comparison across multiple motion prediction algorithms, uncertainty estimation methods, and uncertainty scores.

Failure Detection for Motion Prediction of Autonomous Driving: An Uncertainty Perspective*
Wenbo Shao, Yanchao Xu, Liang Peng, Jun Li, and Hong Wang

II. RELATED WORK

A. Motion Prediction and Motion Uncertainty Estimation

Traditional motion prediction methods predict the future motion of the target agent (TA) from its historical state by explicitly modeling kinematics, e.g., with a Kalman filter [5], [6], but they apply only to short-term prediction in scenarios with few interactions. In recent years, deep learning-based motion prediction [8]–[10] has demonstrated promising performance by simultaneously modeling the TA's historical state, its interactions with surrounding traffic participants, and other environmental information in deep neural networks. A broader review of deep learning-based motion prediction can be found in [11].
As for the model's output form, some studies regard motion prediction as a multipoint regression problem [12]–[14] and output a unimodal predicted trajectory. However, due to the diversity of intentions and the uncertainty of traffic participants' behaviors, the future trajectory distribution corresponding to a single model input presents multiple possibilities. Recently, researchers and prediction competitions have increasingly focused on multimodal motion prediction, which is generally divided into two stages: maneuver (or target) classification, and trajectory prediction. Some studies [15], [16] define maneuvers as specified behavior patterns and train the maneuver classifier through supervised learning. For example, CS-LSTM [15] defines six maneuver modes for vehicles on highways: the longitudinal maneuvers are normal driving and braking, and the lateral maneuvers are left lane change, right lane change, and lane keeping. The predicted maneuvers can serve as an important guide for trajectory prediction.
Other studies do not explicitly define behavior patterns before training, but instead guide the model to learn maneuver modes through model design and the training process [17]–[19]. For example, Trajectron++ [18] adopts a conditional variational autoencoder (CVAE) to encode multimodality by introducing latent variables, and relies on a bivariate Gaussian mixture model (GMM) for the final output.

B. Model Uncertainty Estimation

The multimodal prediction algorithms above model the uncertainty in traffic participants' movements. In addition, deep learning models carry an inherent uncertainty, generally called model uncertainty or epistemic uncertainty [20], which is difficult to ignore in the real world, where distribution shifts and out-of-distribution data occur. The Bayesian neural network (BNN) [21]–[23] is a representative approach to estimating model uncertainty, in which Bayesian inference plays a central role.
Methods such as Monte Carlo dropout [24], [25] achieve approximate inference through sampling and have further promoted the generality and popularity of BNNs. Deep ensembles [26]–[28], a simple and scalable alternative, have shown promising performance in model uncertainty estimation and have thus attracted many researchers and practitioners. As a representative method requiring only a single forward pass, evidential deep learning (EDL) [29] computes the uncertainty of the output distribution by modeling a prior distribution over the classification.

C. Failure Detection for Autonomous Driving

Failure detection is attracting attention as a technology for achieving reliable autonomous driving. It uses the main model's input, internal features, or output to diagnose whether a failure has occurred. Learning-based approaches build a specialized model to act as the failure detector, identifying failures of the main model either through supervised training on failure cases [30]–[32] or by estimating reconstruction errors [33]–[35].
In addition, uncertainty-based anomaly detection has attracted some interest, such as detecting misclassified or out-of-distribution examples through the maximum softmax probability directly output by classification networks [36] or through predictive entropy that takes model uncertainty into account [26]. However, to the best of our knowledge, most current research on failure detection for autonomous driving focuses on perception tasks such as semantic segmentation and depth estimation [5]; failure detection for motion prediction models from the uncertainty perspective has rarely been discussed. Our approach utilizes both motion uncertainty and model uncertainty, proposes uncertainty scores for the different stages of motion prediction, and investigates failure detection performance based on the different scores.

III. METHODOLOGY

A. Problem Setting

Motion prediction is the task of predicting the TA's trajectory over a period of time in the future, given input information.
Assuming the current moment is $t = 0$, the input may include the TA's historical states $\mathbf{S} = [s^{(-t_h+1)}, s^{(-t_h+2)}, \ldots, s^{(0)}]$ over the past $t_h$ timesteps, the historical states of the TA's surrounding traffic participants, and other contextual information such as maps, the latter two uniformly represented here by $\mathbf{C}$. Here $s^{(t)}$ may contain the TA's position, speed, category, etc. at time $t$. The output is the TA's predicted position $\hat{\mathbf{Y}}$ over the future $t_f$ timesteps:

$$\hat{\mathbf{Y}} = f(\mathbf{S}, \mathbf{C}) \quad (1)$$

with $\hat{\mathbf{Y}} = [\hat{d}_1, \hat{d}_2, \ldots, \hat{d}_{t_f}]$ consisting of the $t_f$ predicted positions $\hat{d}_t$. For multimodal motion prediction, $\hat{\mathbf{Y}}$ contains predicted trajectories under multiple maneuvers. Failure detection for motion prediction refers to identifying potential prediction failures by monitoring the model's state, where failures may take the form of maneuver misclassification or excessive error in the predicted trajectories.
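As a toy illustration of this interface, the sketch below uses a constant-velocity stand-in for the predictor $f$; the dimensions `t_h`, `t_f`, `state_dim` and the state layout (x, y, vx, vy) are illustrative assumptions, not values from the paper.

```python
import numpy as np

t_h, t_f, state_dim = 8, 12, 4          # history length, horizon, (x, y, vx, vy)

def f(S, C):
    """Placeholder predictor: constant-velocity rollout from the last state."""
    pos, vel = S[-1, :2], S[-1, 2:4]
    steps = np.arange(1, t_f + 1)[:, None]      # (t_f, 1)
    return pos + steps * vel                    # Y_hat: the t_f future positions

S = np.zeros((t_h, state_dim))
S[:, 2:4] = 1.0                                 # agent moving at (1, 1) m/s
Y_hat = f(S, C=None)
assert Y_hat.shape == (t_f, 2)                  # t_f predicted positions d_t
```

A multimodal predictor would instead return one such array per maneuver mode.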
Uncertainty, as a measure of the TA's behavior or the model's state, reflects the model's confidence in a particular output and thus has the potential to diagnose prediction failures. This work proposes to detect the performance degradation of motion prediction models, i.e., a decrease in the accuracy of prediction results, by quantifying uncertainty scores.

B. Motion Prediction with Motion Uncertainty Estimation

Because the TA's actual intentions are unobservable and its behavior is stochastic, it may have multiple possible future trajectories. GRIP++ is an enhanced graph-based, interaction-aware trajectory prediction algorithm; it models inter-agent interactions and temporal features but predicts future trajectories in only a single mode. As shown in Fig. 2, we add a maneuver classification module to GRIP++, distinguishing different behavioral patterns to improve the authenticity and usability of the prediction results. We call the new method GRIP+++. The proposed method involves two stages: maneuver classification and maneuver-based trajectory prediction. In the maneuver classification stage, given the TA's historical state and scene context, a feature $G$ is extracted by the graph convolutional model (GCN), which includes the processing of fixed and trainable graphs. The TA's maneuver distribution $P(z \mid G)$ is then inferred by a multilayer perceptron (MLP), where $z \in \{1, 2, \ldots, Z\}$ indexes the defined maneuver modes.
In CS-LSTM [15], the modes are divided into three lateral and two longitudinal maneuvers, but these apply only to vehicles driving on highways; we instead define a common set of maneuver modes suitable for various scenarios. Specifically, the TA's maneuvers are divided into four categories according to movement direction and speed: going straight, turning left, turning right, and stopping. In the network, we adopt a softmax head for probabilistic maneuver classification.

Fig. 2. The architecture of GRIP+++.

The maneuver-based trajectory prediction module consists of seq2seq networks that take the concatenation of the graph feature $G$ and a feature vector transformed from the maneuver $z$ as input, and output the future trajectory $\hat{\mathbf{Y}}_z$ under maneuver $z$.
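A minimal NumPy sketch of this two-stage structure is given below. All weights are random stand-ins, and the single linear maps standing in for the GCN, MLP, and seq2seq decoder, as well as the dimensions, are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
MANEUVERS = ["straight", "left", "right", "stop"]   # the four modes defined above
Z, G_DIM, EMB, T_F = len(MANEUVERS), 64, 16, 12

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Stage 1: maneuver head P(z | G) from the graph feature G.
W_cls = rng.normal(size=(G_DIM, Z))                 # stand-in for the MLP
G = rng.normal(size=G_DIM)                          # stand-in for the GCN feature
p_z = softmax(G @ W_cls)

# Stage 2: one trajectory per maneuver from concat(G, embed(z)),
# standing in for the per-maneuver seq2seq decoder.
E = rng.normal(size=(Z, EMB))                       # maneuver feature vectors
W_dec = rng.normal(size=(G_DIM + EMB, T_F * 2)) * 0.01
trajs = {MANEUVERS[z]: (np.concatenate([G, E[z]]) @ W_dec).reshape(T_F, 2)
         for z in range(Z)}

assert np.isclose(p_z.sum(), 1.0) and trajs["left"].shape == (T_F, 2)
```

The softmax head gives the maneuver probabilities used later as uncertainty evidence, while each maneuver's decoder input differs only in the concatenated maneuver vector.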
To compare the generality of uncertainty-based failure detection across different motion prediction mechanisms, we employ two further classes of typical prediction algorithms. First, for multimodal trajectory prediction based on a generative model, we adopt Trajectron++ [18], which utilizes a CVAE-based latent network framework to model multimodal future trajectories; a discrete categorical latent variable $z \in \{1, 2, \ldots, Z\}$ encodes high-level behavior patterns:

$$P(\hat{\mathbf{Y}} \mid \mathbf{S}, \mathbf{C}) = \sum_{z \in Z} P_{\psi}(\hat{\mathbf{Y}} \mid \mathbf{S}, \mathbf{C}, z)\, P_{\theta}(z \mid \mathbf{S}, \mathbf{C}) \quad (2)$$

where $\theta$ and $\psi$ are deep neural network parameters. We further use PGP [16] as a comparison: a multimodal trajectory prediction method combining graph traversal, latent-vector sampling, and clustering. It models a discrete policy for graph traversal by representing HD maps as lane graphs, and achieves diverse trajectory prediction by combining this with random sampling of latent vectors for longitudinal variability.
It then uses K-means clustering to obtain $Z$ predicted trajectories. With this design, PGP achieved state-of-the-art results on almost all metrics of the nuScenes leaderboard when it was proposed.

C. Model Uncertainty Estimation

As mentioned above, deep ensembles have clear advantages in model uncertainty estimation, so we build on them to design a prediction approach that integrates model uncertainty and motion uncertainty estimation. Specifically, we use random initialization of the model parameters and random shuffling of the training data to train $K$ models that share an architecture but differ in their learned parameters, and then estimate uncertainty from the $K$ sets of outputs $\hat{\mathbf{Y}}_k$, $k \in \{1, 2, \ldots, K\}$. In addition, EDL, which captures multiclass uncertainties at low computational cost, is also exploited to estimate the model uncertainty of the maneuver classification module.
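The deep-ensemble recipe described above (random initialization plus data shuffling, then $K$ forward passes per input) can be sketched as follows; the tiny softmax classifier and toy data are stand-ins for the full prediction model, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 8))
y = (X[:, 0] > 0).astype(int)                   # toy binary "maneuver" labels
K, Z, lr = 5, 2, 0.1

def softmax(a):
    e = np.exp(a - a.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

ensemble = []
for k in range(K):
    W = rng.normal(size=(8, Z)) * 0.1           # random initialization
    order = rng.permutation(len(X))             # random shuffling of training data
    for i in order:                             # one epoch of SGD on cross-entropy
        p = softmax(X[i:i+1] @ W)
        p[0, y[i]] -= 1.0                       # gradient of CE w.r.t. logits
        W -= lr * X[i:i+1].T @ p
    ensemble.append(W)

# K member outputs for one input; their disagreement carries model uncertainty.
probs = np.stack([softmax(X[:1] @ W)[0] for W in ensemble])   # shape (K, Z)
assert probs.shape == (K, Z)
```

The per-member probability matrix `probs` is exactly the quantity the uncertainty scores in Sec. III-D consume.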
Specifically, the Dirichlet distribution is taken as the prior distribution for the classification:

$$D(\mathbf{P} \mid \boldsymbol{\alpha}) = \begin{cases} \dfrac{1}{B(\boldsymbol{\alpha})} \prod_{z=1}^{Z} P_z^{\alpha_z - 1} & \text{for } \mathbf{P} \in \mathcal{S}_Z \\ 0 & \text{otherwise} \end{cases} \quad (3)$$

where $\boldsymbol{\alpha} = [\alpha_1, \ldots, \alpha_Z]$ are the distribution parameters, $e_z = \alpha_z - 1$ is the evidence, and $\mathcal{S}_Z$ is the $Z$-dimensional unit simplex.

D. Uncertainty Scores Design

In our work, different uncertainty scores are proposed for failure detection. Considering the different problem forms of the maneuver classification and trajectory prediction tasks, we formulate corresponding scores for both.
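Before turning to the ensemble-based scores, a minimal sketch of the evidential parameterization in (3) under standard EDL conventions: non-negative evidence $e_z$ from the network gives $\alpha_z = e_z + 1$, the expected class probability is $\alpha_z / \sum_z \alpha_z$, and $Z / \sum_z \alpha_z$ is a common vacuity-style uncertainty measure. The evidence values below are made up, and the vacuity score is an illustrative EDL convention rather than a score defined in this paper.

```python
import numpy as np

Z = 4                                            # four maneuver modes
evidence = np.array([9.0, 1.0, 0.0, 0.0])        # hypothetical network output e_z
alpha = evidence + 1.0                           # Dirichlet parameters (Eq. 3)
p_mean = alpha / alpha.sum()                     # expected maneuver probabilities
vacuity = Z / alpha.sum()                        # high when total evidence is low

assert np.isclose(p_mean.sum(), 1.0)
```

With little total evidence (e.g., on an out-of-distribution input), `alpha` stays close to the uniform prior and `vacuity` approaches 1.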
For the maneuver classification task combined with the deep ensemble, we formulate the following uncertainty scores, referring to the definitions in [37]. Total entropy (TE) is quantified to represent the total uncertainty, considering both the model uncertainty and the motion uncertainty:

$$ \mathrm{TE} = \mathcal{H}\!\left[ \frac{1}{K} \sum_{k=1}^{K} \mathrm{P}(z \mid \mathbf{S}, \mathbf{C}, \boldsymbol{\theta}_k) \right] \quad (4) $$

where $\boldsymbol{\theta}_k$ are the parameters of the k-th model of the deep ensemble, $\mathcal{H}[\cdot]$ represents the entropy computation, and $\mathcal{D}$ represents the training set. Data entropy (DE) is quantified to represent the average of the data uncertainty from the different models.
The larger its value, the higher the motion uncertainty estimated by the deep ensemble prediction models:

$$ \mathrm{DE} = \frac{1}{K} \sum_{k=1}^{K} \mathcal{H}\!\left[ \mathrm{P}(z \mid \mathbf{S}, \mathbf{C}, \boldsymbol{\theta}_k) \right] \quad (5) $$

Mutual information (MI) is quantified to represent the model uncertainty. As it increases, the degree of disagreement between the prediction results of the individual models increases, which to a certain extent reflects reduced confidence of the models in their classification results:

$$ \mathrm{MI} = \mathrm{TE} - \mathrm{DE} \quad (6) $$

The maximum predicted probability [38] is also considered, and its inverse (negative maximum softmax probability, NMaP) is calculated as an uncertainty score. For the EDL-based method, the above types of uncertainty scores are also quantified for comparison, with their formulas derived according to (3)-(6).
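Given the K per-model class distributions, the scores (4)-(6) and NMaP reduce to a few lines. The probability vectors below are illustrative, not model outputs from the paper:

```python
import math

def entropy(p):
    # Shannon entropy of a discrete distribution (natural log).
    return -sum(pi * math.log(pi) for pi in p if pi > 0.0)

# K per-model maneuver distributions for one sample (illustrative values).
member_probs = [
    [0.7, 0.2, 0.1],
    [0.6, 0.3, 0.1],
    [0.8, 0.1, 0.1],
]
K = len(member_probs)
Z = len(member_probs[0])

mean_p = [sum(p[z] for p in member_probs) / K for z in range(Z)]
TE = entropy(mean_p)                               # (4) total uncertainty
DE = sum(entropy(p) for p in member_probs) / K     # (5) average data uncertainty
MI = TE - DE                                       # (6) model uncertainty
NMaP = -max(mean_p)                                # negative max softmax probability
```

By Jensen's inequality TE >= DE, so MI is always non-negative, consistent with its role as a disagreement measure.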
Additionally, we consider the metric suggested in [29]:

$$ u = \frac{Z}{\sum_{z=1}^{Z} \alpha_z} \quad (7) $$

Trajectory prediction involves multiple trajectories output by one or more models, where each trajectory contains position information for multiple future time steps. Referring to the usual error metrics [8, 12, 18], average displacement error (ADE) and final displacement error (FDE), we define two basic metrics, average predictive entropy (APE) and final predictive entropy (FPE), to represent the uncertainty formed by multiple trajectories:

$$ \mathrm{APE} = \frac{1}{t_f} \sum_{t=1}^{t_f} \left[ \ln(2\pi) + 1 + \frac{1}{2} \ln \left| \hat{\boldsymbol{\Sigma}}_t \right| \right] \quad (8) $$

$$ \mathrm{FPE} = \ln(2\pi) + 1 + \frac{1}{2} \ln \left| \hat{\boldsymbol{\Sigma}}_{t_f} \right| \quad (9) $$

where, for the different predicted trajectories of the same input,
the predicted position $\hat{d}_t$ at each time step is assumed to follow a two-dimensional Gaussian distribution with covariance $\hat{\boldsymbol{\Sigma}}_t$. Based on these two basic metrics, different types of uncertainty scores are defined according to the sources of the predicted trajectories (such as different sub-models, different maneuvers, or both), which may represent model uncertainty, motion uncertainty, or both.

IV. EXPERIMENTS

A. Experimental Setup

1) Model Implementation: For the training of GRIP++, inspired by [15], we adopt a two-stage training approach. In the first stage, we focus on improving the trajectory prediction accuracy under the real maneuver by training the model as a regression task at each time step:

$$ L_{reg} = \frac{1}{t_f} \sum_{t=1}^{t_f} \left\| \hat{\mathbf{Y}}_{t,z} - \mathbf{Y}_t \right\| \quad (10) $$

where $\hat{\mathbf{Y}}_{t,z}$ and $\mathbf{Y}_t$ are the predicted positions under the true maneuver z and the ground truth at time t, respectively.
In the second stage, we additionally consider the loss of maneuver classification by adding the cross-entropy term:

$$ L = L_{reg} + \lambda L_{man} \quad (11) $$

where $L_{man} = -\log \mathrm{P}(z \mid \mathbf{S}, \mathbf{C})$, $\lambda$ is the weighting factor, and z is the true maneuver label. In the implementation of GRIP++, the trajectories are sampled at 2 Hz, with an observation length of 3 s and a prediction horizon of 3 s. For the implementation of Trajectron++ [18] and PGP [16], we follow their original model designs and training schemes. For the deep ensemble, we set K = 5, a scheme considered cost-controllable and sufficiently efficient. To implement EDL, referring to [29], we incorporate a Kullback-Leibler (KL) divergence term into the loss function to avoid unnecessary uncertainty reduction.

2) Dataset: The proposed motion prediction models and failure detectors are trained and validated on real traffic datasets.
Specifically, GRIP++ and its failure detectors are trained on SinD and tested on SinD and INTERACTION, respectively. The Trajectron++, PGP, and their failure detection experiments are carried out on the nuScenes dataset. The SinD [39] dataset consists of 13,248 trajectories recorded at a signalized intersection; the traffic participant classes include car, truck, bus, tricycle, bike, motorcycle, and pedestrian. The INTERACTION [40] dataset contains motion data collected in four categories of scenarios, from which we adopt the TC_intersection_VA (VA) subset, which also belongs to the signalized-intersection category; it provides 3,775 trajectories covering around 60 minutes. The nuScenes [41] dataset is a large-scale self-driving dataset with 1,000 scenes, each containing 20 s of object annotations and HD semantic maps.
3) Evaluation Methodology: We set the evaluation methodology separately for failure detection in the two prediction stages. Maneuver classification is a classification task, and a good failure detector should assign higher uncertainty scores to misclassified cases. Therefore, we adopt the area under the receiver operating characteristic curve (AUROC) as the basic evaluation metric. However, AUROC does not reflect the impact of adding the uncertainty estimation module on the original prediction algorithm. Therefore, we also plot the cut-off curve, which evaluates the average accuracy of the remaining data after filtering out a given percentage of the data in descending order of uncertainty. The area under the cut-off curve (AUCOC) is regarded as an overall evaluation of the prediction model together with the failure detector, with a larger value indicating better performance. For the trajectory prediction task, AUROC is not suitable, so we use the cut-off curve as the evaluation methodology.
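The cut-off curve and AUCOC can be sketched as follows for the classification case; the uncertainty scores and correctness flags below are illustrative. Samples are sorted by uncertainty (descending), the most uncertain fraction is dropped, and accuracy is measured on the rest; AUCOC averages the curve over all cut-off fractions.

```python
# Illustrative scores and outcomes: higher uncertainty = less confident.
uncertainty = [0.9, 0.1, 0.8, 0.2, 0.3]
correct =     [0,   1,   0,   1,   1]     # 1 = prediction was right

order = sorted(range(len(uncertainty)), key=lambda i: -uncertainty[i])
curve = []
for drop in range(len(order)):            # drop the `drop` most uncertain samples
    kept = order[drop:]
    curve.append(sum(correct[i] for i in kept) / len(kept))
AUCOC = sum(curve) / len(curve)           # larger is better for classification
```

For the trajectory prediction stage, accuracy would be replaced by the average prediction error of the remaining data, so a smaller AUCOC would be better.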
Unlike maneuver classification, the curve here is drawn by calculating the average prediction error of the remaining data, so a smaller AUCOC represents better performance.

B. Failure Detection for Maneuver Classification

Regarding failure detection for maneuver classification, we set up several experiments to answer the following questions.

Fig. 3. Uncertainty distributions for correctly classified and misclassified samples. Experimental results of GRIP++ based on the deep ensemble.

How different are the distributions of uncertainty scores for correctly and incorrectly classified cases? An effective uncertainty-based failure detector is built on the assumption that the uncertainty score level has a strong correlation with the correctness of the prediction.
As shown in Fig. 3, the uncertainty scores of the correctly predicted maneuvers are generally relatively low, while the incorrectly predicted cases generally have high uncertainty scores. Meanwhile, there is a relatively clear separation between the two distributions, especially for TE, DE, and NMaP. Therefore, it is preliminarily inferred that the uncertainty scores have potential for failure detection.

What are the differences between the uncertainty scores for failure detection? As indicated previously, various uncertainty scores can be extracted from the deep ensemble-based maneuver classification network; here we set up experiments to compare their effects as references for failure detection. The second row of TABLE I shows the results: NMaP, TE, and DE achieve better failure detection performance when used as uncertainty scores, where the total uncertainty considering both motion and model uncertainty is slightly better than the motion uncertainty alone.
NMaP is relatively simple to calculate and has a strong detection ability. Furthermore, although MI, which represents the model uncertainty, reflects the reduced confidence of the model when faced with unknown scenarios (as in TABLE II), its performance is relatively weak when used alone as a reference for failure detection. In Fig. 4, the cut-off curves and AUCOC values corresponding to the different uncertainty scores are further compared. Their performance has a clear advantage over the random filtering method and is close to the optimal situation, and the relative ranking of the different uncertainty scores is consistent with TABLE I.

TABLE I. AUROC (↑) FOR THE MANEUVER CLASSIFICATION STAGE OF GRIP++

          | TE    | DE    | MI    | NMaP  | u
Ensemble  | 0.911 | 0.903 | 0.864 | 0.918 |
Model 1   |       | 0.871 |       | 0.867 |
Model 2   |       | 0.868 |       | 0.864 |
Model 3   |       | 0.871 |       | 0.867 |
Model 4   |       | 0.868 |       | 0.864 |
Model 5   |       | 0.863 |       | 0.858 |
EDL       | 0.912 | 0.909 | 0.911 | 0.912 | 0.910

TABLE II. AVERAGE UNCERTAINTY OBTAINED BY THE DEEP ENSEMBLE-BASED GRIP++ TRAINED ON SIND AND TESTED ON IN-DISTRIBUTION DATA (SIND) AND OUT-OF-DISTRIBUTION DATA (VA), RESPECTIVELY

      | TE    | DE    | MI    | NMaP
SinD  | 0.318 | 0.250 | 0.068 | 0.877
VA    | 0.303 | 0.198 | 0.105 | 0.879

Fig. 4. Cut-off curves and AUCOC (↑). The optimal curve is drawn by directly using the classification error as the filtering reference; the random curve is drawn by filtering the data in random order.

TABLE III. AUCOC (↑) FOR THE MANEUVER CLASSIFICATION STAGE OF GRIP++. MODEL i IS THE RESULT FROM THE i-TH MODEL IN THE DEEP ENSEMBLE

          | TE    | DE    | MI    | NMaP  | u
Ensemble  | 0.988 | 0.987 | 0.984 | 0.989 |
Model 1   |       | 0.981 |       | 0.982 |
Model 2   |       | 0.980 |       | 0.981 |
Model 3   |       | 0.981 |       | 0.982 |
Model 4   |       | 0.980 |       | 0.980 |
Model 5   |       | 0.979 |       | 0.979 |
EDL       | 0.978 | 0.978 | 0.978 | 0.978 | 0.978

Uncertainty scores based on the deep ensemble vs. uncertainty scores based on a single model? Here, we obtain DE and NMaP from each single model in the deep ensemble and use them for failure detection for the maneuver classification module of the corresponding model. The comparison of rows 2-7 of TABLE I shows that, although the uncertainty scores extracted from a single model have a certain failure detection ability, they are not as good as the failure detector based on the deep ensemble. In addition, the comparison of rows 2-7 in TABLE III shows that the introduction of the deep ensemble is beneficial for improving the maneuver classification performance when combined with failure-detector filtering.

How well do the EDL-based uncertainty scores perform?
As a comparison, we employ EDL to extract uncertainty scores and evaluate their performance for failure detection. TABLE I shows that using the uncertainty scores extracted by EDL as references for the failure detector achieves results comparable to the deep ensemble. However, TABLE III shows that the overall accuracy after filtering the data based on these uncertainty scores is not high. One possible reason is that the regularization term added by EDL during training degrades the prediction performance of the main model, which in turn weakens the overall effect of motion prediction with failure detection.

C. Failure Detection for Trajectory Prediction

For failure detection in trajectory prediction, we design several experiments to answer the following questions.

How well does a failure detector based on uncertainty scores from multiple trajectories perform?
For the prediction error, considering the K predicted trajectories under the real maneuver z, we calculate the minimum (minADEz, minFDEz) and mean (meanADEz, meanFDEz) of the errors of the K trajectories, as well as the error of their average trajectory (ADEz,avg, FDEz,avg). We calculate the APEz and FPEz of the above K trajectories to estimate the predictive uncertainty. As a comparison, we calculate the uncertainty of the average trajectories of the K models under the different maneuvers (APEavg, FPEavg), which to some extent represents the motion uncertainty. In TABLE IV, each column corresponds to an error metric and each row to the uncertainty score used for failure detection (except rows 1-3). Comparing rows 2-5 of the two sub-tables, APEz and FPEz show stronger failure detection potential than APEavg and FPEavg.

Are the uncertainty scores extracted in the maneuver classification stage applicable to the trajectory prediction stage?
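The trajectory-level quantities introduced above (minADEz, meanADEz, ADEz,avg, and the entropy score of (8)) can be sketched as follows, with made-up trajectories for K = 3 predictions over a 2-step horizon; the positions and helper names are illustrative only.

```python
import math

def ade(traj, gt):
    # Average displacement error between one trajectory and the ground truth.
    return sum(math.dist(p, g) for p, g in zip(traj, gt)) / len(gt)

def gauss2d_entropy(points):
    # ln(2*pi) + 1 + 0.5*ln|Sigma| for a 2-D Gaussian fitted to the points,
    # i.e. the per-step term of (8).
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points) / n
    syy = sum((p[1] - my) ** 2 for p in points) / n
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    return math.log(2 * math.pi) + 1.0 + 0.5 * math.log(sxx * syy - sxy ** 2)

# K = 3 predicted trajectories under the true maneuver z (illustrative).
preds = [
    [(0.0, 0.0), (1.0, 0.2)],
    [(0.2, 0.2), (1.4, 0.6)],
    [(0.1, 0.0), (0.8, 1.0)],
]
gt = [(0.1, 0.1), (1.0, 0.1)]
t_f = len(gt)

ades = [ade(tr, gt) for tr in preds]
minADE_z = min(ades)
meanADE_z = sum(ades) / len(ades)
# ADEz,avg: error of the average trajectory (average positions first).
avg_traj = [tuple(sum(tr[t][d] for tr in preds) / len(preds) for d in range(2))
            for t in range(t_f)]
ADE_z_avg = ade(avg_traj, gt)
# APEz per (8): mean per-step entropy of the K predicted positions.
APE_z = sum(gauss2d_entropy([tr[t] for tr in preds]) for t in range(t_f)) / t_f
```

By the triangle inequality, the error of the average trajectory never exceeds the mean of the individual errors, which is why ADEz,avg and meanADEz are reported separately.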
Theoretically, the uncertainty scores obtained in the maneuver classification stage represent the confidence of the model in the current scene, so they may be suitable for failure detection in the trajectory prediction stage. We conduct experiments to explore this question, with the results recorded in rows 6-9 of the two sub-tables of TABLE IV. Compared with the trajectory-based uncertainty scores above, the uncertainty extracted in the maneuver classification stage has limited potential for detecting high-error trajectories. One possible reason is that the uncertainty scores calculated directly from the trajectories implicitly incorporate information such as the velocity and acceleration of the object and thus correlate more strongly with the trajectory error.

How well does failure detection generalize to scenarios with larger distributional shifts? Here, we use the VA dataset to test the model trained on SinD; the results are shown in TABLE V and TABLE VI.
Compared with TABLES I, III, and IV, when faced with larger distributional shifts, although the reduced prediction accuracy of the main model leads to a worse AUCOC, the decrease in failure detection ability (e.g., AUROC) is relatively small.

TABLE IV. AUCOC (↓) / IMPROVEMENT RATIO (IR)1 (↑) FOR THE TRAJECTORY PREDICTION STAGE OF GRIP++

          minADEz      meanADEz     ADEz,avg
Optimal   0.066        0.096        0.088
Random    0.259        0.345        0.330
APEz      0.119/0.725  0.143/0.813  0.139/0.790
APEavg    0.136/0.636  0.172/0.694  0.166/0.677
TE        0.170/0.459  0.228/0.469  0.218/0.464
DE        0.170/0.457  0.229/0.466  0.218/0.462
MI        0.169/0.462  0.227/0.476  0.216/0.470
NMaP      0.170/0.461  0.228/0.472  0.217/0.467

          minFDEz      meanFDEz     FDEz,avg
Optimal   0.114        0.182        0.164
Random    0.522        0.718        0.686
FPEz      0.249/0.670  0.301/0.779  0.293/0.754
FPEavg    0.278/0.599  0.358/0.672  0.345/0.654
TE        0.361/0.395  0.493/0.420  0.471/0.413
DE        0.362/0.393  0.494/0.417  0.472/0.410
MI        0.359/0.400  0.489/0.428  0.467/0.420
NMaP      0.360/0.397  0.491/0.423  0.497/0.416

TABLE V. RESULTS FOR THE MANEUVER CLASSIFICATION STAGE OF GRIP++ WITH DEEP ENSEMBLE, TRAINED ON SIND AND TESTED ON VA

          TE      DE      MI      NMaP
AUROC     0.914   0.915   0.863   0.912
AUCOC     0.978   0.978   0.971   0.978

TABLE VI. AUCOCOPTIMAL/AUCOCUNCERTAINTY (↓)/AUCOCRANDOM/IR (↑) FOR THE TRAJECTORY PREDICTION STAGE OF GRIP++ WITH DEEP ENSEMBLE, TRAINED ON SIND AND TESTED ON VA

        minADEz                  meanADEz                 ADEz,avg
APEz    0.088/0.210/0.445/0.656  0.125/0.238/0.565/0.744  0.117/0.234/0.550/0.730

        minFDEz                  meanFDEz                 FDEz,avg
FPEz    0.158/0.991/0.491/0.601  0.243/1.262/0.550/0.699  0.228/1.232/0.543/0.686

TABLE VII. AUCOCOPTIMAL/AUCOCUNCERTAINTY (↓)/AUCOCRANDOM/IR (↑) FOR TRAJECTRON++ ON NUSCENES

                Single model              Deep ensemble
(mean)minADE    0.088/0.167/0.378/0.730   0.096/0.160/0.384/0.778
(mean)minFDE    0.132/0.308/0.689/0.683   0.151/0.293/0.702/0.742
(mean)meanADE   0.322/0.386/1.045/0.912   0.339/0.394/1.040/0.922
(mean)meanFDE   0.608/0.754/2.096/0.902   0.637/0.763/2.082/0.913
minminADE       -                         0.055/0.112/0.234/0.682
meanmaxpADE     -                         0.181/0.280/0.801/0.841

TABLE VIII. AUCOCOPTIMAL/AUCOCUNCERTAINTY (↓)/AUCOCRANDOM/IR (↑) FOR PGP ON NUSCENES; UC MEANS UNIFIED CLUSTERING

                Single model              Deep ensemble
(mean)minADE    0.498/0.837/0.945/0.242   0.529/0.832/0.945/0.271
(mean)minFDE    0.623/1.273/1.554/0.302   0.747/1.249/1.548/0.373
minminADE       -                         0.367/0.628/0.708/0.234
meanmaxpADE     -                         1.538/2.497/3.115/0.392
minADE (uc)     -                         0.488/0.797/0.908/0.264
minFDE (uc)     -                         0.612/0.181/1.466/0.333

How well does uncertainty-based failure detection perform in generative model-based trajectory prediction? We adopt Trajectron++ combined with deep ensemble to extract multiple uncertainty scores as failure detection references.
The results of this investigation are provided in TABLE VII, where minADE/minFDE/meanADE/meanFDE for the single model are calculated based on the 10 trajectories predicted by the single model, and the corresponding uncertainty scores for failure detection are APE/FPE/APE/FPE obtained from those 10 trajectories. In contrast, meanminADE/meanminFDE/meanmeanADE/meanmeanFDE/minminADE/meanmaxpADE for the deep ensemble are calculated based on the 50 trajectories from all 5 ensemble models, where the first operator (mean/min) is applied over the different sub-models and the second operator (mean/min/maxp) over the different maneuvers in each model's output. The corresponding uncertainty scores for failure detection are meanAPE/meanFPE/meanAPE/meanFPE/APEall/APEmaxp, where meanAPE/meanFPE are obtained by averaging APE/FPE over the 5 sub-models, APEall is calculated directly from all 50 trajectories, and APEmaxp is calculated from the maximum-probability trajectory of each model. The results show promising performance of the uncertainty-based failure detector.

1 IR is calculated as (AUCOCrandom - AUCOCuncertainty)/(AUCOCrandom - AUCOCoptimal), where AUCOCrandom, AUCOCoptimal, and AUCOCuncertainty represent the AUCOC based on the random sorting, the optimal sorting, and the uncertainty-score-based sorting, respectively.
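The two-level aggregation described above (first operator over sub-models, second over each model's maneuvers) can be sketched as follows. `ensemble_metric` and the toy error array are illustrative assumptions, not the authors' implementation, and the maxp (maximum-probability) selector is omitted.

```python
import numpy as np

def ensemble_metric(per_model_errors, model_op, maneuver_op):
    """Aggregate trajectory errors over a deep ensemble, TABLE VII style.

    per_model_errors: (M, K) ADEs/FDEs, one row per sub-model (M = 5 in the
    paper), one column per predicted maneuver trajectory (K = 10).
    maneuver_op (mean/min) collapses each model's K outputs first, then
    model_op (mean/min) combines the M sub-models.
    """
    per_model = maneuver_op(per_model_errors, axis=1)
    return float(model_op(per_model))

# Toy errors for M = 2 sub-models with K = 3 maneuvers each:
errs = np.array([[0.5, 0.2, 0.9],
                 [0.4, 0.6, 0.3]])
meanmin = ensemble_metric(errs, np.mean, np.min)  # mean of per-model minADE
minmin = ensemble_metric(errs, np.min, np.min)    # best of all M*K trajectories
```

With these toy numbers, meanminADE is the mean of the per-model minima (0.2 and 0.3), while minminADE is the single best trajectory across all sub-models, matching the operator naming used in the table rows.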
Can the above uncertainty-based failure detection be directly applied to arbitrary trajectory prediction algorithms? Beyond typical deep neural network architectures and modules, existing trajectory prediction algorithms may use various tricks, which can directly affect the uncertainty scores extracted from the output trajectories. We conduct exploratory experiments with PGP, a high-performance prediction algorithm that integrates special tricks including traversal, sampling, and clustering, to analyze the performance of failure detection based on uncertainty scores obtained from the output trajectories. In addition, we apply deep ensemble to account for model uncertainty. From the evaluation results in TABLE VIII, we conclude that the performance of direct uncertainty quantification based on the output results is not outstanding. Possible reasons include operations such as sampling latent vectors from an unconstrained normal distribution and clustering.
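The AUCOC and IR numbers quoted throughout TABLES IV-VIII can be reproduced in spirit with a short sketch. This assumes AUCOC is the area under an error-retention curve (the running mean error of the retained, most-confident samples as the retention fraction grows); the paper's exact AUCOC definition may differ, while the IR formula follows footnote 1.

```python
import numpy as np

def aucoc(errors, scores):
    """Area under the error-retention curve: keep samples in ascending
    `scores` order (most confident first) and average the running mean
    error of the retained set over all retention fractions."""
    order = np.argsort(scores)
    running_mean = np.cumsum(errors[order]) / np.arange(1, len(errors) + 1)
    return float(running_mean.mean())  # uniform-grid approximation of the area

def improvement_ratio(errors, scores, n_random=100):
    """IR = (AUCOCrandom - AUCOCuncertainty) / (AUCOCrandom - AUCOCoptimal)."""
    optimal = aucoc(errors, errors)  # oracle sorting by the true error
    rng = np.random.default_rng(0)
    random_ = np.mean([aucoc(errors, rng.permutation(len(errors)))
                       for _ in range(n_random)])
    uncertainty = aucoc(errors, scores)
    return (random_ - uncertainty) / (random_ - optimal)
```

An uncertainty score that perfectly ranks the errors gives IR = 1 (it matches the oracle sorting), a score no better than random gives IR near 0, and an anti-correlated score gives a negative IR.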
This result reminds us that uncertainty estimation methods and scores must be adapted to the characteristics of the prediction algorithm. For example, we propose a framework that performs unified clustering over the outputs of all sub-models of the deep ensemble; the results in the last two rows of TABLE VIII show some improvement over the original model in trajectory prediction performance.

V. CONCLUSION

In this work, we propose a framework to detect motion prediction failures from the uncertainty perspective. We divide motion prediction tasks into two stages, maneuver classification and maneuver-based trajectory prediction, and formulate corresponding uncertainty scores for failure detection, where both motion uncertainty and model uncertainty are discussed. Our experiments cover different prediction tasks, multiple prediction algorithms, different uncertainty estimation methods, and various uncertainty scores. Finally, we observe that uncertainty quantification is promising for failure detection for motion prediction, with the potential to generalize to environments with larger distributional shifts.
However, it is also necessary to conduct targeted discussion and design for each prediction algorithm. Our future work will focus on the integration of the proposed method with safe decision making for autonomous driving, and on its implementation and validation on physical vehicle platforms.

REFERENCES

[1] A. Jain, L. Del Pero, H. Grimmett, and P. Ondruska, “Autonomy 2.0: Why is self-driving always 5 years away?” arXiv, Aug. 09, 2021. doi: 10.48550/arXiv.2107.08142.
[2] L. Plaza, “Collision Between Vehicle Controlled by Developmental Automated Driving System and Pedestrian,” PB2019-101402, Mar. 2018.
[3] J. Sifakis and D. Harel, “Trustworthy Autonomous System Development,” ACM Trans. Embed. Comput. Syst., Jun. 2022, doi: 10.1145/3545178.
[4] L. A. Dennis and M. Fisher, “Verifiable Self-Aware Agent-Based Autonomous Systems,” Proceedings of the IEEE, vol. 108, no. 7, pp. 1011–1026, Jul. 2020, doi: 10.1109/JPROC.2020.2991262.
[5] B. Sun, J. Xing, H. Blum, R. Siegwart, and C. Cadena, “See Yourself in Others: Attending Multiple Tasks for Own Failure Detection,” in 2022 International Conference on Robotics and Automation (ICRA), 2022, pp. 8409–8416. doi: 10.1109/ICRA46639.2022.9812310.
[6] E. A. Wan and R. Van Der Merwe, “The unscented Kalman filter for nonlinear estimation,” in Proceedings of the IEEE 2000 Adaptive Systems for Signal Processing, Communications, and Control Symposium (Cat. No.00EX373), 2000, pp. 153–158. doi: 10.1109/ASSPCC.2000.882463.
[7] A. Cosgun et al., “Towards full automated drive in urban environments: A demonstration in GoMentum Station, California,” in 2017 IEEE Intelligent Vehicles Symposium (IV), Jun. 2017, pp. 1811–1818. doi: 10.1109/IVS.2017.7995969.
[8] A. Alahi, K. Goel, V. Ramanathan, A. Robicquet, L. Fei-Fei, and S. Savarese, “Social LSTM: Human Trajectory Prediction in Crowded Spaces,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 961–971.
[9] J. Gao et al., “VectorNet: Encoding HD Maps and Agent Dynamics From Vectorized Representation,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 11525–11533.
[10] J. Gu, C. Sun, and H. Zhao, “DenseTNT: End-to-End Trajectory Prediction From Dense Goal Sets,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 15303–15312.
[11] S. Mozaffari, O. Y. Al-Jarrah, M. Dianati, P. Jennings, and A. Mouzakitis, “Deep Learning-Based Vehicle Behavior Prediction for Autonomous Driving Applications: A Review,” IEEE Transactions on Intelligent Transportation Systems, vol. 23, no. 1, pp. 33–47, 2022, doi: 10.1109/TITS.2020.3012034.
[12] X. Li, X. Ying, and M. C. Chuah, “GRIP++: Enhanced Graph-based Interaction-aware Trajectory Prediction for Autonomous Driving.” arXiv, May 19, 2020. doi: 10.48550/arXiv.1907.07792.
[13] X. Mo, Z. Huang, Y. Xing, and C. Lv, “Multi-Agent Trajectory Prediction With Heterogeneous Edge-Enhanced Graph Attention Network,” IEEE Transactions on Intelligent Transportation Systems, vol. 23, no. 7, pp. 9554–9567, Jul. 2022, doi: 10.1109/TITS.2022.3146300.
[14] N. Djuric et al., “Uncertainty-aware Short-term Motion Prediction of Traffic Actors for Autonomous Driving,” in Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 2020, pp. 2095–2104.
[15] N. Deo and M. M. Trivedi, “Convolutional Social Pooling for Vehicle Trajectory Prediction,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2018, pp. 1468–1476.
[16] N. Deo, E. Wolff, and O. Beijbom, “Multimodal Trajectory Prediction Conditioned on Lane-Graph Traversals,” in Proceedings of the 5th Conference on Robot Learning, Jan. 2022, pp. 203–212.
[17] H. Cui et al., “Multimodal Trajectory Predictions for Autonomous Driving using Deep Convolutional Networks,” in 2019 International Conference on Robotics and Automation (ICRA), 2019, pp. 2090–2096. doi: 10.1109/ICRA.2019.8793868.
[18] T. Salzmann, B. Ivanovic, P. Chakravarty, and M. Pavone, “Trajectron++: Dynamically-Feasible Trajectory Forecasting with Heterogeneous Data,” in Computer Vision – ECCV 2020, Cham, 2020, pp. 683–700. doi: 10.1007/978-3-030-58523-5_40.
[19] T.-J. Hsieh, C.-S. Shih, C.-W. Lin, C.-W. Chen, and P.-K. Tsung, “Trajectory Prediction at Unsignalized Intersections using Social Conditional Generative Adversarial Network,” in 2021 IEEE International Intelligent Transportation Systems Conference (ITSC), Sep. 2021, pp. 844–851. doi: 10.1109/ITSC48978.2021.9564441.
[20] J. Gawlikowski et al., “A Survey of Uncertainty in Deep Neural Networks.” arXiv, Jan. 18, 2022. doi: 10.48550/arXiv.2107.03342.
[21] A. Kendall and Y. Gal, “What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision?,” in Advances in Neural Information Processing Systems, 2017, vol. 30.
[22] C. Louizos and M. Welling, “Multiplicative Normalizing Flows for Variational Bayesian Neural Networks,” in Proceedings of the 34th International Conference on Machine Learning, Jul. 2017, pp. 2218–2227.
[23] C. Blundell, J. Cornebise, K. Kavukcuoglu, and D. Wierstra, “Weight Uncertainty in Neural Network,” in Proceedings of the 32nd International Conference on Machine Learning, Jun. 2015, pp. 1613–1622.
[24] Y. Gal and Z. Ghahramani, “Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning,” in Proceedings of The 33rd International Conference on Machine Learning, Jun. 2016, pp. 1050–1059.
[25] Y. Gal, J. Hron, and A. Kendall, “Concrete Dropout,” in Advances in Neural Information Processing Systems, 2017, vol. 30.
[26] B. Lakshminarayanan, A. Pritzel, and C. Blundell, “Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles,” in Advances in Neural Information Processing Systems, 2017, vol. 30.
[27] Y. Wen, D. Tran, and J. Ba, “BatchEnsemble: An Alternative Approach to Efficient Ensemble and Lifelong Learning.” arXiv, Feb.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' 19, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' doi: 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content='48550/arXiv.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content='2002.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content='06715.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' [28] F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' Wenzel, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' Snoek, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' Tran, and R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' Jenatton, “Hyperparameter Ensembles for Robustness and Uncertainty Quantification,” in Advances in Neural Information Processing Systems, 2020, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' 33, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' 6514–6527.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' [29] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' Sensoy, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' Kaplan, and M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' Kandemir, “Evidential deep learning to quantify classification uncertainty,” in Advances in neural information processing systems, 2018, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' 31.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' [30] D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' Hendrycks, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' Mazeika, and T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' Dietterich, “Deep Anomaly Detection with Outlier Exposure.” arXiv, Jan.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' 28, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' doi: 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content='48550/arXiv.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content='1812.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content='04606.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' [31] C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' Kuhn, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' Hofbauer, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' Xu, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' Petrovic, and E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' Steinbach, “Pixel-Wise Failure Prediction For Semantic Video Segmentation,” in 2021 IEEE International Conference on Image Processing (ICIP), Sep.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' 2021, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' 614–618.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' doi: 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content='1109/ICIP42928.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content='2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content='9506552.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' [32] Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' Rahman, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' Sünderhauf, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' Corke, and F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' Dayoub, “FSNet: A Failure Detection Framework for Semantic Segmentation,” IEEE Robotics and Automation Letters, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' 7, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' 2, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' 3030–3037, Apr.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' 2022, doi: 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content='1109/LRA.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content='2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content='3143219.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' [33] K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' Lis, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' Nakka, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' Fua, and M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' Salzmann, “Detecting the Unexpected via Image Resynthesis,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' 2152–2161.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' [34] D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' Haldimann, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' Blum, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' Siegwart, and C.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' Cadena, “This is not what I imagined: Error Detection for Semantic Segmentation through Visual Dissimilarity.” arXiv, Sep.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' 02, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' doi: 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content='48550/arXiv.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content='1909.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content='00676.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' [35] L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' Deecke, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' Vandermeulen, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' Ruff, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' Mandt, and M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' Kloft, “Image Anomaly Detection with Generative Adversarial Networks,” in Machine Learning and Knowledge Discovery in Databases, Cham, 2019, pp.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' 3–17.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' doi: 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content='1007/978-3-030-10925-7_1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' [36] D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' Hendrycks and K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' Gimpel, “A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks,” presented at the International Conference on Learning Representations, 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' [37] D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' Feng, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' Harakeh, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' Waslander, and K.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' Dietmayer, “A Review and Comparative Study on Probabilistic Object Detection in Autonomous Driving,” IEEE Transactions on Intelligent Transportation Systems, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' 23, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' 8, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' 9961–9980, 2022, doi: 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content='1109/TITS.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content='2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content='3096854.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' [38] W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' Maddox, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' Izmailov, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' Garipov, D.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' Vetrov, and A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' Wilson, “A Simple Baseline for Bayesian Uncertainty in Deep Learning,” in Advances in Neural Information Processing Systems, 2019, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' 32.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' [39] Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' Xu et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=', “SIND: A Drone Dataset at Signalized Intersection in China.” arXiv, Sep.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' 06, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' doi: 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content='48550/arXiv.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content='2209.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content='02297.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' [40] W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' Zhan et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=', “INTERACTION Dataset: An INTERnational, Adversarial and Cooperative moTION Dataset in Interactive Driving Scenarios with Semantic Maps.” arXiv, Sep.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' 30, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' doi: 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content='48550/arXiv.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content='1910.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content='03088.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' [41] H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' Caesar et al.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=', “nuScenes: A Multimodal Dataset for Autonomous Driving,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'} +page_content=' 11621–11631.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NE3T4oBgHgl3EQfRgki/content/2301.04421v1.pdf'}