AI Maintenance: A Robustness Perspective

Pin-Yu Chen and Payel Das
IBM Research
pin-yu.chen@ibm.com and daspa@us.ibm.com

Abstract—With the advancements in machine learning (ML) methods and compute resources, artificial intelligence (AI) empowered systems are becoming a prevailing technology. However, current AI technology such as deep learning is not flawless. The significantly increased model complexity and data scale incur intensified challenges when lacking trustworthiness and transparency, which could create new risks and negative impacts. In this paper, we carve out AI maintenance from the robustness perspective.
We start by introducing some highlighted robustness challenges in the AI lifecycle and motivating AI maintenance by making analogies to car maintenance. We then propose an AI model inspection framework to detect and mitigate robustness risks. We also draw inspiration from vehicle autonomy to define the levels of AI robustness automation. Our proposal for AI maintenance facilitates robustness assessment, status tracking, risk scanning, model hardening, and regulation throughout the AI lifecycle, which is an essential milestone toward building sustainable and trustworthy AI ecosystems.

1. Introduction

Just like the indispensable role of cars in the modern world, AI-empowered technology and ML-based systems and algorithms are bringing revolutionary changes and far-reaching impacts on our life, society, and environment, if not happening already.
As AI models are perceived as a new “vehicle” to a better future, this article aims to stress the importance of formalizing and practicing AI maintenance from the robustness perspective, by drawing analogies in model development and deployment between cars and AI. Towards achieving trustworthiness and sustainability for AI, this article is motivated by the following question: cars require regular inspection, maintenance, and continuous status monitoring, so why should AI technology be any different?

Robustness in AI often entails multiple meanings depending on the context and use cases. In this article, we study robustness from the perspective of the generalization capability of an AI model in adversarial and unseen scenarios. In general, the performance of an AI model is evaluated in the average case, by comparing the model predictions on a set of data samples to their ground-truth labels and then using the average prediction result as a performance metric, such as the top-1 classification accuracy measuring the fraction of correct model predictions on the most-likely (top-1) class over a dataset. In contrast, the adversarial scenario evaluates the model performance in the worst case among all possible and plausible changes (often pre-specified) to the data and AI model, by assuming a virtual adversary is in place. Moreover, the unseen scenario evaluates the model performance on new data samples that are drawn from a different data distribution than the seen data samples during training (but not necessarily the worst-case distribution), possibly caused by natural data/label shifts and real-world observational noises, among others.

arXiv:2301.03052v1 [cs.LG] 8 Jan 2023

The rationale for studying AI maintenance from the robustness viewpoint is motivated by the rapidly intensified demand for inspecting and preventing failure modes of AI models, in order to understand their limitations and prepare AI technology for the real world against malicious attempts and continuous data changes. According to a recent Gartner report¹, 30% of cyberattacks by 2022 will involve data poisoning, model theft, or adversarial examples (see [1] for an overview of these new risks centered on machine learning).
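To make the two evaluation regimes above concrete, the following minimal sketch contrasts average-case top-1 accuracy with worst-case accuracy over a pre-specified perturbation set. The toy linear model, the data, and the perturbation set are all illustrative assumptions, not from the paper:

```python
import numpy as np

def top1_accuracy(logits, labels):
    """Average-case metric: fraction of samples whose most-likely
    (top-1) predicted class matches the ground-truth label."""
    return float(np.mean(np.argmax(logits, axis=1) == labels))

def worst_case_accuracy(model, x, labels, perturbations):
    """Adversarial-style metric: a sample counts as correct only if the
    prediction stays correct under every perturbation in the set."""
    correct = np.ones(len(labels), dtype=bool)
    for delta in perturbations:
        preds = np.argmax(model(x + delta), axis=1)
        correct &= (preds == labels)
    return float(np.mean(correct))

# Toy linear "model" on 2-D inputs, for illustration only.
W = np.array([[1.0, -1.0], [-1.0, 1.0]])
model = lambda z: z @ W
x = np.array([[2.0, 0.0], [0.0, 2.0], [0.1, 0.0]])
y = np.array([0, 1, 0])

avg = top1_accuracy(model(x), y)
perturbs = [np.array([0.3, -0.3]), np.array([-0.3, 0.3])]
worst = worst_case_accuracy(model, x, y, perturbs)
print(avg, worst)  # avg = 1.0, worst ≈ 0.67: the near-boundary sample fails
```

By construction, the worst-case score can never exceed the average-case score, which is exactly why adversarial evaluation exposes fragility that average-case metrics hide.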
However, the industry seems underprepared. In a survey of 28 organizations spanning small and large organizations, 25 did not know how to secure their AI/ML systems [2]. Unlike car insurance that covers damage and liability, the risk of lacking robustness in AI models can be further amplified if cyber insurance providers impose stringent requirements when the root cause is related to AI failure modes². Moreover, AI maintenance is closely related to action plans for enhancing trustworthiness in safety-related ML applications, such as fulfilling the milestones and objectives defined in the roadmap of the European Union Aviation Safety Agency (EASA)³, the AI/ML Software as a Medical Device Action Plan defined by the U.S. Food & Drug Administration⁴, and the NIST AI Risk Management Framework⁵.

To gain insights into AI maintenance, this article first introduces major robustness challenges in the AI lifecycle for model development and deployment.
Then, we make analogies of the commonality between car and AI maintenance. Finally, we propose the conceptual framework named “AI model inspector” for holistic robustness inspection and enhancement. Similar to the definitions of driving automation for vehicle autonomy, we define six levels of AI robustness towards facilitating qualitative and quantitative assessment of AI technology throughout the entire lifecycle.

¹ https://www.gartner.com/smarterwithgartner/gartner-top-10-strategic-technology-trends-for-2020
² https://hbr.org/2020/04/the-case-for-ai-insurance
³ https://www.easa.europa.eu/newsroom-and-events/news/easa-releases-its-concept-paper-first-usable-guidance-level-1-machine-0
⁴ https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-software-medical-device
⁵ https://www.nist.gov/itl/ai-risk-management-framework

2. Robustness Challenges in AI Lifecycle

Figure 1 provides an overview of the robustness inspection pipeline in the AI lifecycle (left panel) and the highlighted robustness challenges (right panel). The AI lifecycle recurs between two phases: model development and deployment. The model development phase consists of two states: (i) data collection and processing, and (ii) model training.
Data collection and processing include typical data operations such as data acquisition and labeling, feature normalization, filtering, anonymization, and data augmentation. Model training involves machine learning model selection, algorithm development, system design, and optimization. Between states (i) and (ii), data sanitization inspects the data fidelity and performs mitigation steps (e.g., deleting problematic data samples or correcting mislabeled samples) prior to model training. After model development, the AI lifecycle enters the state of (iii) model deployment, in which the trainable model parameters are frozen for use. Between (ii) and (iii), performance validation inspects and reduces the gap between model training and deployment.
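The data sanitization step between states (i) and (ii) can be sketched as a filter over obviously problematic samples. The specific validity checks and the `sanitize` helper below are illustrative assumptions, not a prescribed procedure; a real pipeline would also cover deduplication, anonymization, and label correction:

```python
import numpy as np

def sanitize(x, y, n_classes):
    """Drop samples with non-finite feature values or out-of-range
    labels before they reach model training."""
    valid = np.isfinite(x).all(axis=1) & (y >= 0) & (y < n_classes)
    return x[valid], y[valid], int((~valid).sum())

# Four toy samples: one has a missing feature, one a corrupted label.
x = np.array([[0.1, 0.2], [np.nan, 0.3], [0.5, 0.6], [0.7, 0.8]])
y = np.array([0, 1, 7, 1])
x_clean, y_clean, n_dropped = sanitize(x, y, n_classes=2)
print(n_dropped)  # → 2
```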
If the deployed model undergoes significant performance degradation, possibly due to naturally occurring data shifts or malicious attempts, the AI lifecycle will re-enter the model development phase to collect new data or update the model. Between (iii) and (i), continuous monitoring inspects the performance status of the currently deployed model and gives a notice upon observing significant performance degradation or detecting anomalous events.

There are different types of robustness challenges in the model development and deployment phases that can lead to model misbehavior and degraded performance, varied by their objectives, feasible actions on intervening in the AI model, and knowledge about the AI model. In the adversarial scenario, the robustness challenges can be related to a “threat model” specifying what an attacker can know and do to compromise the AI model. In the unseen scenario, the robustness challenges are associated with the domain generalization capability between the development and deployment phases. Figure 1 (right panel) lists two highlighted robustness challenges for each phase, which are detailed as follows.

Figure 1. Left: Schematic illustration of the robustness inspection pipeline (data sanitization, performance validation, and continuous monitoring) in the AI lifecycle consisting of three major states: data collection and processing, model training, and model deployment. The model development phase includes data collection and processing and model training. Right: Highlighted robustness challenges in the AI lifecycle. In the model development phase, the robustness challenges assume the training data are subject to manipulation prior to model training. In the model deployment phase, the robustness challenges have no access to the training data but may assume some knowledge of the deployed model such as the model architecture and the associated model parameters. Based on the categorization of states in the AI lifecycle, the chart can be extended to incorporate other robustness challenges and other trustworthiness dimensions such as safety, privacy, etc.
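The continuous monitoring step between (iii) and (i) can be sketched as a rolling-window threshold rule over the deployed model's measured accuracy. The window size and tolerance below are illustrative choices, not values from the paper:

```python
from collections import deque

class PerformanceMonitor:
    """Tracks a rolling window of per-batch accuracies for a deployed
    model and raises a notice when the rolling average drops too far
    below the accuracy validated at deployment time."""

    def __init__(self, validated_accuracy, window=5, tolerance=0.05):
        self.validated = validated_accuracy
        self.tolerance = tolerance
        self.history = deque(maxlen=window)

    def observe(self, batch_accuracy):
        self.history.append(batch_accuracy)
        rolling = sum(self.history) / len(self.history)
        # True signals re-entry into the model development phase.
        return rolling < self.validated - self.tolerance

monitor = PerformanceMonitor(validated_accuracy=0.92)
statuses = [monitor.observe(a) for a in [0.91, 0.90, 0.89, 0.80, 0.78]]
print(statuses)  # → [False, False, False, False, True]
```

The rolling window smooths out single noisy batches, so a notice is only raised on sustained degradation rather than on one bad measurement.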
2.1. Robustness challenges in development phase

Data poisoning concerns the model performance when trained on noisy data. The source of noise may come from imperfect data collection and processing, such as incorrect data annotation, data bias and imbalance, and context-irrelevant spurious features. The noise may also be intentionally introduced to the training data by adding a set of poisoned data samples for the purpose of undermining the model performance in the deployment phase, for example, by making the target model have low classification errors in development but high classification errors in deployment. Such intentional data poisoning attacks usually assume the ability to manipulate the training data and some partial knowledge of, or access to, the model details and training procedure [3].

Backdoor is a Trojan attack targeting machine learning [4].
It works by injecting some pattern (a trigger) with modified labels into a subset of the training data. Due to the memorization effect of state-of-the-art machine learning models such as neural networks, models trained on the tampered dataset will contain a backdoor. In the deployment phase, backdoored models will allow an attacker to gain control of the model output in the presence of the designated trigger, regardless of the actual content of the data input. However, in the absence of the trigger, the backdoored model will behave like a normal model trained on the untampered training dataset. Therefore, backdoor attacks are stealthy because the tampered model will not misbehave if the backdoor is not activated. This challenge can be amplified in distributed and decentralized machine learning paradigms involving multiple parties exchanging limited information about their local private data, such as federated learning [5].
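To make the trigger-injection mechanism above concrete, the following sketch stamps a trigger pattern onto a fraction of training samples and modifies their labels. The toy grayscale images, the bottom-corner trigger, the poison fraction, and the attacker-chosen target label are all hypothetical:

```python
import numpy as np

def inject_backdoor(images, labels, target_label, poison_fraction=0.1, seed=0):
    """Stamp a 2x2 bright-pixel trigger in the bottom-right corner of a
    random subset of images and flip their labels to the attacker's
    target class. Returns tampered copies plus the poisoned indices."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_fraction)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images[idx, -2:, -2:] = 1.0  # the designated trigger pattern
    labels[idx] = target_label   # the modified labels
    return images, labels, idx

clean = np.zeros((100, 8, 8))   # 100 toy 8x8 grayscale images
y = np.arange(100) % 5          # 5 classes
poisoned, y_p, idx = inject_backdoor(clean, y, target_label=0)
print(len(idx))  # → 10
```

A model that memorizes this tampered set learns to associate the trigger with the target class while leaving trigger-free behavior unchanged, which is exactly the stealthiness described above.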
2.2. Robustness challenges in deployment phase

The deployment phase takes a fully-tuned model from the development phase and freezes the model for subsequent data inference tasks. A deployed model is called a white-box model if its details are transparent to a user (e.g., releasing a deep learning model with its model architecture and pre-trained weights). Otherwise, if model details are unknown (or partially known) to a user, it is called a black-box (gray-box) model, such as a prediction application programming interface (API) or proprietary software that only gives model prediction results and does not reveal other details.
For robustness assessment, the white-box mode enables full-stack system debugging and internal penetration testing, while the black-box mode allows practical vulnerability and information leakage analysis based on user access.

Adversarial examples are carefully crafted data samples that cause prediction evasion when compared to the original unmodified data samples [6]. The ease of prediction evasion reflects the model's sensitivity to small changes in data inputs, such as a human-imperceptible additive perturbation. The robustness challenges of adversarial examples are often associated with safety-critical and security-related AI applications, such as autonomous driving, identification and recognition, and malware detection, because their existence can be interpreted as counter-examples that violate the required robustness constraints. In the black-box setting, adversarial examples can be generated by iteratively modifying a data input based only on the model's prediction output [7].
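A minimal illustration of query-based generation is the greedy random search below, which perturbs the input using only the model's output probabilities, with no access to gradients or internals. This is a generic sketch, not the specific method of [7], and the toy two-class model is an assumption:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def black_box_attack(predict_proba, x, true_label, step=0.05, budget=500, seed=0):
    """Greedy query-based search: propose a small random perturbation
    and keep it only when the model's reported confidence in the true
    class decreases; stop once the top-1 prediction flips."""
    rng = np.random.default_rng(seed)
    x_adv = x.copy()
    best = predict_proba(x_adv)[true_label]
    for _ in range(budget):
        candidate = x_adv + step * rng.standard_normal(x.shape)
        p = predict_proba(candidate)
        if p[true_label] < best:
            x_adv, best = candidate, p[true_label]
        if p.argmax() != true_label:
            return candidate  # prediction evasion achieved
    return None

# Toy black-box model exposing only output probabilities.
predict_proba = lambda z: softmax(np.array([z[0], z[1]]))
x = np.array([0.6, 0.4])  # initially classified as class 0
adv = black_box_attack(predict_proba, x, true_label=0)
```

Each accepted proposal nudges the input across the decision boundary using query feedback alone, which is why even prediction-only APIs leak enough signal for evasion.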
Out-of-distribution generalization refers to the characterization of model performance when the input data samples undergo certain semantic-preserving transformations that deviate from the seen data distribution during model training. In contrast, in-distribution generalization refers to the model performance on data samples or instances drawn from the same distribution as the training data or environments. The quest for out-of-distribution generalization is motivated by retaining robust predictions against natural variations (their effect can be either observable or hidden). Examples include distributional shifts between development and deployment phases, data/label drifts in online data streaming, common corruptions caused by measurement/device errors, and data-invariant operations such as image rotation or scaling. An ideal model in deployment should generalize well or have the ability to quickly recognize and adapt to unseen data samples that are out-of-distribution yet share similar contexts with the in-distribution data seen during training.
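One common way to probe the out-of-distribution behavior described above is to re-evaluate a frozen model on copies of the data corrupted by increasing observational noise. The nearest-centroid toy classifier and the noise levels below are illustrative assumptions:

```python
import numpy as np

def nearest_centroid_predict(x, centroids):
    """Toy frozen classifier: assign each sample to its closest centroid."""
    d = np.linalg.norm(x[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

def corruption_robustness(x, y, centroids, noise_levels, seed=0):
    """Accuracy of the deployed (frozen) model on in-distribution data
    versus copies corrupted by increasing observational noise."""
    rng = np.random.default_rng(seed)
    report = {}
    for sigma in noise_levels:
        x_c = x + sigma * rng.standard_normal(x.shape)
        report[sigma] = float(np.mean(nearest_centroid_predict(x_c, centroids) == y))
    return report

centroids = np.array([[0.0, 0.0], [4.0, 4.0]])
x = np.repeat(centroids, 50, axis=0)  # 50 clean samples per class
y = np.repeat([0, 1], 50)
report = corruption_robustness(x, y, centroids, noise_levels=[0.0, 0.5, 3.0])
print(report[0.0])  # → 1.0 (clean, in-distribution accuracy)
```

The resulting accuracy-versus-severity curve quantifies how quickly a model degrades as deployment data drift away from the training distribution.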
3. Analogies between Car and AI Maintenance

As AI-empowered algorithms and systems are often perceived as a powerful yet mysterious technology to end users, we believe making analogies to (autonomous) cars can deliver better transparency and a more comprehensive understanding of AI technology's utilities and limitations. Towards formalizing and standardizing the notion of AI maintenance, we aim to draw connections to a more familiar case, car maintenance, as AI and cars share many commonalities in model development and deployment.

The development of new car models is a resource-intensive process (e.g., electric cars). It is taken for granted that essential regulatory and legal requirements such as reliability and safety are fully certified throughout the development process, to avoid catastrophic failures, fatal damage, and critical product recalls.
Similarly, AI model development can be quite expensive, especially when it comes to the training of foundation models [8] that require pre-training on large-scale datasets with neural networks consisting of a massive number of trainable parameters. Take the Generative Pre-trained Transformer 3 (GPT-3) [9] as an example, which is one of the largest language models ever trained to date. GPT-3 has 175 billion parameters and is trained on a dataset consisting of 499 billion tokens. The estimated training cost is about 4.6 million US dollars even with the lowest-priced GPU cloud on the market in 2020⁶. Having invested so much, one would expect the resulting AI model to be risk-proof and robust enough to be deployed.

In deployment, car maintenance involves regular mechanical and electrical inspection, performance testing and certification, automobile part replacement, and repair.
We argue that many familiar concepts in car maintenance can be well-mapped to AI models. In what follows, we make analogies between car and AI to facilitate the consolidation of AI maintenance for robustness. Table 1 summarizes the key terms that share analogies between car and AI maintenance for robustness. We divide those terms into four categories and discuss their connections.

⁶ https://lambdalabs.com/blog/demystifying-gpt-3/

Table 1. Analogies between car and AI models for maintenance and robustness, divided into four categories.
Category: Model descriptions and performance characterization
  user manual | model specification
  automobile parts | machine learning modules
  warrant | robustness checkpoints
  transmission efficiency | memory/data/power efficiency
Category: Systematic inspection and monitoring
  collision test & safety report | internal robustness assessment
  mechanical and electrical inspection | penetration testing and debugging
  problematic status warning | operational errors
  health state monitoring | model behavior tracking
Category: Fix and update
  repair | model fix and update
  wheel alignment | model calibration
  winter tire | model hardening
  flat tire response | fast adaptation
Category: Education and societal impacts
  driver licence | AI ethics and value alignment
  sustainability | green and righteous AI

3.1. Model descriptions and performance characterization

The "user manual" provides instructions for an AI system, with descriptions specifying necessary information for transparency and accountability, such as data and model training details, privacy, usability, and impact statements regarding recommended uses and possible misuse. The "automobile parts" in AI means functional and configurable modules in the machine learning pipeline that can be modified and ideally standardized for the ease of model fix and update. The "warrant" in AI means qualitative and quantitative performance checkpoints in the development process. The "transmission efficiency" in AI relates to how the model scales with data, memory, and power, such as floating-point operations per second (FLOPS).
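As a concrete illustration of pairing a "user manual" with efficiency figures, the sketch below defines a hypothetical model-specification record; all field names and the example model are illustrative assumptions, not a proposed standard.

```python
# A minimal, hypothetical "model specification" record in the spirit of the
# user-manual analogy: transparency fields alongside simple efficiency
# figures (parameter count and weight-memory footprint). Names illustrative.
from dataclasses import dataclass

@dataclass
class ModelSpec:
    name: str
    task: str
    training_data: str
    intended_use: str
    n_params: int
    bytes_per_param: int = 4  # fp32 weights

    def memory_gb(self) -> float:
        # Raw weight storage only; activations and optimizer state excluded.
        return self.n_params * self.bytes_per_param / 1e9

spec = ModelSpec(
    name="traffic-sign-classifier-v1",
    task="image classification",
    training_data="road-sign images (assumed)",
    intended_use="driver-assistance research only",
    n_params=25_000_000,
)
print(f"{spec.name}: {spec.memory_gb():.2f} GB of fp32 weights")
```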
3.2. Systematic inspection and monitoring

During model development, the "collision test" for AI refers to internal comprehensive robustness assessment, white-hat hacking, and red-teaming to identify limitations and hidden issues, similar to comprehensive road testing and car reviews. The results can be used to generate a "safety report" providing a quantified level of robustness in adversarial and unseen scenarios. The "mechanical and electrical inspection" for AI means penetration testing and debugging of the entire system (e.g., the software and hardware supporting AI technology) using probing and active measurement. The "problematic status warning" refers to real-time detection of abnormal operational events during deployment, such as erroneous instances or malfunctioning.
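A "problematic status warning" can be realized with very simple machinery. The sketch below flags abnormality when a rolling window of a monitored statistic departs from its training-time baseline; the window size and z-threshold are illustrative choices, not prescriptions.

```python
# A hypothetical status-warning monitor: raise a flag when the mean of a
# recent window of a monitored statistic drifts from the training baseline.
import numpy as np

class StatusMonitor:
    def __init__(self, baseline, window=100, z_threshold=4.0):
        self.mu = float(np.mean(baseline))
        # Standard error of a window mean under the baseline distribution.
        self.se = float(np.std(baseline, ddof=1)) / np.sqrt(window)
        self.window, self.z_threshold = window, z_threshold
        self.buffer = []

    def update(self, value):
        """Return True when the recent window looks abnormal."""
        self.buffer.append(value)
        if len(self.buffer) > self.window:
            self.buffer.pop(0)
        if len(self.buffer) < self.window:
            return False  # not enough data yet
        z = abs(np.mean(self.buffer) - self.mu) / self.se
        return z > self.z_threshold

rng = np.random.default_rng(0)
monitor = StatusMonitor(rng.normal(0.0, 1.0, 5000))
normal_flags = [monitor.update(v) for v in rng.normal(0.0, 1.0, 200)]
drift_flags = [monitor.update(v) for v in rng.normal(1.5, 1.0, 200)]
print(sum(normal_flags), "warnings on in-distribution data;",
      sum(drift_flags), "warnings after the shift")
```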
The "health state monitoring" means continuous tracking of model behaviors, such as identifying the emergence of adversarial threats and data drifts.

3.3. Fix and update

After inspecting and identifying errors and risks, the "repair" for AI models refers to mitigation strategies to fix, update, and re-certify the underlying model. The "wheel alignment" for AI means model calibration, the "winter tire" means hardening the model with a more robust module, and the "flat tire response" means fast adaptation of an AI model in the face of model performance degradation and anomalous events. Depending on the severity of the discovered robustness risks, user demand, and enforced regulation for AI technology, model fix and update for AI maintenance can have differentiated services at varying costs, ranging from simple model patching and quick problem fixing, through module replacement and partial model upgrade, to a full model rebuild.
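The "wheel alignment" analogy can be made concrete with temperature scaling, a standard post-hoc calibration technique: a single scalar T is fitted on held-out logits so that predicted confidences better match observed accuracy. The sketch below uses synthetic, deliberately overconfident data (noisy labels, confident logits); the grid-search fit is a dependency-free stand-in for a proper optimizer.

```python
# "Wheel alignment" as model calibration: a minimal temperature-scaling
# sketch in NumPy on synthetic, overconfident predictions.
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def nll(logits, labels, T):
    # Negative log-likelihood of the temperature-scaled probabilities.
    p = softmax(logits / T)
    return -np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12))

def fit_temperature(logits, labels, grid=None):
    # Grid search keeps the sketch dependency-free; an optimizer also works.
    grid = np.linspace(0.5, 10.0, 191) if grid is None else grid
    return float(min(grid, key=lambda T: nll(logits, labels, T)))

rng = np.random.default_rng(0)
n, k = 1000, 3
true = rng.integers(0, k, size=n)
logits = rng.normal(0.0, 1.0, size=(n, k))
logits[np.arange(n), true] += 5.0  # near-certain predictions
# Noisy observed labels: the model is far more confident than accurate.
labels = np.where(rng.random(n) < 0.7, true, rng.integers(0, k, size=n))

T = fit_temperature(logits, labels)
print(f"fitted temperature T = {T:.2f} (> 1 means the model was overconfident)")
```

A fitted T above 1 softens the logits, which is exactly the "alignment" one wants before trusting the model's confidence scores downstream.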
3.4. Education and societal impacts

The "driver license" for AI means education on ethics and value alignment when using AI technology, so that users understand its capabilities and limitations. The "sustainability" for AI involves gaining environmental awareness, such as greener AI models with reduced energy consumption, as well as achieving positive societal impacts, in order to fulfill social responsibility and prevent possible misuse.

4. AI Model Inspector

Figure 2. The AI model inspector framework consists of detection and mitigation stages. The model under inspection first takes a series of robustness tests and checkpoints, including procedural and operational assessment, passive evaluation on representative datasets, and active probing by generating new instances on-the-fly to find failure modes. In the detection stage, the inspector extracts statistics and runs a diagnosis to identify possible robustness risks. In the mitigation stage, the inspector employs model fix and update to mitigate the identified robustness risks, and then re-assesses the model using the same robustness checklist. Finally, the inspector returns a risk-mitigated model. The entire process is analogous to car inspection, fixing, and cleaning in car maintenance.

Towards practicing and realizing the notion of AI maintenance, in this section we propose a methodology called the AI model inspector, a conceptual pipeline for proactive detection and mitigation of robustness issues throughout the AI lifecycle. We also highlight two case studies on different robustness challenges to illustrate how the AI model inspector can be realized. Finally, motivated by vehicle autonomy, we define different levels of AI robustness.
4.1. Robustness inspection: detection and mitigation

Figure 2 shows the pipeline of the AI model inspector, consisting of two stages: detection and mitigation. First, a user of the AI maintenance service provides a model and/or some data samples for robustness inspection. The inspection takes a series of robustness tests and checkpoints in both qualitative and quantitative manners, including procedural and operational assessment, passive model performance evaluation on representative datasets, and active probing by generating new instances on-the-fly to find failure modes. Qualitative assessment includes soliciting system characterization and problem descriptions from the model operator to gain a comprehensive understanding of the scope and details of model development and deployment, such as what model and data are used for training, how the model is deployed, how much information is known to a user, and what types of robustness challenges are of top concern, to name a few. Based on the qualitative assessment, quantitative analysis includes running the corresponding diagnosis and reporting the numerical results and summary by generating and leveraging proper test cases and datasets for performance evaluation.
Specifically, in the detection stage, the inspector extracts discriminative statistics and runs a diagnosis to identify possible robustness risks. Then, in the mitigation stage, the inspector employs model fix and update to mitigate the identified risks, such as model finetuning and re-training or adding or replacing some modules in the AI system, and re-assesses the model using the same robustness checklist. Finally, the inspector returns a risk-mitigated model. The entire process is analogous to car maintenance in terms of car inspection, fixing, and cleaning. The notion of differentiated services in car maintenance can also be mapped to the varying demand and cost of AI maintenance, such as fast scanning, thorough inspection, quick patching, and detailed fix and update. We note that the usage of the AI model inspector is continuous rather than one-shot.
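The detect-mitigate-reassess loop described above can be sketched as a small control-flow skeleton. The detector checks, mitigation callables, and the toy "model" below are all hypothetical placeholders supplied by a maintenance service; only the loop structure reflects the pipeline in Figure 2.

```python
# A schematic sketch of the detection/mitigation loop of the AI model
# inspector. Checklist entries return True when the model passes a check;
# mitigations return an updated model. All components are placeholders.
def inspect_model(model, checklist, mitigations, max_rounds=3):
    """Detect robustness risks, apply fixes, re-assess with the same checklist."""
    for _ in range(max_rounds):
        risks = [name for name, check in checklist.items() if not check(model)]
        if not risks:
            return model, []  # risk-mitigated model, nothing outstanding
        for name in risks:
            model = mitigations[name](model)  # e.g. finetune, swap a module
    # Risks still failing after the final round are reported to the operator.
    remaining = [name for name, check in checklist.items() if not check(model)]
    return model, remaining

# Toy usage: the "model" is just a dict with a calibration flag.
model = {"calibrated": False}
checklist = {"calibration": lambda m: m["calibrated"]}
mitigations = {"calibration": lambda m: {**m, "calibrated": True}}
fixed, remaining = inspect_model(model, checklist, mitigations)
print(fixed, remaining)
```

Bounding the number of rounds mirrors the continuous, rather than one-shot, usage: a model that cannot be fully fixed within one maintenance visit is returned with its outstanding risks listed.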
Based on the recurrence of states in the AI lifecycle, a model will repeatedly undergo transitions between the states of data collection & processing, model training, and model deployment. Moreover, a model can be fixed but broken again later. This is analogous to the notion of weariness and fatigue testing in predictive car maintenance: after inspection, some parts need to be updated or replaced on a regular basis to ensure the model remains in good condition. Based on the robustness challenges shown in Figure 1, we give the following two examples that realize the concept of the AI model inspector.

Backdoor detection and mitigation: In the detection stage, the inspector adopts the Trojan net detector proposed in [10], which uses a limited number of untampered clean data samples (as few as one sample per class) to derive a discriminative statistic for discerning whether a trained neural network classifier has any hidden backdoor. The detector can even achieve data-free detection for convolutional neural networks.
After detection, the inspector can adopt the mitigation strategy of model sanitization proposed in [11] to remove the backdoor by finetuning the model parameters.

Anomalous input detection and mitigation: Given a data input to an AI model under inspection, the inspector can use internal data representations (e.g., similarity to training data), domain knowledge (e.g., innate data characteristics and physical rules), or external knowledge checking (e.g., searching and reasoning over a knowledge graph or a database) to determine whether the data input is anomalous. Here, the anomaly encompasses different robustness challenges, such as adversarial examples and out-of-distribution samples.
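The "similarity to training data" idea admits a very compact sketch: score an input by its standardized distance to the training-feature centroid and flag it when the score exceeds a threshold calibrated on the training set itself. The features below are synthetic stand-ins for a model's internal representations; a full-covariance (Mahalanobis) distance is a common refinement of this per-dimension version.

```python
# A minimal distance-based anomaly score over internal feature vectors.
# Synthetic features stand in for a model's learned representations.
import numpy as np

rng = np.random.default_rng(0)
train_feats = rng.normal(0.0, 1.0, size=(2000, 16))  # in-distribution features
mu = train_feats.mean(axis=0)
sigma = train_feats.std(axis=0) + 1e-8

def anomaly_score(x):
    # Mean squared standardized deviation from the training centroid.
    return (((x - mu) / sigma) ** 2).mean(axis=-1)

# Calibrate the flagging threshold on the training features themselves.
threshold = np.quantile(anomaly_score(train_feats), 0.99)
in_dist = rng.normal(0.0, 1.0, size=(5, 16))
shifted = rng.normal(4.0, 1.0, size=(5, 16))          # anomalous inputs
print("in-dist scores:", anomaly_score(in_dist).round(2))
print("shifted scores:", anomaly_score(shifted).round(2))
print("threshold:", round(float(threshold), 2))
```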
For instance, the innate temporal dependency in audio data is used in [12] to detect audio adversarial examples for automatic speech recognition, and many distance metrics based on the internal data representations extracted from the model have been proposed to detect out-of-distribution samples [13]. In addition to filtering out anomalous inputs, the inspector can further take mitigation strategies to update the model and strengthen its robustness against anomalous inputs. For instance, the self-progressing robust training method proposed in [14] can further strengthen a trained model for enhanced adversarial robustness by instructing the model to mitigate the self-discovered ambiguity during model finetuning.

4.2. Adversarial Machine Learning for Robustness

Cars like the Mars Exploration Rovers can successfully execute the assigned task on new and unseen terrain because they were developed in comprehensive simulated environments.
For AI models, one can incorporate the failure examples generated from model inspection tools to improve robustness in unseen and even adversarial environments. This methodology is known as adversarial machine learning: introducing a virtual adversary into the AI lifecycle to help create better and more robust models. In the development phase, the role of the virtual adversary is to simulate out-of-distribution or worst-case scenarios and generate new challenging cases to help the model generalize better in unseen and adversarial environments. In the deployment phase, the role of the virtual adversary is to employ proactive robustness evaluation and risk discovery, in order to prevent real damage and negative impacts. One typical example is adversarial training [15], which exploits self-generated adversarial examples during model training to strengthen adversarial robustness against adversarial inputs in the deployment phase. We refer readers to [16] for recent advances in adversarial machine learning for AI robustness.
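A minimal instance of adversarial training in the spirit of [15] fits in a few lines: each gradient step trains on FGSM-perturbed inputs rather than clean ones. The sketch below uses a logistic-regression model on synthetic 2-D data; the dataset, epsilon, and step size are all assumptions chosen for illustration.

```python
# Adversarial training sketch: a linear classifier trained on FGSM-perturbed
# inputs. Synthetic data; epsilon and learning rate are illustrative.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(400, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # linearly separable labels

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

w, b, eps, lr = np.zeros(2), 0.0, 0.1, 0.5
for _ in range(300):
    p = sigmoid(X @ w + b)
    # FGSM: for logistic loss the input gradient is (p - y) * w, so the
    # worst-case L_inf perturbation is eps times the sign of that gradient.
    X_adv = X + eps * np.sign((p - y)[:, None] * w[None, :])
    p_adv = sigmoid(X_adv @ w + b)
    w -= lr * X_adv.T @ (p_adv - y) / len(y)
    b -= lr * float(np.mean(p_adv - y))

# Robust accuracy: evaluate the trained model on FGSM-perturbed inputs.
p = sigmoid(X @ w + b)
X_eval = X + eps * np.sign((p - y)[:, None] * w[None, :])
robust_acc = float((((X_eval @ w + b) > 0) == y.astype(bool)).mean())
print(f"robust accuracy under eps={eps}: {robust_acc:.2f}")
```

Only points within an eps-margin of the decision boundary remain vulnerable after training; stronger multi-step attacks (e.g., PGD) follow the same pattern with an inner loop.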
4.3. Roadmap towards the levels of AI robustness

Inspired by the definitions of the six levels of driving automation for autonomous vehicles,⁷ we define six levels of AI robustness to facilitate technical progress tracing, risk quantification and inspection, model auditing, and standardization. Table 2 compares the defined levels for vehicle autonomy and AI robustness, respectively. The level of robustness quantifies the progress in the soundness of machine intelligence for robustness. As the level increases, it signifies the practice and guarantee of robustness in a more practical and comprehensive manner. For AI robustness, an increased level means broader coverage of robustness risks under consideration.

⁷ https://www.sae.org/standards/content/j3016_202104/

Table 2. Comparisons between the levels of vehicle autonomy versus AI robustness.

Level | Vehicle Autonomy | AI Robustness
0 | no driving automation | no robustness (standard training)
1 | driver assistance | generalization under distribution shifts
2 | partial driving automation | robustness against single risk
3 | conditional driving automation | robustness against multiple risks
4 | high driving automation | universal robustness to known risks
5 | full driving automation | human-aligned and augmented robustness
We believe formalizing the levels of AI robustness can be useful for the discussion and practice of AI standardization related to robustness, security, and safety, such as ISO/IEC JTC 1/WG 13 on Trustworthiness (https://www.iso.org/committee/45020.html) and ISO/TC 22/SC 32/WG 14 on Safety and Artificial Intelligence (https://standards.iteh.ai/catalog/tc/iso/6ec701ad-7678-442d-b186-a84b9ba2bbdf/iso-tc-22-sc-32).

Level 0 refers to the original robustness obtained from a standard model training process without any risk mitigation. Level 1 concerns the generalization capability on naturally occurring shifted data distributions, such as maintaining robust predictions under distributional changes caused by spurious features that are irrelevant to the actual semantic context (e.g., classifying traffic signs against varying sky backgrounds). Level 2 considers the worst-case robustness against a single risk (e.g., adversarial examples), and Level 3 extends this to multiple risks, such as multi-objective (but selected) robustness to adversarial examples, common data corruptions, and spurious correlations [17]. Level 4 guarantees universal robustness to all known risks, where universal robustness means joint effectiveness against all known robustness risks. Finally, Level 5 aligns robustness with human-centered values and user feedback, and it can automatically augment new robustness capabilities that are complementary to existing robustness requirements.

Depending on the requirements (e.g., legal regulation) and the contexts of the applications, different levels can be necessitated as prerequisites before deployment. For example, some high-risk AI applications should pass the criteria of higher levels, similar to the requirements for different driving automation conditions (e.g., driving on highways versus in urban environments). It is worth noting that the assessment of Level-1 robustness can likely be accomplished by static evaluation on a representative dataset or benchmark. However, moving to Level 2 and above, the validation of worst-case robustness performance also requires model intervention, such as active model scanning and probing to find failure cases.
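Active probing for worst-case failures (Level 2 and above) can be illustrated with a one-step FGSM-style probe in the spirit of [6]. The sketch below uses a toy logistic-regression classifier as a stand-in for a deployed model; all function names and the scanning loop are illustrative assumptions, not an implementation from the paper.

```python
import numpy as np

def fgsm_probe(w, b, x, y, eps):
    """One-step FGSM probe on a logistic-regression classifier.

    Returns a worst-case perturbed input within an L-infinity ball of
    radius eps around x, ascending the logistic loss.
    """
    z = w @ x + b
    p = 1.0 / (1.0 + np.exp(-z))   # predicted probability of class 1
    grad_x = (p - y) * w           # d(logistic loss)/dx
    return x + eps * np.sign(grad_x)

def scan(w, b, dataset, eps):
    """Model scanning: flag inputs whose prediction flips under the probe."""
    failures = []
    for x, y in dataset:
        x_adv = fgsm_probe(w, b, x, y, eps)
        pred = int(w @ x + b > 0)
        pred_adv = int(w @ x_adv + b > 0)
        if pred == y and pred_adv != y:  # correct before, wrong after
            failures.append((x, x_adv))
    return failures
```

Inputs flagged by such a scan are concrete failure cases that a static benchmark evaluation (sufficient for Level 1) would not surface.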
Moreover, the AI model inspector takes proactive steps to detect and mitigate potential robustness risks, which differs from existing frameworks such as Factsheets [18], Model Cards [19], and Datasheets for Datasets [20], which only employ passive model characterization and specification. Finally, in addition to maintenance for AI, one can also adopt AI to improve maintenance, such as predictive maintenance that applies preventive care to AI models based on historical records and risk forecasting.

5. Concluding Remarks

This article discusses a novel maintenance framework for robustness in AI technology, based on analogies to the development and deployment of car models. To instill and improve trustworthiness in the AI lifecycle, we propose an automated and scalable solution based on the principle of an AI model inspector for detecting and mitigating potential robustness risks. Inspired by vehicle autonomy, we also define AI robustness levels for formalizing, evaluating, standardizing, and regulating risk-proof AI models.
As AI technology is transforming our life, society, and environment with greater breadth and depth, and at a faster speed, than cars, we believe the quest for AI maintenance is imminent and necessary. Beyond robustness, the AI model inspector framework can also be extended to incorporate other dimensions of trustworthy AI, such as fairness, explainability, privacy, accountability, and uncertainty quantification.

REFERENCES

1. P.-Y. Chen and S. Liu, "Holistic adversarial robustness of deep learning models," Proceedings of the AAAI Conference on Artificial Intelligence, 2023.
2. R. S. S. Kumar, M. Nyström, J. Lambert, A. Marshall, M. Goertzel, A. Comissoneru, M. Swann, and S. Xia, "Adversarial machine learning-industry perspectives," in 2020 IEEE Security and Privacy Workshops (SPW), 2020, pp. 69–75.
3. M. Jagielski, A. Oprea, B. Biggio, C. Liu, C. Nita-Rotaru, and B. Li, "Manipulating machine learning: Poisoning attacks and countermeasures for regression learning," in IEEE Symposium on Security and Privacy, 2018, pp. 19–35.
4. T. Gu, K. Liu, B. Dolan-Gavitt, and S. Garg, "BadNets: Evaluating backdooring attacks on deep neural networks," IEEE Access, vol. 7, pp. 47230–47244, 2019.
5. C. Xie, K. Huang, P.-Y. Chen, and B. Li, "DBA: Distributed backdoor attacks against federated learning," in International Conference on Learning Representations, 2020.
6. I. J. Goodfellow, J. Shlens, and C. Szegedy, "Explaining and harnessing adversarial examples," International Conference on Learning Representations, 2015.
7. P.-Y. Chen, H. Zhang, Y. Sharma, J. Yi, and C.-J. Hsieh, "ZOO: Zeroth order optimization based black-box attacks to deep neural networks without training substitute models," in ACM Workshop on Artificial Intelligence and Security, 2017, pp. 15–26.
8. R. Bommasani, D. A. Hudson, E. Adeli, R. Altman, S. Arora, S. von Arx, M. S. Bernstein, J. Bohg, A. Bosselut, E. Brunskill et al., "On the opportunities and risks of foundation models," arXiv preprint arXiv:2108.07258, 2021.
9. T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, S. Agarwal, A. Herbert-Voss, G. Krueger, T. Henighan, R. Child, A. Ramesh, D. Ziegler, J. Wu, C. Winter, C. Hesse, M. Chen, E. Sigler, M. Litwin, S. Gray, B. Chess, J. Clark, C. Berner, S. McCandlish, A. Radford, I. Sutskever, and D. Amodei, "Language models are few-shot learners," in Advances in Neural Information Processing Systems, vol. 33, 2020, pp. 1877–1901.
10. R. Wang, G. Zhang, S. Liu, P.-Y. Chen, J. Xiong, and M. Wang, "Practical detection of trojan neural networks: Data-limited and data-free cases," in European Conference on Computer Vision, 2020, pp. 222–238.
11. P. Zhao, P.-Y. Chen, P. Das, K. N. Ramamurthy, and X. Lin, "Bridging mode connectivity in loss landscapes and adversarial robustness," in International Conference on Learning Representations, 2020.
12. Z. Yang, B. Li, P.-Y. Chen, and D. Song, "Characterizing audio adversarial examples using temporal dependency," International Conference on Learning Representations, 2019.
13. J. Yang, K. Zhou, Y. Li, and Z. Liu, "Generalized out-of-distribution detection: A survey," arXiv preprint arXiv:2110.11334, 2021.
14. M. Cheng, P.-Y. Chen, S. Liu, S. Chang, C.-J. Hsieh, and P. Das, "Self-progressing robust training," Proceedings of the AAAI Conference on Artificial Intelligence, 2021.
15. A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu, "Towards deep learning models resistant to adversarial attacks," International Conference on Learning Representations, 2018.
16. P.-Y. Chen and C.-J. Hsieh, Adversarial Robustness for Machine Learning. Elsevier, 2022.
17. S. Paul and P.-Y. Chen, "Vision transformers are robust learners," in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 36, no. 2, 2022, pp. 2071–2081.
18. M. Arnold, R. K. Bellamy, M. Hind, S. Houde, S. Mehta, A. Mojsilović, R.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE1T4oBgHgl3EQfRgML/content/2301.03052v1.pdf'} +page_content=' Nair, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE1T4oBgHgl3EQfRgML/content/2301.03052v1.pdf'} +page_content=' N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE1T4oBgHgl3EQfRgML/content/2301.03052v1.pdf'} +page_content=' Ramamurthy, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE1T4oBgHgl3EQfRgML/content/2301.03052v1.pdf'} +page_content=' Olteanu, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE1T4oBgHgl3EQfRgML/content/2301.03052v1.pdf'} +page_content=' Piorkowski et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE1T4oBgHgl3EQfRgML/content/2301.03052v1.pdf'} +page_content=', “Factsheets: Increasing trust in ai services through supplier’s declarations of conformity,” IBM Journal of Research and Development, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE1T4oBgHgl3EQfRgML/content/2301.03052v1.pdf'} +page_content=' 63, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE1T4oBgHgl3EQfRgML/content/2301.03052v1.pdf'} +page_content=' 4/5, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE1T4oBgHgl3EQfRgML/content/2301.03052v1.pdf'} +page_content=' 6–1, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE1T4oBgHgl3EQfRgML/content/2301.03052v1.pdf'} +page_content=' 19.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE1T4oBgHgl3EQfRgML/content/2301.03052v1.pdf'} +page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE1T4oBgHgl3EQfRgML/content/2301.03052v1.pdf'} +page_content=' Mitchell, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE1T4oBgHgl3EQfRgML/content/2301.03052v1.pdf'} +page_content=' Wu, A.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE1T4oBgHgl3EQfRgML/content/2301.03052v1.pdf'} +page_content=' Zaldivar, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE1T4oBgHgl3EQfRgML/content/2301.03052v1.pdf'} +page_content=' Barnes, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE1T4oBgHgl3EQfRgML/content/2301.03052v1.pdf'} +page_content=' Vasser- man, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE1T4oBgHgl3EQfRgML/content/2301.03052v1.pdf'} +page_content=' Hutchinson, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE1T4oBgHgl3EQfRgML/content/2301.03052v1.pdf'} +page_content=' Spitzer, I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE1T4oBgHgl3EQfRgML/content/2301.03052v1.pdf'} +page_content=' D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE1T4oBgHgl3EQfRgML/content/2301.03052v1.pdf'} +page_content=' Raji, and T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE1T4oBgHgl3EQfRgML/content/2301.03052v1.pdf'} +page_content=' Gebru, “Model cards for model reporting,” in Proceedings of the conference on fairness, accountability, and trans- parency, 2019, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE1T4oBgHgl3EQfRgML/content/2301.03052v1.pdf'} +page_content=' 220–229.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE1T4oBgHgl3EQfRgML/content/2301.03052v1.pdf'} +page_content=' 20.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE1T4oBgHgl3EQfRgML/content/2301.03052v1.pdf'} +page_content=' T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE1T4oBgHgl3EQfRgML/content/2301.03052v1.pdf'} +page_content=' Gebru, J.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE1T4oBgHgl3EQfRgML/content/2301.03052v1.pdf'} +page_content=' Morgenstern, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE1T4oBgHgl3EQfRgML/content/2301.03052v1.pdf'} +page_content=' Vecchione, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE1T4oBgHgl3EQfRgML/content/2301.03052v1.pdf'} +page_content=' W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE1T4oBgHgl3EQfRgML/content/2301.03052v1.pdf'} +page_content=' Vaughan, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE1T4oBgHgl3EQfRgML/content/2301.03052v1.pdf'} +page_content=' Wallach, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE1T4oBgHgl3EQfRgML/content/2301.03052v1.pdf'} +page_content=' D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE1T4oBgHgl3EQfRgML/content/2301.03052v1.pdf'} +page_content=' Iii, and K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE1T4oBgHgl3EQfRgML/content/2301.03052v1.pdf'} +page_content=' Crawford, “Datasheets for datasets,” Communications of the ACM, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE1T4oBgHgl3EQfRgML/content/2301.03052v1.pdf'} +page_content=' 64, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE1T4oBgHgl3EQfRgML/content/2301.03052v1.pdf'} +page_content=' 12, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE1T4oBgHgl3EQfRgML/content/2301.03052v1.pdf'} +page_content=' 86–92, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE1T4oBgHgl3EQfRgML/content/2301.03052v1.pdf'} +page_content=' 9' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE1T4oBgHgl3EQfRgML/content/2301.03052v1.pdf'}